00:00:00.000 Started by upstream project "autotest-per-patch" build number 130921 00:00:00.000 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.067 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.067 The recommended git tool is: git 00:00:00.068 using credential 00000000-0000-0000-0000-000000000002 00:00:00.069 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.122 Fetching changes from the remote Git repository 00:00:00.124 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.198 Using shallow fetch with depth 1 00:00:00.198 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.198 > git --version # timeout=10 00:00:00.249 > git --version # 'git version 2.39.2' 00:00:00.249 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.281 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.281 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.445 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.458 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.472 Checking out Revision bc56972291bf21b4d2a602b495a165146a8d67a1 (FETCH_HEAD) 00:00:06.472 > git config core.sparsecheckout # timeout=10 00:00:06.486 > git read-tree -mu HEAD # timeout=10 00:00:06.505 > git checkout -f bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=5 00:00:06.526 Commit message: "jenkins/jjb-config: Remove extendedChoice from ipxe-test-images" 00:00:06.526 > git rev-list --no-walk bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=10 00:00:06.608 [Pipeline] Start of Pipeline 00:00:06.620 [Pipeline] library 00:00:06.621 Loading library shm_lib@master 00:00:06.622 Library shm_lib@master is cached. Copying from home. 00:00:06.637 [Pipeline] node 00:00:06.646 Running on GP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:06.648 [Pipeline] { 00:00:06.658 [Pipeline] catchError 00:00:06.659 [Pipeline] { 00:00:06.672 [Pipeline] wrap 00:00:06.681 [Pipeline] { 00:00:06.688 [Pipeline] stage 00:00:06.690 [Pipeline] { (Prologue) 00:00:06.895 [Pipeline] sh 00:00:07.182 + logger -p user.info -t JENKINS-CI 00:00:07.201 [Pipeline] echo 00:00:07.202 Node: GP8 00:00:07.211 [Pipeline] sh 00:00:07.515 [Pipeline] setCustomBuildProperty 00:00:07.527 [Pipeline] echo 00:00:07.529 Cleanup processes 00:00:07.533 [Pipeline] sh 00:00:07.817 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.817 979034 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.829 [Pipeline] sh 00:00:08.115 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.115 ++ grep -v 'sudo pgrep' 00:00:08.115 ++ awk '{print $1}' 00:00:08.115 + sudo kill -9 00:00:08.115 + true 00:00:08.133 [Pipeline] cleanWs 00:00:08.144 [WS-CLEANUP] Deleting project workspace... 00:00:08.144 [WS-CLEANUP] Deferred wipeout is used... 
00:00:08.152 [WS-CLEANUP] done 00:00:08.156 [Pipeline] setCustomBuildProperty 00:00:08.172 [Pipeline] sh 00:00:08.459 + sudo git config --global --replace-all safe.directory '*' 00:00:08.562 [Pipeline] httpRequest 00:00:08.952 [Pipeline] echo 00:00:08.954 Sorcerer 10.211.164.101 is alive 00:00:08.963 [Pipeline] retry 00:00:08.965 [Pipeline] { 00:00:08.978 [Pipeline] httpRequest 00:00:08.982 HttpMethod: GET 00:00:08.982 URL: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:08.984 Sending request to url: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:08.995 Response Code: HTTP/1.1 200 OK 00:00:08.995 Success: Status code 200 is in the accepted range: 200,404 00:00:08.995 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:11.132 [Pipeline] } 00:00:11.151 [Pipeline] // retry 00:00:11.159 [Pipeline] sh 00:00:11.491 + tar --no-same-owner -xf jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:11.767 [Pipeline] httpRequest 00:00:12.190 [Pipeline] echo 00:00:12.191 Sorcerer 10.211.164.101 is alive 00:00:12.201 [Pipeline] retry 00:00:12.203 [Pipeline] { 00:00:12.220 [Pipeline] httpRequest 00:00:12.224 HttpMethod: GET 00:00:12.225 URL: http://10.211.164.101/packages/spdk_865972bb6a15c1fd334f34b6dad9d61ac8a1bcda.tar.gz 00:00:12.226 Sending request to url: http://10.211.164.101/packages/spdk_865972bb6a15c1fd334f34b6dad9d61ac8a1bcda.tar.gz 00:00:12.243 Response Code: HTTP/1.1 200 OK 00:00:12.244 Success: Status code 200 is in the accepted range: 200,404 00:00:12.244 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_865972bb6a15c1fd334f34b6dad9d61ac8a1bcda.tar.gz 00:00:38.071 [Pipeline] } 00:00:38.089 [Pipeline] // retry 00:00:38.097 [Pipeline] sh 00:00:38.381 + tar --no-same-owner -xf spdk_865972bb6a15c1fd334f34b6dad9d61ac8a1bcda.tar.gz 00:00:44.975 [Pipeline] sh 00:00:45.264 + git -C spdk log --oneline -n5 00:00:45.264 865972bb6 nvme: create, manage fd_group for nvme poll group 00:00:45.264 ba5b39cb2 thread: Extended options for spdk_interrupt_register 00:00:45.264 52e9db722 util: allow a fd_group to manage all its fds 00:00:45.264 6082eddb0 util: fix total fds to wait for 00:00:45.264 8ce2f3c7d util: handle events for vfio fd type 00:00:45.276 [Pipeline] } 00:00:45.290 [Pipeline] // stage 00:00:45.299 [Pipeline] stage 00:00:45.302 [Pipeline] { (Prepare) 00:00:45.319 [Pipeline] writeFile 00:00:45.335 [Pipeline] sh 00:00:45.622 + logger -p user.info -t JENKINS-CI 00:00:45.635 [Pipeline] sh 00:00:45.920 + logger -p user.info -t JENKINS-CI 00:00:45.934 [Pipeline] sh 00:00:46.220 + cat autorun-spdk.conf 00:00:46.220 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:46.220 SPDK_TEST_NVMF=1 00:00:46.220 SPDK_TEST_NVME_CLI=1 00:00:46.220 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:46.220 SPDK_TEST_NVMF_NICS=e810 00:00:46.220 SPDK_TEST_VFIOUSER=1 00:00:46.220 SPDK_RUN_UBSAN=1 00:00:46.220 NET_TYPE=phy 00:00:46.228 RUN_NIGHTLY=0 00:00:46.233 [Pipeline] readFile 00:00:46.258 [Pipeline] withEnv 00:00:46.261 [Pipeline] { 00:00:46.274 [Pipeline] sh 00:00:46.563 + set -ex 00:00:46.563 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:46.563 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:46.563 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:46.563 ++ SPDK_TEST_NVMF=1 00:00:46.563 ++ SPDK_TEST_NVME_CLI=1 00:00:46.563 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:46.563 ++ SPDK_TEST_NVMF_NICS=e810 00:00:46.563 ++ 
SPDK_TEST_VFIOUSER=1 00:00:46.563 ++ SPDK_RUN_UBSAN=1 00:00:46.563 ++ NET_TYPE=phy 00:00:46.563 ++ RUN_NIGHTLY=0 00:00:46.563 + case $SPDK_TEST_NVMF_NICS in 00:00:46.563 + DRIVERS=ice 00:00:46.563 + [[ tcp == \r\d\m\a ]] 00:00:46.563 + [[ -n ice ]] 00:00:46.563 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:46.563 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:46.563 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:00:46.563 rmmod: ERROR: Module irdma is not currently loaded 00:00:46.563 rmmod: ERROR: Module i40iw is not currently loaded 00:00:46.563 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:46.563 + true 00:00:46.563 + for D in $DRIVERS 00:00:46.563 + sudo modprobe ice 00:00:46.563 + exit 0 00:00:46.573 [Pipeline] } 00:00:46.588 [Pipeline] // withEnv 00:00:46.593 [Pipeline] } 00:00:46.609 [Pipeline] // stage 00:00:46.619 [Pipeline] catchError 00:00:46.620 [Pipeline] { 00:00:46.634 [Pipeline] timeout 00:00:46.635 Timeout set to expire in 1 hr 0 min 00:00:46.637 [Pipeline] { 00:00:46.650 [Pipeline] stage 00:00:46.652 [Pipeline] { (Tests) 00:00:46.665 [Pipeline] sh 00:00:46.952 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:46.952 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:46.952 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:46.952 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:46.952 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:46.952 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:46.952 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:46.952 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:46.952 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:46.952 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:46.952 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:00:46.952 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:46.952 + source /etc/os-release 00:00:46.952 ++ NAME='Fedora Linux' 00:00:46.952 ++ VERSION='39 (Cloud Edition)' 00:00:46.952 ++ ID=fedora 00:00:46.952 ++ VERSION_ID=39 00:00:46.952 ++ VERSION_CODENAME= 00:00:46.952 ++ PLATFORM_ID=platform:f39 00:00:46.952 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:00:46.952 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:46.952 ++ LOGO=fedora-logo-icon 00:00:46.952 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:00:46.952 ++ HOME_URL=https://fedoraproject.org/ 00:00:46.952 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:00:46.952 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:46.952 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:46.952 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:46.952 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:00:46.952 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:46.952 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:00:46.952 ++ SUPPORT_END=2024-11-12 00:00:46.952 ++ VARIANT='Cloud Edition' 00:00:46.952 ++ VARIANT_ID=cloud 00:00:46.952 + uname -a 00:00:46.952 Linux spdk-gp-08 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:00:46.952 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:00:48.329 Hugepages 00:00:48.329 node hugesize free / total 00:00:48.329 node0 1048576kB 0 / 0 00:00:48.329 node0 2048kB 0 / 0 00:00:48.329 node1 1048576kB 0 / 0 00:00:48.329 node1 2048kB 0 / 0 00:00:48.329 00:00:48.329 Type BDF Vendor Device NUMA 
Driver Device Block devices 00:00:48.329 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:00:48.329 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:00:48.329 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:00:48.329 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:00:48.329 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:00:48.329 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:00:48.329 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:00:48.329 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:00:48.329 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:00:48.329 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:00:48.329 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:00:48.329 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:00:48.329 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:00:48.329 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:00:48.329 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:00:48.329 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:00:48.589 NVMe 0000:82:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:00:48.589 + rm -f /tmp/spdk-ld-path 00:00:48.589 + source autorun-spdk.conf 00:00:48.589 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:48.589 ++ SPDK_TEST_NVMF=1 00:00:48.589 ++ SPDK_TEST_NVME_CLI=1 00:00:48.589 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:48.589 ++ SPDK_TEST_NVMF_NICS=e810 00:00:48.589 ++ SPDK_TEST_VFIOUSER=1 00:00:48.589 ++ SPDK_RUN_UBSAN=1 00:00:48.589 ++ NET_TYPE=phy 00:00:48.589 ++ RUN_NIGHTLY=0 00:00:48.589 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:48.589 + [[ -n '' ]] 00:00:48.589 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:48.589 + for M in /var/spdk/build-*-manifest.txt 00:00:48.589 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:00:48.589 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:48.589 + for M in /var/spdk/build-*-manifest.txt 00:00:48.589 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:48.589 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:48.589 + for M in /var/spdk/build-*-manifest.txt 00:00:48.589 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:48.589 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:48.589 ++ uname 00:00:48.589 + [[ Linux == \L\i\n\u\x ]] 00:00:48.589 + sudo dmesg -T 00:00:48.589 + sudo dmesg --clear 00:00:48.589 + dmesg_pid=979723 00:00:48.589 + [[ Fedora Linux == FreeBSD ]] 00:00:48.589 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:48.589 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:48.589 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:48.589 + sudo dmesg -Tw 00:00:48.589 + [[ -x /usr/src/fio-static/fio ]] 00:00:48.589 + export FIO_BIN=/usr/src/fio-static/fio 00:00:48.589 + FIO_BIN=/usr/src/fio-static/fio 00:00:48.589 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:48.589 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:00:48.589 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:48.589 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:48.589 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:48.589 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:48.589 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:48.589 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:48.589 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:48.589 Test configuration: 00:00:48.589 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:48.589 SPDK_TEST_NVMF=1 00:00:48.589 SPDK_TEST_NVME_CLI=1 00:00:48.589 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:48.589 SPDK_TEST_NVMF_NICS=e810 00:00:48.589 SPDK_TEST_VFIOUSER=1 00:00:48.589 SPDK_RUN_UBSAN=1 00:00:48.589 NET_TYPE=phy 00:00:48.589 RUN_NIGHTLY=0 18:11:17 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:00:48.589 18:11:17 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:48.589 18:11:17 -- scripts/common.sh@15 -- $ shopt -s extglob 00:00:48.589 18:11:17 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:48.589 18:11:17 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:48.589 18:11:17 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:48.589 18:11:17 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:48.589 18:11:17 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:48.589 18:11:17 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:48.589 18:11:17 -- paths/export.sh@5 -- $ export PATH 00:00:48.589 18:11:17 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:48.589 18:11:17 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:48.589 18:11:17 -- common/autobuild_common.sh@486 -- $ date +%s 00:00:48.589 18:11:17 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728403877.XXXXXX 00:00:48.589 18:11:17 -- common/autobuild_common.sh@486 -- $ 
SPDK_WORKSPACE=/tmp/spdk_1728403877.auKwKM 00:00:48.589 18:11:17 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:00:48.589 18:11:17 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:00:48.589 18:11:17 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:00:48.589 18:11:17 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:48.589 18:11:17 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:48.589 18:11:17 -- common/autobuild_common.sh@502 -- $ get_config_params 00:00:48.590 18:11:17 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:00:48.590 18:11:17 -- common/autotest_common.sh@10 -- $ set +x 00:00:48.590 18:11:17 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:00:48.590 18:11:17 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:00:48.590 18:11:17 -- pm/common@17 -- $ local monitor 00:00:48.590 18:11:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:48.590 18:11:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:48.590 18:11:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:48.590 18:11:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:48.590 18:11:17 -- pm/common@21 -- $ date +%s 00:00:48.590 18:11:17 -- pm/common@25 -- $ sleep 1 00:00:48.590 18:11:17 -- pm/common@21 -- $ date +%s 00:00:48.590 18:11:17 -- pm/common@21 -- $ date +%s 00:00:48.590 18:11:17 -- pm/common@21 -- $ date +%s 00:00:48.590 18:11:17 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728403877 00:00:48.590 18:11:17 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728403877 00:00:48.590 18:11:17 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728403877 00:00:48.590 18:11:17 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728403877 00:00:48.849 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728403877_collect-cpu-temp.pm.log 00:00:48.849 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728403877_collect-vmstat.pm.log 00:00:48.849 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728403877_collect-cpu-load.pm.log 00:00:48.849 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728403877_collect-bmc-pm.bmc.pm.log 00:00:49.787 18:11:18 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:00:49.787 18:11:18 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:49.787 18:11:18 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:49.787 18:11:18 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:49.787 18:11:18 -- spdk/autobuild.sh@16 -- $ date -u 00:00:49.787 Tue Oct 8 04:11:18 PM UTC 2024 00:00:49.787 18:11:18 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:49.787 v25.01-pre-52-g865972bb6 00:00:49.787 18:11:18 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:49.787 18:11:18 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:49.787 18:11:18 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:49.787 18:11:18 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:00:49.787 18:11:18 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:00:49.787 18:11:18 -- common/autotest_common.sh@10 -- $ set +x 00:00:49.787 ************************************ 00:00:49.787 START TEST ubsan 00:00:49.787 ************************************ 00:00:49.787 18:11:18 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:00:49.787 using ubsan 00:00:49.787 00:00:49.787 real 0m0.000s 00:00:49.787 user 0m0.000s 00:00:49.787 sys 0m0.000s 00:00:49.787 18:11:18 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:00:49.787 18:11:18 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:49.787 ************************************ 00:00:49.787 END TEST ubsan 00:00:49.787 ************************************ 00:00:49.787 18:11:18 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:49.787 18:11:18 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:49.787 18:11:18 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:49.787 18:11:18 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:00:49.788 18:11:18 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:00:49.788 18:11:18 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:00:49.788 18:11:18 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:00:49.788 18:11:18 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:00:49.788 18:11:18 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:00:49.788 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:00:49.788 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:00:50.356 Using 'verbs' RDMA provider 00:01:09.397 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:24.292 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:24.292 Creating mk/config.mk...done. 00:01:24.292 Creating mk/cc.flags.mk...done. 00:01:24.292 Type 'make' to build. 
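For reference, the configure phase recorded above can be approximated outside Jenkins with the commands below. This is a minimal sketch only: the workspace path, the configure flag set, and the fio source path are taken from the log itself, while the assumption is that the SPDK checkout already has its submodules populated.

    # sketch only; assumes an SPDK tree with submodules already fetched
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
    make -j48    # same job count that run_test passes to make below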
00:01:24.292 18:11:51 -- spdk/autobuild.sh@70 -- $ run_test make make -j48 00:01:24.292 18:11:51 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:24.292 18:11:51 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:24.292 18:11:51 -- common/autotest_common.sh@10 -- $ set +x 00:01:24.292 ************************************ 00:01:24.292 START TEST make 00:01:24.292 ************************************ 00:01:24.292 18:11:51 make -- common/autotest_common.sh@1125 -- $ make -j48 00:01:24.292 make[1]: Nothing to be done for 'all'. 00:01:25.314 The Meson build system 00:01:25.314 Version: 1.5.0 00:01:25.314 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:25.314 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:25.314 Build type: native build 00:01:25.314 Project name: libvfio-user 00:01:25.314 Project version: 0.0.1 00:01:25.314 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:25.314 C linker for the host machine: cc ld.bfd 2.40-14 00:01:25.314 Host machine cpu family: x86_64 00:01:25.314 Host machine cpu: x86_64 00:01:25.314 Run-time dependency threads found: YES 00:01:25.314 Library dl found: YES 00:01:25.314 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:25.314 Run-time dependency json-c found: YES 0.17 00:01:25.314 Run-time dependency cmocka found: YES 1.1.7 00:01:25.314 Program pytest-3 found: NO 00:01:25.314 Program flake8 found: NO 00:01:25.314 Program misspell-fixer found: NO 00:01:25.314 Program restructuredtext-lint found: NO 00:01:25.314 Program valgrind found: YES (/usr/bin/valgrind) 00:01:25.314 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:25.314 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:25.314 Compiler for C supports arguments -Wwrite-strings: YES 00:01:25.314 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:25.314 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:25.314 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:25.314 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:25.314 Build targets in project: 8 00:01:25.314 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:25.314 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:25.314 00:01:25.314 libvfio-user 0.0.1 00:01:25.314 00:01:25.314 User defined options 00:01:25.314 buildtype : debug 00:01:25.314 default_library: shared 00:01:25.314 libdir : /usr/local/lib 00:01:25.314 00:01:25.314 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:26.268 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:26.528 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:26.528 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:26.528 [3/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:26.528 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:26.528 [5/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:26.528 [6/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:26.528 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:26.528 [8/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:26.528 [9/37] Compiling C object samples/null.p/null.c.o 00:01:26.528 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:26.528 [11/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:26.528 [12/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:26.528 [13/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:26.528 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:26.791 [15/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:26.791 [16/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:26.791 [17/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:26.791 [18/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:26.791 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:26.791 [20/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:26.791 [21/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:26.791 [22/37] Compiling C object samples/server.p/server.c.o 00:01:26.791 [23/37] Compiling C object samples/client.p/client.c.o 00:01:26.791 [24/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:26.791 [25/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:26.791 [26/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:26.791 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:26.791 [28/37] Linking target samples/client 00:01:26.791 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:01:26.791 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:26.791 [31/37] Linking target test/unit_tests 00:01:27.053 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:27.053 [33/37] Linking target samples/server 00:01:27.053 [34/37] Linking target samples/shadow_ioeventfd_server 00:01:27.053 [35/37] Linking target samples/null 00:01:27.053 [36/37] Linking target samples/gpio-pci-idio-16 00:01:27.053 [37/37] Linking target samples/lspci 00:01:27.053 INFO: autodetecting backend as ninja 00:01:27.053 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
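The libvfio-user sub-build above, together with the install step that follows, is an ordinary Meson/Ninja flow. Reproduced by hand it would look roughly like the sketch below; the directories and options come from the Meson summary in the log, while the SRC/BUILD shorthand variables and the direct Meson invocation (rather than going through SPDK's own build scripts) are assumptions for illustration.

    # sketch only; the CI run drives this via SPDK's build system, not by hand
    SRC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
    BUILD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
    meson setup "$BUILD" "$SRC" --buildtype=debug -Ddefault_library=shared -Dlibdir=/usr/local/lib
    ninja -C "$BUILD"
    DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C "$BUILD"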
00:01:27.316 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:28.260 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:28.260 ninja: no work to do. 00:01:33.528 The Meson build system 00:01:33.528 Version: 1.5.0 00:01:33.528 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:33.528 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:33.528 Build type: native build 00:01:33.528 Program cat found: YES (/usr/bin/cat) 00:01:33.528 Project name: DPDK 00:01:33.528 Project version: 24.03.0 00:01:33.528 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:33.528 C linker for the host machine: cc ld.bfd 2.40-14 00:01:33.528 Host machine cpu family: x86_64 00:01:33.528 Host machine cpu: x86_64 00:01:33.528 Message: ## Building in Developer Mode ## 00:01:33.528 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:33.528 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:33.528 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:33.528 Program python3 found: YES (/usr/bin/python3) 00:01:33.528 Program cat found: YES (/usr/bin/cat) 00:01:33.528 Compiler for C supports arguments -march=native: YES 00:01:33.528 Checking for size of "void *" : 8 00:01:33.528 Checking for size of "void *" : 8 (cached) 00:01:33.528 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:01:33.528 Library m found: YES 00:01:33.528 Library numa found: YES 00:01:33.528 Has header "numaif.h" : YES 00:01:33.528 Library fdt found: NO 00:01:33.528 Library execinfo found: NO 00:01:33.528 Has header "execinfo.h" : YES 00:01:33.528 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:33.528 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:33.528 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:33.528 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:33.528 Run-time dependency openssl found: YES 3.1.1 00:01:33.528 Run-time dependency libpcap found: YES 1.10.4 00:01:33.528 Has header "pcap.h" with dependency libpcap: YES 00:01:33.528 Compiler for C supports arguments -Wcast-qual: YES 00:01:33.528 Compiler for C supports arguments -Wdeprecated: YES 00:01:33.528 Compiler for C supports arguments -Wformat: YES 00:01:33.528 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:33.528 Compiler for C supports arguments -Wformat-security: NO 00:01:33.528 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:33.528 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:33.528 Compiler for C supports arguments -Wnested-externs: YES 00:01:33.528 Compiler for C supports arguments -Wold-style-definition: YES 00:01:33.528 Compiler for C supports arguments -Wpointer-arith: YES 00:01:33.528 Compiler for C supports arguments -Wsign-compare: YES 00:01:33.528 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:33.528 Compiler for C supports arguments -Wundef: YES 00:01:33.528 Compiler for C supports arguments -Wwrite-strings: YES 00:01:33.528 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:33.528 Compiler for C supports arguments 
-Wno-packed-not-aligned: YES 00:01:33.528 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:33.528 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:33.528 Program objdump found: YES (/usr/bin/objdump) 00:01:33.528 Compiler for C supports arguments -mavx512f: YES 00:01:33.528 Checking if "AVX512 checking" compiles: YES 00:01:33.528 Fetching value of define "__SSE4_2__" : 1 00:01:33.528 Fetching value of define "__AES__" : 1 00:01:33.528 Fetching value of define "__AVX__" : 1 00:01:33.528 Fetching value of define "__AVX2__" : (undefined) 00:01:33.528 Fetching value of define "__AVX512BW__" : (undefined) 00:01:33.528 Fetching value of define "__AVX512CD__" : (undefined) 00:01:33.528 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:33.528 Fetching value of define "__AVX512F__" : (undefined) 00:01:33.528 Fetching value of define "__AVX512VL__" : (undefined) 00:01:33.528 Fetching value of define "__PCLMUL__" : 1 00:01:33.528 Fetching value of define "__RDRND__" : 1 00:01:33.528 Fetching value of define "__RDSEED__" : (undefined) 00:01:33.528 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:33.528 Fetching value of define "__znver1__" : (undefined) 00:01:33.528 Fetching value of define "__znver2__" : (undefined) 00:01:33.528 Fetching value of define "__znver3__" : (undefined) 00:01:33.528 Fetching value of define "__znver4__" : (undefined) 00:01:33.528 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:33.528 Message: lib/log: Defining dependency "log" 00:01:33.528 Message: lib/kvargs: Defining dependency "kvargs" 00:01:33.528 Message: lib/telemetry: Defining dependency "telemetry" 00:01:33.528 Checking for function "getentropy" : NO 00:01:33.528 Message: lib/eal: Defining dependency "eal" 00:01:33.528 Message: lib/ring: Defining dependency "ring" 00:01:33.528 Message: lib/rcu: Defining dependency "rcu" 00:01:33.528 Message: lib/mempool: Defining dependency "mempool" 00:01:33.528 Message: lib/mbuf: Defining dependency "mbuf" 00:01:33.528 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:33.528 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:33.528 Compiler for C supports arguments -mpclmul: YES 00:01:33.528 Compiler for C supports arguments -maes: YES 00:01:33.528 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:33.529 Compiler for C supports arguments -mavx512bw: YES 00:01:33.529 Compiler for C supports arguments -mavx512dq: YES 00:01:33.529 Compiler for C supports arguments -mavx512vl: YES 00:01:33.529 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:33.529 Compiler for C supports arguments -mavx2: YES 00:01:33.529 Compiler for C supports arguments -mavx: YES 00:01:33.529 Message: lib/net: Defining dependency "net" 00:01:33.529 Message: lib/meter: Defining dependency "meter" 00:01:33.529 Message: lib/ethdev: Defining dependency "ethdev" 00:01:33.529 Message: lib/pci: Defining dependency "pci" 00:01:33.529 Message: lib/cmdline: Defining dependency "cmdline" 00:01:33.529 Message: lib/hash: Defining dependency "hash" 00:01:33.529 Message: lib/timer: Defining dependency "timer" 00:01:33.529 Message: lib/compressdev: Defining dependency "compressdev" 00:01:33.529 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:33.529 Message: lib/dmadev: Defining dependency "dmadev" 00:01:33.529 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:33.529 Message: lib/power: Defining dependency "power" 00:01:33.529 Message: lib/reorder: Defining dependency 
"reorder" 00:01:33.529 Message: lib/security: Defining dependency "security" 00:01:33.529 Has header "linux/userfaultfd.h" : YES 00:01:33.529 Has header "linux/vduse.h" : YES 00:01:33.529 Message: lib/vhost: Defining dependency "vhost" 00:01:33.529 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:33.529 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:33.529 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:33.529 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:33.529 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:33.529 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:33.529 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:33.529 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:33.529 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:33.529 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:33.529 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:33.529 Configuring doxy-api-html.conf using configuration 00:01:33.529 Configuring doxy-api-man.conf using configuration 00:01:33.529 Program mandb found: YES (/usr/bin/mandb) 00:01:33.529 Program sphinx-build found: NO 00:01:33.529 Configuring rte_build_config.h using configuration 00:01:33.529 Message: 00:01:33.529 ================= 00:01:33.529 Applications Enabled 00:01:33.529 ================= 00:01:33.529 00:01:33.529 apps: 00:01:33.529 00:01:33.529 00:01:33.529 Message: 00:01:33.529 ================= 00:01:33.529 Libraries Enabled 00:01:33.529 ================= 00:01:33.529 00:01:33.529 libs: 00:01:33.529 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:33.529 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:33.529 cryptodev, dmadev, power, reorder, security, vhost, 00:01:33.529 00:01:33.529 Message: 00:01:33.529 =============== 00:01:33.529 Drivers Enabled 00:01:33.529 =============== 00:01:33.529 00:01:33.529 common: 00:01:33.529 00:01:33.529 bus: 00:01:33.529 pci, vdev, 00:01:33.529 mempool: 00:01:33.529 ring, 00:01:33.529 dma: 00:01:33.529 00:01:33.529 net: 00:01:33.529 00:01:33.529 crypto: 00:01:33.529 00:01:33.529 compress: 00:01:33.529 00:01:33.529 vdpa: 00:01:33.529 00:01:33.529 00:01:33.529 Message: 00:01:33.529 ================= 00:01:33.529 Content Skipped 00:01:33.529 ================= 00:01:33.529 00:01:33.529 apps: 00:01:33.529 dumpcap: explicitly disabled via build config 00:01:33.529 graph: explicitly disabled via build config 00:01:33.529 pdump: explicitly disabled via build config 00:01:33.529 proc-info: explicitly disabled via build config 00:01:33.529 test-acl: explicitly disabled via build config 00:01:33.529 test-bbdev: explicitly disabled via build config 00:01:33.529 test-cmdline: explicitly disabled via build config 00:01:33.529 test-compress-perf: explicitly disabled via build config 00:01:33.529 test-crypto-perf: explicitly disabled via build config 00:01:33.529 test-dma-perf: explicitly disabled via build config 00:01:33.529 test-eventdev: explicitly disabled via build config 00:01:33.529 test-fib: explicitly disabled via build config 00:01:33.529 test-flow-perf: explicitly disabled via build config 00:01:33.529 test-gpudev: explicitly disabled via build config 00:01:33.529 test-mldev: explicitly disabled via build config 00:01:33.529 test-pipeline: explicitly disabled via build config 00:01:33.529 test-pmd: explicitly 
disabled via build config 00:01:33.529 test-regex: explicitly disabled via build config 00:01:33.529 test-sad: explicitly disabled via build config 00:01:33.529 test-security-perf: explicitly disabled via build config 00:01:33.529 00:01:33.529 libs: 00:01:33.529 argparse: explicitly disabled via build config 00:01:33.529 metrics: explicitly disabled via build config 00:01:33.529 acl: explicitly disabled via build config 00:01:33.529 bbdev: explicitly disabled via build config 00:01:33.529 bitratestats: explicitly disabled via build config 00:01:33.529 bpf: explicitly disabled via build config 00:01:33.529 cfgfile: explicitly disabled via build config 00:01:33.529 distributor: explicitly disabled via build config 00:01:33.529 efd: explicitly disabled via build config 00:01:33.529 eventdev: explicitly disabled via build config 00:01:33.529 dispatcher: explicitly disabled via build config 00:01:33.529 gpudev: explicitly disabled via build config 00:01:33.529 gro: explicitly disabled via build config 00:01:33.529 gso: explicitly disabled via build config 00:01:33.529 ip_frag: explicitly disabled via build config 00:01:33.529 jobstats: explicitly disabled via build config 00:01:33.529 latencystats: explicitly disabled via build config 00:01:33.529 lpm: explicitly disabled via build config 00:01:33.529 member: explicitly disabled via build config 00:01:33.529 pcapng: explicitly disabled via build config 00:01:33.529 rawdev: explicitly disabled via build config 00:01:33.529 regexdev: explicitly disabled via build config 00:01:33.529 mldev: explicitly disabled via build config 00:01:33.529 rib: explicitly disabled via build config 00:01:33.529 sched: explicitly disabled via build config 00:01:33.529 stack: explicitly disabled via build config 00:01:33.529 ipsec: explicitly disabled via build config 00:01:33.529 pdcp: explicitly disabled via build config 00:01:33.529 fib: explicitly disabled via build config 00:01:33.529 port: explicitly disabled via build config 00:01:33.529 pdump: explicitly disabled via build config 00:01:33.529 table: explicitly disabled via build config 00:01:33.529 pipeline: explicitly disabled via build config 00:01:33.529 graph: explicitly disabled via build config 00:01:33.529 node: explicitly disabled via build config 00:01:33.529 00:01:33.529 drivers: 00:01:33.529 common/cpt: not in enabled drivers build config 00:01:33.529 common/dpaax: not in enabled drivers build config 00:01:33.529 common/iavf: not in enabled drivers build config 00:01:33.529 common/idpf: not in enabled drivers build config 00:01:33.529 common/ionic: not in enabled drivers build config 00:01:33.529 common/mvep: not in enabled drivers build config 00:01:33.529 common/octeontx: not in enabled drivers build config 00:01:33.529 bus/auxiliary: not in enabled drivers build config 00:01:33.529 bus/cdx: not in enabled drivers build config 00:01:33.529 bus/dpaa: not in enabled drivers build config 00:01:33.529 bus/fslmc: not in enabled drivers build config 00:01:33.529 bus/ifpga: not in enabled drivers build config 00:01:33.529 bus/platform: not in enabled drivers build config 00:01:33.529 bus/uacce: not in enabled drivers build config 00:01:33.529 bus/vmbus: not in enabled drivers build config 00:01:33.529 common/cnxk: not in enabled drivers build config 00:01:33.529 common/mlx5: not in enabled drivers build config 00:01:33.529 common/nfp: not in enabled drivers build config 00:01:33.529 common/nitrox: not in enabled drivers build config 00:01:33.529 common/qat: not in enabled drivers build config 
00:01:33.529 common/sfc_efx: not in enabled drivers build config 00:01:33.529 mempool/bucket: not in enabled drivers build config 00:01:33.529 mempool/cnxk: not in enabled drivers build config 00:01:33.529 mempool/dpaa: not in enabled drivers build config 00:01:33.529 mempool/dpaa2: not in enabled drivers build config 00:01:33.529 mempool/octeontx: not in enabled drivers build config 00:01:33.529 mempool/stack: not in enabled drivers build config 00:01:33.529 dma/cnxk: not in enabled drivers build config 00:01:33.529 dma/dpaa: not in enabled drivers build config 00:01:33.529 dma/dpaa2: not in enabled drivers build config 00:01:33.529 dma/hisilicon: not in enabled drivers build config 00:01:33.529 dma/idxd: not in enabled drivers build config 00:01:33.529 dma/ioat: not in enabled drivers build config 00:01:33.529 dma/skeleton: not in enabled drivers build config 00:01:33.529 net/af_packet: not in enabled drivers build config 00:01:33.529 net/af_xdp: not in enabled drivers build config 00:01:33.529 net/ark: not in enabled drivers build config 00:01:33.529 net/atlantic: not in enabled drivers build config 00:01:33.529 net/avp: not in enabled drivers build config 00:01:33.529 net/axgbe: not in enabled drivers build config 00:01:33.529 net/bnx2x: not in enabled drivers build config 00:01:33.529 net/bnxt: not in enabled drivers build config 00:01:33.529 net/bonding: not in enabled drivers build config 00:01:33.529 net/cnxk: not in enabled drivers build config 00:01:33.529 net/cpfl: not in enabled drivers build config 00:01:33.529 net/cxgbe: not in enabled drivers build config 00:01:33.529 net/dpaa: not in enabled drivers build config 00:01:33.529 net/dpaa2: not in enabled drivers build config 00:01:33.529 net/e1000: not in enabled drivers build config 00:01:33.529 net/ena: not in enabled drivers build config 00:01:33.529 net/enetc: not in enabled drivers build config 00:01:33.529 net/enetfec: not in enabled drivers build config 00:01:33.529 net/enic: not in enabled drivers build config 00:01:33.529 net/failsafe: not in enabled drivers build config 00:01:33.529 net/fm10k: not in enabled drivers build config 00:01:33.529 net/gve: not in enabled drivers build config 00:01:33.529 net/hinic: not in enabled drivers build config 00:01:33.529 net/hns3: not in enabled drivers build config 00:01:33.529 net/i40e: not in enabled drivers build config 00:01:33.529 net/iavf: not in enabled drivers build config 00:01:33.529 net/ice: not in enabled drivers build config 00:01:33.529 net/idpf: not in enabled drivers build config 00:01:33.529 net/igc: not in enabled drivers build config 00:01:33.529 net/ionic: not in enabled drivers build config 00:01:33.529 net/ipn3ke: not in enabled drivers build config 00:01:33.529 net/ixgbe: not in enabled drivers build config 00:01:33.529 net/mana: not in enabled drivers build config 00:01:33.530 net/memif: not in enabled drivers build config 00:01:33.530 net/mlx4: not in enabled drivers build config 00:01:33.530 net/mlx5: not in enabled drivers build config 00:01:33.530 net/mvneta: not in enabled drivers build config 00:01:33.530 net/mvpp2: not in enabled drivers build config 00:01:33.530 net/netvsc: not in enabled drivers build config 00:01:33.530 net/nfb: not in enabled drivers build config 00:01:33.530 net/nfp: not in enabled drivers build config 00:01:33.530 net/ngbe: not in enabled drivers build config 00:01:33.530 net/null: not in enabled drivers build config 00:01:33.530 net/octeontx: not in enabled drivers build config 00:01:33.530 net/octeon_ep: not in enabled 
drivers build config 00:01:33.530 net/pcap: not in enabled drivers build config 00:01:33.530 net/pfe: not in enabled drivers build config 00:01:33.530 net/qede: not in enabled drivers build config 00:01:33.530 net/ring: not in enabled drivers build config 00:01:33.530 net/sfc: not in enabled drivers build config 00:01:33.530 net/softnic: not in enabled drivers build config 00:01:33.530 net/tap: not in enabled drivers build config 00:01:33.530 net/thunderx: not in enabled drivers build config 00:01:33.530 net/txgbe: not in enabled drivers build config 00:01:33.530 net/vdev_netvsc: not in enabled drivers build config 00:01:33.530 net/vhost: not in enabled drivers build config 00:01:33.530 net/virtio: not in enabled drivers build config 00:01:33.530 net/vmxnet3: not in enabled drivers build config 00:01:33.530 raw/*: missing internal dependency, "rawdev" 00:01:33.530 crypto/armv8: not in enabled drivers build config 00:01:33.530 crypto/bcmfs: not in enabled drivers build config 00:01:33.530 crypto/caam_jr: not in enabled drivers build config 00:01:33.530 crypto/ccp: not in enabled drivers build config 00:01:33.530 crypto/cnxk: not in enabled drivers build config 00:01:33.530 crypto/dpaa_sec: not in enabled drivers build config 00:01:33.530 crypto/dpaa2_sec: not in enabled drivers build config 00:01:33.530 crypto/ipsec_mb: not in enabled drivers build config 00:01:33.530 crypto/mlx5: not in enabled drivers build config 00:01:33.530 crypto/mvsam: not in enabled drivers build config 00:01:33.530 crypto/nitrox: not in enabled drivers build config 00:01:33.530 crypto/null: not in enabled drivers build config 00:01:33.530 crypto/octeontx: not in enabled drivers build config 00:01:33.530 crypto/openssl: not in enabled drivers build config 00:01:33.530 crypto/scheduler: not in enabled drivers build config 00:01:33.530 crypto/uadk: not in enabled drivers build config 00:01:33.530 crypto/virtio: not in enabled drivers build config 00:01:33.530 compress/isal: not in enabled drivers build config 00:01:33.530 compress/mlx5: not in enabled drivers build config 00:01:33.530 compress/nitrox: not in enabled drivers build config 00:01:33.530 compress/octeontx: not in enabled drivers build config 00:01:33.530 compress/zlib: not in enabled drivers build config 00:01:33.530 regex/*: missing internal dependency, "regexdev" 00:01:33.530 ml/*: missing internal dependency, "mldev" 00:01:33.530 vdpa/ifc: not in enabled drivers build config 00:01:33.530 vdpa/mlx5: not in enabled drivers build config 00:01:33.530 vdpa/nfp: not in enabled drivers build config 00:01:33.530 vdpa/sfc: not in enabled drivers build config 00:01:33.530 event/*: missing internal dependency, "eventdev" 00:01:33.530 baseband/*: missing internal dependency, "bbdev" 00:01:33.530 gpu/*: missing internal dependency, "gpudev" 00:01:33.530 00:01:33.530 00:01:33.788 Build targets in project: 85 00:01:33.788 00:01:33.788 DPDK 24.03.0 00:01:33.788 00:01:33.788 User defined options 00:01:33.788 buildtype : debug 00:01:33.788 default_library : shared 00:01:33.788 libdir : lib 00:01:33.788 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:33.788 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:33.788 c_link_args : 00:01:33.788 cpu_instruction_set: native 00:01:33.788 disable_apps : 
test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:01:33.788 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:01:33.788 enable_docs : false 00:01:33.788 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:33.788 enable_kmods : false 00:01:33.788 max_lcores : 128 00:01:33.788 tests : false 00:01:33.788 00:01:33.788 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:34.362 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:34.362 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:34.362 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:34.362 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:34.363 [4/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:34.363 [5/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:34.363 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:34.625 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:34.625 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:34.625 [9/268] Linking static target lib/librte_kvargs.a 00:01:34.625 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:34.625 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:34.625 [12/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:34.625 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:34.625 [14/268] Linking static target lib/librte_log.a 00:01:34.625 [15/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:34.625 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:35.195 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.195 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:35.458 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:35.458 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:35.458 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:35.458 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:35.458 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:35.458 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:35.458 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:35.458 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:35.458 [27/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:35.458 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:35.458 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:35.458 [30/268] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:35.458 [31/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:35.458 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:35.458 [33/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:35.458 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:35.458 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:35.458 [36/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:35.458 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:35.458 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:35.458 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:35.458 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:35.458 [41/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:35.458 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:35.458 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:35.458 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:35.458 [45/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:35.458 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:35.458 [47/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:35.458 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:35.458 [49/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:35.458 [50/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:35.458 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:35.458 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:35.458 [53/268] Linking static target lib/librte_telemetry.a 00:01:35.458 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:35.458 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:35.458 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:35.718 [57/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:35.718 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:35.718 [59/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:35.718 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:35.718 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:35.718 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:35.718 [63/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.718 [64/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:35.718 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:35.718 [66/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:35.718 [67/268] Linking target lib/librte_log.so.24.1 00:01:35.979 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:35.979 [69/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:35.979 [70/268] Linking static target lib/librte_pci.a 00:01:36.241 [71/268] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:36.241 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:36.241 [73/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:36.241 [74/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:36.241 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:36.241 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:36.241 [77/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:36.241 [78/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:36.241 [79/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:36.241 [80/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:36.241 [81/268] Linking target lib/librte_kvargs.so.24.1 00:01:36.241 [82/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:36.502 [83/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:36.502 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:36.502 [85/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:36.502 [86/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:36.502 [87/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:36.502 [88/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:36.502 [89/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:36.502 [90/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:36.502 [91/268] Linking static target lib/librte_meter.a 00:01:36.502 [92/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:36.502 [93/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:36.502 [94/268] Linking static target lib/librte_ring.a 00:01:36.502 [95/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:36.502 [96/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:36.502 [97/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:36.502 [98/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:36.502 [99/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:36.502 [100/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:36.502 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:36.502 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:36.502 [103/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:36.502 [104/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:36.502 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:36.502 [106/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:36.502 [107/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:36.502 [108/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:36.502 [109/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.502 [110/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:36.502 [111/268] Generating symbol file 
lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:36.765 [112/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:36.765 [113/268] Linking static target lib/librte_eal.a 00:01:36.765 [114/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:36.765 [115/268] Linking static target lib/librte_rcu.a 00:01:36.765 [116/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.765 [117/268] Linking static target lib/librte_mempool.a 00:01:36.765 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:36.765 [119/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:36.765 [120/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:36.765 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:36.765 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:36.765 [123/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:36.765 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:36.765 [125/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:36.765 [126/268] Linking target lib/librte_telemetry.so.24.1 00:01:36.765 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:36.765 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:36.765 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:36.765 [130/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:36.765 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:37.036 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:37.036 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:37.036 [134/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.036 [135/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:37.036 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:37.036 [137/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.036 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:37.036 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:37.304 [140/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:37.304 [141/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:37.304 [142/268] Linking static target lib/librte_net.a 00:01:37.304 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:37.304 [144/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.304 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:37.304 [146/268] Linking static target lib/librte_cmdline.a 00:01:37.304 [147/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:37.304 [148/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:37.304 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:37.563 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:37.564 [151/268] Compiling C object 
lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:37.564 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:37.564 [153/268] Linking static target lib/librte_timer.a 00:01:37.564 [154/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:37.564 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:37.564 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:37.564 [157/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:37.564 [158/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:37.564 [159/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.564 [160/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:37.564 [161/268] Linking static target lib/librte_dmadev.a 00:01:37.823 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:37.823 [163/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:37.823 [164/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:37.823 [165/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:37.823 [166/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.823 [167/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:37.823 [168/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:37.823 [169/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:37.823 [170/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:37.823 [171/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:37.823 [172/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:38.082 [173/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:38.082 [174/268] Linking static target lib/librte_compressdev.a 00:01:38.082 [175/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.082 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:38.082 [177/268] Linking static target lib/librte_power.a 00:01:38.082 [178/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:38.082 [179/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:38.082 [180/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:38.082 [181/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:38.082 [182/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:38.082 [183/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:38.082 [184/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:38.082 [185/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:38.082 [186/268] Linking static target lib/librte_hash.a 00:01:38.082 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:38.082 [188/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:38.082 [189/268] Linking static target lib/librte_reorder.a 00:01:38.082 [190/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.082 [191/268] Compiling C 
object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:38.082 [192/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:38.341 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:38.341 [194/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.341 [195/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:38.341 [196/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:38.341 [197/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:38.341 [198/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:38.341 [199/268] Linking static target drivers/librte_bus_vdev.a 00:01:38.341 [200/268] Linking static target lib/librte_mbuf.a 00:01:38.341 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:38.341 [202/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:38.341 [203/268] Linking static target lib/librte_security.a 00:01:38.341 [204/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:38.341 [205/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:38.341 [206/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.341 [207/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.341 [208/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:38.341 [209/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:38.341 [210/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:38.599 [211/268] Linking static target drivers/librte_bus_pci.a 00:01:38.599 [212/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.599 [213/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.599 [214/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:38.599 [215/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:38.599 [216/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:38.599 [217/268] Linking static target drivers/librte_mempool_ring.a 00:01:38.599 [218/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.858 [219/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.858 [220/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:38.858 [221/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:38.858 [222/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.858 [223/268] Linking static target lib/librte_ethdev.a 00:01:38.858 [224/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.117 [225/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:39.117 [226/268] Linking static target lib/librte_cryptodev.a 00:01:40.487 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.421 [228/268] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:43.321 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.321 [230/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.321 [231/268] Linking target lib/librte_eal.so.24.1 00:01:43.321 [232/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:43.321 [233/268] Linking target lib/librte_ring.so.24.1 00:01:43.321 [234/268] Linking target lib/librte_meter.so.24.1 00:01:43.321 [235/268] Linking target lib/librte_timer.so.24.1 00:01:43.321 [236/268] Linking target lib/librte_pci.so.24.1 00:01:43.321 [237/268] Linking target lib/librte_dmadev.so.24.1 00:01:43.321 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:43.578 [239/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:43.578 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:43.578 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:43.578 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:43.578 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:43.578 [244/268] Linking target lib/librte_rcu.so.24.1 00:01:43.578 [245/268] Linking target lib/librte_mempool.so.24.1 00:01:43.578 [246/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:43.578 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:43.578 [248/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:43.839 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:43.839 [250/268] Linking target lib/librte_mbuf.so.24.1 00:01:43.839 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:43.839 [252/268] Linking target lib/librte_reorder.so.24.1 00:01:43.839 [253/268] Linking target lib/librte_compressdev.so.24.1 00:01:43.839 [254/268] Linking target lib/librte_net.so.24.1 00:01:43.839 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:01:44.096 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:44.096 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:44.096 [258/268] Linking target lib/librte_cmdline.so.24.1 00:01:44.096 [259/268] Linking target lib/librte_security.so.24.1 00:01:44.096 [260/268] Linking target lib/librte_hash.so.24.1 00:01:44.096 [261/268] Linking target lib/librte_ethdev.so.24.1 00:01:44.096 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:44.354 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:44.354 [264/268] Linking target lib/librte_power.so.24.1 00:01:47.636 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:47.636 [266/268] Linking static target lib/librte_vhost.a 00:01:48.201 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.459 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:48.459 INFO: autodetecting backend as ninja 00:01:48.459 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:02:14.994 CC lib/ut_mock/mock.o 00:02:14.994 CC lib/log/log.o 00:02:14.994 CC 
lib/log/log_flags.o 00:02:14.994 CC lib/log/log_deprecated.o 00:02:14.994 CC lib/ut/ut.o 00:02:14.994 LIB libspdk_ut.a 00:02:14.994 SO libspdk_ut.so.2.0 00:02:14.994 LIB libspdk_ut_mock.a 00:02:14.994 LIB libspdk_log.a 00:02:14.994 SO libspdk_ut_mock.so.6.0 00:02:14.994 SYMLINK libspdk_ut.so 00:02:14.994 SO libspdk_log.so.7.0 00:02:14.994 SYMLINK libspdk_ut_mock.so 00:02:14.994 SYMLINK libspdk_log.so 00:02:14.994 CC lib/ioat/ioat.o 00:02:14.994 CC lib/dma/dma.o 00:02:14.994 CXX lib/trace_parser/trace.o 00:02:14.994 CC lib/util/base64.o 00:02:14.994 CC lib/util/bit_array.o 00:02:14.994 CC lib/util/cpuset.o 00:02:14.994 CC lib/util/crc16.o 00:02:14.994 CC lib/util/crc32.o 00:02:14.994 CC lib/util/crc32c.o 00:02:14.994 CC lib/util/crc32_ieee.o 00:02:14.994 CC lib/util/crc64.o 00:02:14.994 CC lib/util/dif.o 00:02:14.994 CC lib/util/fd.o 00:02:14.994 CC lib/util/fd_group.o 00:02:14.994 CC lib/util/file.o 00:02:14.994 CC lib/util/hexlify.o 00:02:14.994 CC lib/util/iov.o 00:02:14.994 CC lib/util/math.o 00:02:14.994 CC lib/util/net.o 00:02:14.994 CC lib/util/pipe.o 00:02:14.994 CC lib/util/strerror_tls.o 00:02:14.994 CC lib/util/string.o 00:02:14.994 CC lib/util/uuid.o 00:02:14.994 CC lib/util/xor.o 00:02:14.994 CC lib/util/zipf.o 00:02:14.994 CC lib/util/md5.o 00:02:14.994 CC lib/vfio_user/host/vfio_user_pci.o 00:02:14.994 CC lib/vfio_user/host/vfio_user.o 00:02:14.994 LIB libspdk_dma.a 00:02:14.994 SO libspdk_dma.so.5.0 00:02:14.994 LIB libspdk_ioat.a 00:02:14.994 SO libspdk_ioat.so.7.0 00:02:14.994 SYMLINK libspdk_dma.so 00:02:14.994 SYMLINK libspdk_ioat.so 00:02:15.251 LIB libspdk_vfio_user.a 00:02:15.251 SO libspdk_vfio_user.so.5.0 00:02:15.251 SYMLINK libspdk_vfio_user.so 00:02:15.509 LIB libspdk_util.a 00:02:15.509 SO libspdk_util.so.10.1 00:02:15.766 LIB libspdk_trace_parser.a 00:02:15.766 SO libspdk_trace_parser.so.6.0 00:02:15.767 SYMLINK libspdk_util.so 00:02:15.767 SYMLINK libspdk_trace_parser.so 00:02:16.025 CC lib/idxd/idxd.o 00:02:16.025 CC lib/idxd/idxd_user.o 00:02:16.025 CC lib/idxd/idxd_kernel.o 00:02:16.025 CC lib/rdma_utils/rdma_utils.o 00:02:16.025 CC lib/vmd/led.o 00:02:16.025 CC lib/vmd/vmd.o 00:02:16.025 CC lib/env_dpdk/env.o 00:02:16.025 CC lib/env_dpdk/memory.o 00:02:16.025 CC lib/env_dpdk/pci.o 00:02:16.025 CC lib/conf/conf.o 00:02:16.025 CC lib/env_dpdk/init.o 00:02:16.025 CC lib/env_dpdk/threads.o 00:02:16.025 CC lib/env_dpdk/pci_ioat.o 00:02:16.025 CC lib/env_dpdk/pci_virtio.o 00:02:16.025 CC lib/env_dpdk/pci_vmd.o 00:02:16.025 CC lib/env_dpdk/pci_idxd.o 00:02:16.025 CC lib/env_dpdk/pci_event.o 00:02:16.025 CC lib/json/json_parse.o 00:02:16.025 CC lib/env_dpdk/sigbus_handler.o 00:02:16.025 CC lib/json/json_util.o 00:02:16.025 CC lib/env_dpdk/pci_dpdk.o 00:02:16.025 CC lib/json/json_write.o 00:02:16.025 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:16.025 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:16.025 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:16.025 CC lib/rdma_provider/common.o 00:02:16.346 LIB libspdk_conf.a 00:02:16.346 SO libspdk_conf.so.6.0 00:02:16.346 LIB libspdk_rdma_provider.a 00:02:16.346 SO libspdk_rdma_provider.so.6.0 00:02:16.346 LIB libspdk_rdma_utils.a 00:02:16.346 LIB libspdk_json.a 00:02:16.346 SYMLINK libspdk_conf.so 00:02:16.346 SO libspdk_rdma_utils.so.1.0 00:02:16.346 SO libspdk_json.so.6.0 00:02:16.346 SYMLINK libspdk_rdma_provider.so 00:02:16.346 SYMLINK libspdk_rdma_utils.so 00:02:16.606 SYMLINK libspdk_json.so 00:02:16.606 LIB libspdk_idxd.a 00:02:16.606 SO libspdk_idxd.so.12.1 00:02:16.606 SYMLINK libspdk_idxd.so 00:02:16.606 CC 
lib/jsonrpc/jsonrpc_server.o 00:02:16.606 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:16.606 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:16.606 CC lib/jsonrpc/jsonrpc_client.o 00:02:16.865 LIB libspdk_vmd.a 00:02:16.865 SO libspdk_vmd.so.6.0 00:02:17.125 LIB libspdk_jsonrpc.a 00:02:17.125 SYMLINK libspdk_vmd.so 00:02:17.125 SO libspdk_jsonrpc.so.6.0 00:02:17.125 SYMLINK libspdk_jsonrpc.so 00:02:17.385 CC lib/rpc/rpc.o 00:02:17.953 LIB libspdk_rpc.a 00:02:17.953 SO libspdk_rpc.so.6.0 00:02:17.953 LIB libspdk_env_dpdk.a 00:02:17.953 SYMLINK libspdk_rpc.so 00:02:18.211 SO libspdk_env_dpdk.so.15.1 00:02:18.211 CC lib/keyring/keyring.o 00:02:18.211 CC lib/keyring/keyring_rpc.o 00:02:18.211 CC lib/notify/notify.o 00:02:18.211 CC lib/trace/trace_flags.o 00:02:18.211 CC lib/trace/trace_rpc.o 00:02:18.211 CC lib/trace/trace.o 00:02:18.211 CC lib/notify/notify_rpc.o 00:02:18.211 SYMLINK libspdk_env_dpdk.so 00:02:18.471 LIB libspdk_notify.a 00:02:18.471 SO libspdk_notify.so.6.0 00:02:18.471 LIB libspdk_keyring.a 00:02:18.732 SO libspdk_keyring.so.2.0 00:02:18.732 LIB libspdk_trace.a 00:02:18.732 SYMLINK libspdk_notify.so 00:02:18.732 SO libspdk_trace.so.11.0 00:02:18.732 SYMLINK libspdk_trace.so 00:02:18.732 SYMLINK libspdk_keyring.so 00:02:18.992 CC lib/sock/sock_rpc.o 00:02:18.992 CC lib/sock/sock.o 00:02:18.992 CC lib/thread/iobuf.o 00:02:18.992 CC lib/thread/thread.o 00:02:19.253 LIB libspdk_sock.a 00:02:19.512 SO libspdk_sock.so.10.0 00:02:19.512 SYMLINK libspdk_sock.so 00:02:19.770 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:19.770 CC lib/nvme/nvme_ctrlr.o 00:02:19.770 CC lib/nvme/nvme_fabric.o 00:02:19.770 CC lib/nvme/nvme_ns_cmd.o 00:02:19.770 CC lib/nvme/nvme_ns.o 00:02:19.770 CC lib/nvme/nvme_pcie_common.o 00:02:19.770 CC lib/nvme/nvme_pcie.o 00:02:19.770 CC lib/nvme/nvme_qpair.o 00:02:19.770 CC lib/nvme/nvme.o 00:02:19.770 CC lib/nvme/nvme_quirks.o 00:02:19.770 CC lib/nvme/nvme_transport.o 00:02:19.770 CC lib/nvme/nvme_discovery.o 00:02:19.770 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:19.770 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:19.770 CC lib/nvme/nvme_tcp.o 00:02:19.770 CC lib/nvme/nvme_opal.o 00:02:19.770 CC lib/nvme/nvme_io_msg.o 00:02:19.770 CC lib/nvme/nvme_poll_group.o 00:02:19.770 CC lib/nvme/nvme_zns.o 00:02:19.770 CC lib/nvme/nvme_stubs.o 00:02:19.770 CC lib/nvme/nvme_auth.o 00:02:19.770 CC lib/nvme/nvme_cuse.o 00:02:19.770 CC lib/nvme/nvme_vfio_user.o 00:02:19.770 CC lib/nvme/nvme_rdma.o 00:02:21.146 LIB libspdk_thread.a 00:02:21.146 SO libspdk_thread.so.10.2 00:02:21.146 SYMLINK libspdk_thread.so 00:02:21.404 CC lib/fsdev/fsdev.o 00:02:21.404 CC lib/fsdev/fsdev_io.o 00:02:21.404 CC lib/blob/blobstore.o 00:02:21.404 CC lib/fsdev/fsdev_rpc.o 00:02:21.404 CC lib/blob/request.o 00:02:21.404 CC lib/blob/zeroes.o 00:02:21.404 CC lib/blob/blob_bs_dev.o 00:02:21.404 CC lib/virtio/virtio.o 00:02:21.404 CC lib/virtio/virtio_vhost_user.o 00:02:21.404 CC lib/virtio/virtio_vfio_user.o 00:02:21.404 CC lib/init/json_config.o 00:02:21.404 CC lib/virtio/virtio_pci.o 00:02:21.404 CC lib/init/subsystem.o 00:02:21.404 CC lib/accel/accel.o 00:02:21.404 CC lib/init/subsystem_rpc.o 00:02:21.404 CC lib/vfu_tgt/tgt_endpoint.o 00:02:21.404 CC lib/accel/accel_rpc.o 00:02:21.404 CC lib/init/rpc.o 00:02:21.404 CC lib/accel/accel_sw.o 00:02:21.404 CC lib/vfu_tgt/tgt_rpc.o 00:02:21.663 LIB libspdk_init.a 00:02:21.663 SO libspdk_init.so.6.0 00:02:21.663 LIB libspdk_vfu_tgt.a 00:02:21.663 SYMLINK libspdk_init.so 00:02:21.663 LIB libspdk_virtio.a 00:02:21.663 SO libspdk_vfu_tgt.so.3.0 00:02:21.663 SO libspdk_virtio.so.7.0 
00:02:21.921 SYMLINK libspdk_vfu_tgt.so 00:02:21.921 SYMLINK libspdk_virtio.so 00:02:21.921 CC lib/event/app.o 00:02:21.921 CC lib/event/reactor.o 00:02:21.921 CC lib/event/log_rpc.o 00:02:21.921 CC lib/event/app_rpc.o 00:02:21.921 CC lib/event/scheduler_static.o 00:02:22.180 LIB libspdk_fsdev.a 00:02:22.180 SO libspdk_fsdev.so.1.0 00:02:22.180 SYMLINK libspdk_fsdev.so 00:02:22.439 LIB libspdk_event.a 00:02:22.439 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:22.439 SO libspdk_event.so.15.0 00:02:22.439 SYMLINK libspdk_event.so 00:02:22.697 LIB libspdk_accel.a 00:02:22.697 SO libspdk_accel.so.16.0 00:02:22.697 LIB libspdk_nvme.a 00:02:22.697 SYMLINK libspdk_accel.so 00:02:22.697 SO libspdk_nvme.so.15.0 00:02:22.955 CC lib/bdev/bdev.o 00:02:22.955 CC lib/bdev/bdev_rpc.o 00:02:22.955 CC lib/bdev/bdev_zone.o 00:02:22.955 CC lib/bdev/part.o 00:02:22.955 CC lib/bdev/scsi_nvme.o 00:02:22.955 SYMLINK libspdk_nvme.so 00:02:23.521 LIB libspdk_fuse_dispatcher.a 00:02:23.521 SO libspdk_fuse_dispatcher.so.1.0 00:02:23.521 SYMLINK libspdk_fuse_dispatcher.so 00:02:25.425 LIB libspdk_blob.a 00:02:25.425 SO libspdk_blob.so.11.0 00:02:25.684 SYMLINK libspdk_blob.so 00:02:25.684 CC lib/lvol/lvol.o 00:02:25.943 CC lib/blobfs/blobfs.o 00:02:25.943 CC lib/blobfs/tree.o 00:02:27.855 LIB libspdk_blobfs.a 00:02:27.855 SO libspdk_blobfs.so.10.0 00:02:27.855 SYMLINK libspdk_blobfs.so 00:02:28.114 LIB libspdk_bdev.a 00:02:28.114 SO libspdk_bdev.so.17.0 00:02:28.114 LIB libspdk_lvol.a 00:02:28.114 SO libspdk_lvol.so.10.0 00:02:28.379 SYMLINK libspdk_lvol.so 00:02:28.379 SYMLINK libspdk_bdev.so 00:02:28.379 CC lib/nvmf/ctrlr.o 00:02:28.379 CC lib/nvmf/ctrlr_discovery.o 00:02:28.379 CC lib/nvmf/ctrlr_bdev.o 00:02:28.379 CC lib/nvmf/subsystem.o 00:02:28.379 CC lib/nvmf/nvmf.o 00:02:28.379 CC lib/nvmf/nvmf_rpc.o 00:02:28.379 CC lib/nvmf/transport.o 00:02:28.379 CC lib/nbd/nbd.o 00:02:28.379 CC lib/ublk/ublk.o 00:02:28.379 CC lib/nvmf/tcp.o 00:02:28.379 CC lib/nbd/nbd_rpc.o 00:02:28.379 CC lib/ublk/ublk_rpc.o 00:02:28.379 CC lib/nvmf/stubs.o 00:02:28.379 CC lib/nvmf/mdns_server.o 00:02:28.379 CC lib/nvmf/vfio_user.o 00:02:28.379 CC lib/ftl/ftl_core.o 00:02:28.379 CC lib/nvmf/rdma.o 00:02:28.379 CC lib/ftl/ftl_init.o 00:02:28.379 CC lib/nvmf/auth.o 00:02:28.379 CC lib/ftl/ftl_layout.o 00:02:28.379 CC lib/ftl/ftl_debug.o 00:02:28.379 CC lib/ftl/ftl_io.o 00:02:28.379 CC lib/ftl/ftl_sb.o 00:02:28.379 CC lib/ftl/ftl_l2p.o 00:02:28.379 CC lib/ftl/ftl_l2p_flat.o 00:02:28.379 CC lib/ftl/ftl_nv_cache.o 00:02:28.379 CC lib/ftl/ftl_band.o 00:02:28.379 CC lib/scsi/dev.o 00:02:28.379 CC lib/ftl/ftl_band_ops.o 00:02:28.379 CC lib/ftl/ftl_writer.o 00:02:28.379 CC lib/scsi/lun.o 00:02:28.379 CC lib/ftl/ftl_rq.o 00:02:28.379 CC lib/scsi/port.o 00:02:28.379 CC lib/ftl/ftl_reloc.o 00:02:28.379 CC lib/scsi/scsi.o 00:02:28.379 CC lib/ftl/ftl_l2p_cache.o 00:02:28.379 CC lib/ftl/ftl_p2l.o 00:02:28.379 CC lib/scsi/scsi_bdev.o 00:02:28.379 CC lib/scsi/scsi_pr.o 00:02:28.379 CC lib/ftl/ftl_p2l_log.o 00:02:28.379 CC lib/scsi/scsi_rpc.o 00:02:28.379 CC lib/ftl/mngt/ftl_mngt.o 00:02:28.379 CC lib/scsi/task.o 00:02:28.379 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:28.379 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:28.379 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:28.379 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:28.379 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:28.958 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:28.958 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:28.958 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:28.958 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:28.958 CC 
lib/ftl/mngt/ftl_mngt_p2l.o 00:02:28.958 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:28.958 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:28.958 CC lib/ftl/utils/ftl_conf.o 00:02:28.958 CC lib/ftl/utils/ftl_md.o 00:02:28.958 CC lib/ftl/utils/ftl_mempool.o 00:02:28.958 CC lib/ftl/utils/ftl_bitmap.o 00:02:28.958 CC lib/ftl/utils/ftl_property.o 00:02:28.958 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:28.958 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:28.958 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:28.958 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:28.958 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:28.958 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:28.958 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:29.216 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:29.216 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:29.216 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:29.216 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:29.216 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:29.216 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:29.216 CC lib/ftl/base/ftl_base_dev.o 00:02:29.216 CC lib/ftl/base/ftl_base_bdev.o 00:02:29.216 CC lib/ftl/ftl_trace.o 00:02:29.216 LIB libspdk_nbd.a 00:02:29.474 SO libspdk_nbd.so.7.0 00:02:29.474 LIB libspdk_scsi.a 00:02:29.474 SYMLINK libspdk_nbd.so 00:02:29.474 SO libspdk_scsi.so.9.0 00:02:29.474 LIB libspdk_ublk.a 00:02:29.474 SYMLINK libspdk_scsi.so 00:02:29.474 SO libspdk_ublk.so.3.0 00:02:29.733 SYMLINK libspdk_ublk.so 00:02:29.733 CC lib/iscsi/conn.o 00:02:29.733 CC lib/iscsi/init_grp.o 00:02:29.733 CC lib/vhost/vhost.o 00:02:29.733 CC lib/iscsi/iscsi.o 00:02:29.733 CC lib/vhost/vhost_rpc.o 00:02:29.733 CC lib/iscsi/param.o 00:02:29.733 CC lib/vhost/vhost_scsi.o 00:02:29.733 CC lib/vhost/vhost_blk.o 00:02:29.733 CC lib/iscsi/portal_grp.o 00:02:29.733 CC lib/vhost/rte_vhost_user.o 00:02:29.733 CC lib/iscsi/tgt_node.o 00:02:29.733 CC lib/iscsi/iscsi_subsystem.o 00:02:29.733 CC lib/iscsi/iscsi_rpc.o 00:02:29.733 CC lib/iscsi/task.o 00:02:29.990 LIB libspdk_ftl.a 00:02:29.990 SO libspdk_ftl.so.9.0 00:02:30.248 SYMLINK libspdk_ftl.so 00:02:31.630 LIB libspdk_iscsi.a 00:02:31.630 SO libspdk_iscsi.so.8.0 00:02:31.630 LIB libspdk_vhost.a 00:02:31.630 SYMLINK libspdk_iscsi.so 00:02:31.630 SO libspdk_vhost.so.8.0 00:02:31.630 SYMLINK libspdk_vhost.so 00:02:31.894 LIB libspdk_nvmf.a 00:02:31.894 SO libspdk_nvmf.so.19.0 00:02:32.154 SYMLINK libspdk_nvmf.so 00:02:32.413 CC module/vfu_device/vfu_virtio.o 00:02:32.413 CC module/vfu_device/vfu_virtio_blk.o 00:02:32.413 CC module/env_dpdk/env_dpdk_rpc.o 00:02:32.413 CC module/vfu_device/vfu_virtio_scsi.o 00:02:32.413 CC module/vfu_device/vfu_virtio_rpc.o 00:02:32.413 CC module/vfu_device/vfu_virtio_fs.o 00:02:32.413 CC module/sock/posix/posix.o 00:02:32.413 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:32.413 CC module/scheduler/gscheduler/gscheduler.o 00:02:32.413 CC module/keyring/linux/keyring.o 00:02:32.413 CC module/accel/ioat/accel_ioat.o 00:02:32.413 CC module/keyring/linux/keyring_rpc.o 00:02:32.413 CC module/accel/ioat/accel_ioat_rpc.o 00:02:32.413 CC module/accel/iaa/accel_iaa.o 00:02:32.413 CC module/accel/iaa/accel_iaa_rpc.o 00:02:32.413 CC module/fsdev/aio/fsdev_aio.o 00:02:32.413 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:32.413 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:32.413 CC module/fsdev/aio/linux_aio_mgr.o 00:02:32.413 CC module/accel/dsa/accel_dsa.o 00:02:32.413 CC module/keyring/file/keyring.o 00:02:32.413 CC module/accel/dsa/accel_dsa_rpc.o 00:02:32.413 CC module/accel/error/accel_error.o 00:02:32.413 CC module/accel/error/accel_error_rpc.o 
00:02:32.413 CC module/keyring/file/keyring_rpc.o 00:02:32.413 CC module/blob/bdev/blob_bdev.o 00:02:32.671 LIB libspdk_env_dpdk_rpc.a 00:02:32.671 SO libspdk_env_dpdk_rpc.so.6.0 00:02:32.671 LIB libspdk_keyring_linux.a 00:02:32.671 SYMLINK libspdk_env_dpdk_rpc.so 00:02:32.671 LIB libspdk_scheduler_dpdk_governor.a 00:02:32.671 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:32.671 LIB libspdk_scheduler_gscheduler.a 00:02:32.671 SO libspdk_keyring_linux.so.1.0 00:02:32.671 LIB libspdk_accel_error.a 00:02:32.671 LIB libspdk_scheduler_dynamic.a 00:02:32.671 SO libspdk_scheduler_gscheduler.so.4.0 00:02:32.671 LIB libspdk_accel_iaa.a 00:02:32.671 SO libspdk_accel_error.so.2.0 00:02:32.671 SO libspdk_scheduler_dynamic.so.4.0 00:02:32.671 LIB libspdk_keyring_file.a 00:02:32.671 SYMLINK libspdk_keyring_linux.so 00:02:32.671 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:32.930 SO libspdk_accel_iaa.so.3.0 00:02:32.930 SYMLINK libspdk_scheduler_gscheduler.so 00:02:32.930 SO libspdk_keyring_file.so.2.0 00:02:32.930 SYMLINK libspdk_accel_error.so 00:02:32.930 SYMLINK libspdk_scheduler_dynamic.so 00:02:32.930 SYMLINK libspdk_accel_iaa.so 00:02:32.930 LIB libspdk_accel_ioat.a 00:02:32.930 SYMLINK libspdk_keyring_file.so 00:02:32.930 LIB libspdk_blob_bdev.a 00:02:32.930 LIB libspdk_accel_dsa.a 00:02:32.930 SO libspdk_accel_ioat.so.6.0 00:02:32.930 SO libspdk_blob_bdev.so.11.0 00:02:32.930 SO libspdk_accel_dsa.so.5.0 00:02:32.930 SYMLINK libspdk_blob_bdev.so 00:02:32.930 SYMLINK libspdk_accel_ioat.so 00:02:32.930 SYMLINK libspdk_accel_dsa.so 00:02:33.197 LIB libspdk_vfu_device.a 00:02:33.197 SO libspdk_vfu_device.so.3.0 00:02:33.197 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:33.197 CC module/bdev/delay/vbdev_delay.o 00:02:33.197 CC module/bdev/null/bdev_null.o 00:02:33.197 CC module/bdev/gpt/gpt.o 00:02:33.197 CC module/bdev/null/bdev_null_rpc.o 00:02:33.197 CC module/bdev/gpt/vbdev_gpt.o 00:02:33.197 CC module/bdev/error/vbdev_error.o 00:02:33.197 CC module/bdev/error/vbdev_error_rpc.o 00:02:33.197 CC module/bdev/lvol/vbdev_lvol.o 00:02:33.197 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:33.197 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:33.197 CC module/bdev/malloc/bdev_malloc.o 00:02:33.197 CC module/bdev/nvme/bdev_nvme.o 00:02:33.197 CC module/bdev/nvme/nvme_rpc.o 00:02:33.197 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:33.197 CC module/bdev/nvme/bdev_mdns_client.o 00:02:33.197 CC module/bdev/nvme/vbdev_opal.o 00:02:33.197 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:33.197 CC module/blobfs/bdev/blobfs_bdev.o 00:02:33.197 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:33.197 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:33.197 CC module/bdev/aio/bdev_aio.o 00:02:33.197 CC module/bdev/aio/bdev_aio_rpc.o 00:02:33.197 CC module/bdev/passthru/vbdev_passthru.o 00:02:33.197 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:33.197 CC module/bdev/ftl/bdev_ftl.o 00:02:33.197 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:33.197 CC module/bdev/raid/bdev_raid.o 00:02:33.197 CC module/bdev/raid/bdev_raid_rpc.o 00:02:33.197 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:33.197 CC module/bdev/raid/bdev_raid_sb.o 00:02:33.197 CC module/bdev/raid/raid0.o 00:02:33.197 CC module/bdev/raid/raid1.o 00:02:33.197 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:33.197 CC module/bdev/iscsi/bdev_iscsi.o 00:02:33.197 CC module/bdev/raid/concat.o 00:02:33.197 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:33.197 CC module/bdev/split/vbdev_split.o 00:02:33.197 CC module/bdev/split/vbdev_split_rpc.o 00:02:33.197 CC 
module/bdev/virtio/bdev_virtio_scsi.o 00:02:33.197 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:33.197 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:33.456 LIB libspdk_fsdev_aio.a 00:02:33.456 SYMLINK libspdk_vfu_device.so 00:02:33.456 SO libspdk_fsdev_aio.so.1.0 00:02:33.456 SYMLINK libspdk_fsdev_aio.so 00:02:33.719 LIB libspdk_sock_posix.a 00:02:33.720 LIB libspdk_blobfs_bdev.a 00:02:33.720 SO libspdk_sock_posix.so.6.0 00:02:33.720 SO libspdk_blobfs_bdev.so.6.0 00:02:33.720 LIB libspdk_bdev_split.a 00:02:33.720 SYMLINK libspdk_blobfs_bdev.so 00:02:33.720 SYMLINK libspdk_sock_posix.so 00:02:33.720 SO libspdk_bdev_split.so.6.0 00:02:33.720 LIB libspdk_bdev_error.a 00:02:33.720 LIB libspdk_bdev_passthru.a 00:02:33.720 LIB libspdk_bdev_null.a 00:02:33.720 SO libspdk_bdev_error.so.6.0 00:02:33.720 SO libspdk_bdev_passthru.so.6.0 00:02:33.720 SO libspdk_bdev_null.so.6.0 00:02:33.720 LIB libspdk_bdev_gpt.a 00:02:33.720 SYMLINK libspdk_bdev_split.so 00:02:33.983 LIB libspdk_bdev_iscsi.a 00:02:33.983 SO libspdk_bdev_gpt.so.6.0 00:02:33.984 LIB libspdk_bdev_ftl.a 00:02:33.984 SYMLINK libspdk_bdev_error.so 00:02:33.984 LIB libspdk_bdev_zone_block.a 00:02:33.984 LIB libspdk_bdev_delay.a 00:02:33.984 SYMLINK libspdk_bdev_null.so 00:02:33.984 SO libspdk_bdev_iscsi.so.6.0 00:02:33.984 SO libspdk_bdev_ftl.so.6.0 00:02:33.984 SYMLINK libspdk_bdev_passthru.so 00:02:33.984 SO libspdk_bdev_delay.so.6.0 00:02:33.984 SO libspdk_bdev_zone_block.so.6.0 00:02:33.984 LIB libspdk_bdev_malloc.a 00:02:33.984 LIB libspdk_bdev_aio.a 00:02:33.984 SYMLINK libspdk_bdev_gpt.so 00:02:33.984 LIB libspdk_bdev_lvol.a 00:02:33.984 SO libspdk_bdev_malloc.so.6.0 00:02:33.984 SYMLINK libspdk_bdev_delay.so 00:02:33.984 SO libspdk_bdev_aio.so.6.0 00:02:33.984 SYMLINK libspdk_bdev_iscsi.so 00:02:33.984 SO libspdk_bdev_lvol.so.6.0 00:02:33.984 SYMLINK libspdk_bdev_ftl.so 00:02:33.984 SYMLINK libspdk_bdev_malloc.so 00:02:33.984 SYMLINK libspdk_bdev_zone_block.so 00:02:33.984 SYMLINK libspdk_bdev_aio.so 00:02:33.984 SYMLINK libspdk_bdev_lvol.so 00:02:34.243 LIB libspdk_bdev_virtio.a 00:02:34.243 SO libspdk_bdev_virtio.so.6.0 00:02:34.243 SYMLINK libspdk_bdev_virtio.so 00:02:35.623 LIB libspdk_bdev_raid.a 00:02:35.623 SO libspdk_bdev_raid.so.6.0 00:02:35.623 SYMLINK libspdk_bdev_raid.so 00:02:38.161 LIB libspdk_bdev_nvme.a 00:02:38.161 SO libspdk_bdev_nvme.so.7.0 00:02:38.161 SYMLINK libspdk_bdev_nvme.so 00:02:38.420 CC module/event/subsystems/iobuf/iobuf.o 00:02:38.420 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:38.420 CC module/event/subsystems/keyring/keyring.o 00:02:38.420 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:38.420 CC module/event/subsystems/sock/sock.o 00:02:38.420 CC module/event/subsystems/vmd/vmd.o 00:02:38.420 CC module/event/subsystems/fsdev/fsdev.o 00:02:38.420 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:38.420 CC module/event/subsystems/scheduler/scheduler.o 00:02:38.420 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:38.420 LIB libspdk_event_keyring.a 00:02:38.420 LIB libspdk_event_fsdev.a 00:02:38.420 LIB libspdk_event_vmd.a 00:02:38.420 LIB libspdk_event_sock.a 00:02:38.420 SO libspdk_event_keyring.so.1.0 00:02:38.420 SO libspdk_event_fsdev.so.1.0 00:02:38.420 SO libspdk_event_sock.so.5.0 00:02:38.420 LIB libspdk_event_vhost_blk.a 00:02:38.420 LIB libspdk_event_vfu_tgt.a 00:02:38.420 SO libspdk_event_vmd.so.6.0 00:02:38.420 LIB libspdk_event_scheduler.a 00:02:38.679 SO libspdk_event_vfu_tgt.so.3.0 00:02:38.679 SO libspdk_event_vhost_blk.so.3.0 00:02:38.679 SYMLINK libspdk_event_keyring.so 
00:02:38.679 LIB libspdk_event_iobuf.a 00:02:38.679 SYMLINK libspdk_event_fsdev.so 00:02:38.679 SYMLINK libspdk_event_sock.so 00:02:38.679 SO libspdk_event_scheduler.so.4.0 00:02:38.679 SYMLINK libspdk_event_vfu_tgt.so 00:02:38.679 SYMLINK libspdk_event_vhost_blk.so 00:02:38.679 SO libspdk_event_iobuf.so.3.0 00:02:38.679 SYMLINK libspdk_event_vmd.so 00:02:38.679 SYMLINK libspdk_event_scheduler.so 00:02:38.679 SYMLINK libspdk_event_iobuf.so 00:02:38.938 CC module/event/subsystems/accel/accel.o 00:02:39.197 LIB libspdk_event_accel.a 00:02:39.197 SO libspdk_event_accel.so.6.0 00:02:39.455 SYMLINK libspdk_event_accel.so 00:02:39.714 CC module/event/subsystems/bdev/bdev.o 00:02:39.981 LIB libspdk_event_bdev.a 00:02:39.981 SO libspdk_event_bdev.so.6.0 00:02:39.982 SYMLINK libspdk_event_bdev.so 00:02:40.245 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:40.245 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:40.245 CC module/event/subsystems/nbd/nbd.o 00:02:40.245 CC module/event/subsystems/ublk/ublk.o 00:02:40.245 CC module/event/subsystems/scsi/scsi.o 00:02:40.504 LIB libspdk_event_nbd.a 00:02:40.504 LIB libspdk_event_ublk.a 00:02:40.504 SO libspdk_event_ublk.so.3.0 00:02:40.504 SO libspdk_event_nbd.so.6.0 00:02:40.504 LIB libspdk_event_scsi.a 00:02:40.504 SO libspdk_event_scsi.so.6.0 00:02:40.504 SYMLINK libspdk_event_nbd.so 00:02:40.504 SYMLINK libspdk_event_ublk.so 00:02:40.763 LIB libspdk_event_nvmf.a 00:02:40.763 SYMLINK libspdk_event_scsi.so 00:02:40.763 SO libspdk_event_nvmf.so.6.0 00:02:40.763 SYMLINK libspdk_event_nvmf.so 00:02:41.022 CC module/event/subsystems/iscsi/iscsi.o 00:02:41.022 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:41.281 LIB libspdk_event_vhost_scsi.a 00:02:41.281 SO libspdk_event_vhost_scsi.so.3.0 00:02:41.281 LIB libspdk_event_iscsi.a 00:02:41.281 SYMLINK libspdk_event_vhost_scsi.so 00:02:41.281 SO libspdk_event_iscsi.so.6.0 00:02:41.560 SYMLINK libspdk_event_iscsi.so 00:02:41.560 SO libspdk.so.6.0 00:02:41.560 SYMLINK libspdk.so 00:02:41.925 CC app/trace_record/trace_record.o 00:02:41.925 CC test/rpc_client/rpc_client_test.o 00:02:41.925 CXX app/trace/trace.o 00:02:41.925 CC app/spdk_nvme_identify/identify.o 00:02:41.925 CC app/spdk_top/spdk_top.o 00:02:41.925 CC app/spdk_nvme_perf/perf.o 00:02:41.925 CC app/spdk_lspci/spdk_lspci.o 00:02:41.925 TEST_HEADER include/spdk/accel.h 00:02:41.925 TEST_HEADER include/spdk/accel_module.h 00:02:41.925 TEST_HEADER include/spdk/assert.h 00:02:41.925 CC app/spdk_nvme_discover/discovery_aer.o 00:02:41.925 TEST_HEADER include/spdk/barrier.h 00:02:41.925 TEST_HEADER include/spdk/base64.h 00:02:41.925 TEST_HEADER include/spdk/bdev.h 00:02:41.925 TEST_HEADER include/spdk/bdev_module.h 00:02:41.925 TEST_HEADER include/spdk/bdev_zone.h 00:02:41.925 TEST_HEADER include/spdk/bit_array.h 00:02:41.925 TEST_HEADER include/spdk/bit_pool.h 00:02:41.925 TEST_HEADER include/spdk/blob_bdev.h 00:02:41.925 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:41.925 TEST_HEADER include/spdk/blobfs.h 00:02:41.925 TEST_HEADER include/spdk/blob.h 00:02:41.925 TEST_HEADER include/spdk/conf.h 00:02:41.925 TEST_HEADER include/spdk/config.h 00:02:41.925 TEST_HEADER include/spdk/cpuset.h 00:02:41.925 TEST_HEADER include/spdk/crc16.h 00:02:41.925 TEST_HEADER include/spdk/crc32.h 00:02:41.925 TEST_HEADER include/spdk/crc64.h 00:02:41.925 TEST_HEADER include/spdk/dif.h 00:02:41.925 TEST_HEADER include/spdk/dma.h 00:02:41.925 TEST_HEADER include/spdk/endian.h 00:02:41.925 TEST_HEADER include/spdk/env_dpdk.h 00:02:41.925 TEST_HEADER include/spdk/event.h 
00:02:41.925 TEST_HEADER include/spdk/env.h 00:02:41.925 TEST_HEADER include/spdk/fd_group.h 00:02:41.925 TEST_HEADER include/spdk/file.h 00:02:41.925 TEST_HEADER include/spdk/fd.h 00:02:41.925 TEST_HEADER include/spdk/fsdev.h 00:02:41.925 TEST_HEADER include/spdk/fsdev_module.h 00:02:41.925 TEST_HEADER include/spdk/ftl.h 00:02:41.925 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:41.925 TEST_HEADER include/spdk/gpt_spec.h 00:02:41.925 TEST_HEADER include/spdk/hexlify.h 00:02:41.925 TEST_HEADER include/spdk/histogram_data.h 00:02:41.925 TEST_HEADER include/spdk/idxd_spec.h 00:02:41.925 TEST_HEADER include/spdk/idxd.h 00:02:41.925 TEST_HEADER include/spdk/init.h 00:02:41.925 TEST_HEADER include/spdk/ioat_spec.h 00:02:41.925 TEST_HEADER include/spdk/ioat.h 00:02:41.925 TEST_HEADER include/spdk/json.h 00:02:41.925 TEST_HEADER include/spdk/iscsi_spec.h 00:02:41.925 TEST_HEADER include/spdk/jsonrpc.h 00:02:41.925 TEST_HEADER include/spdk/keyring_module.h 00:02:41.925 TEST_HEADER include/spdk/keyring.h 00:02:41.925 TEST_HEADER include/spdk/likely.h 00:02:41.925 TEST_HEADER include/spdk/log.h 00:02:41.925 TEST_HEADER include/spdk/lvol.h 00:02:41.925 TEST_HEADER include/spdk/md5.h 00:02:41.925 TEST_HEADER include/spdk/memory.h 00:02:41.925 TEST_HEADER include/spdk/mmio.h 00:02:41.925 TEST_HEADER include/spdk/net.h 00:02:41.925 TEST_HEADER include/spdk/nbd.h 00:02:41.925 TEST_HEADER include/spdk/notify.h 00:02:41.925 TEST_HEADER include/spdk/nvme.h 00:02:41.925 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:41.925 TEST_HEADER include/spdk/nvme_intel.h 00:02:41.925 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:41.925 TEST_HEADER include/spdk/nvme_spec.h 00:02:41.925 TEST_HEADER include/spdk/nvme_zns.h 00:02:41.925 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:41.925 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:41.925 TEST_HEADER include/spdk/nvmf.h 00:02:41.925 TEST_HEADER include/spdk/nvmf_spec.h 00:02:41.925 TEST_HEADER include/spdk/nvmf_transport.h 00:02:41.925 TEST_HEADER include/spdk/opal.h 00:02:41.925 TEST_HEADER include/spdk/opal_spec.h 00:02:41.925 TEST_HEADER include/spdk/pci_ids.h 00:02:41.925 TEST_HEADER include/spdk/pipe.h 00:02:41.925 TEST_HEADER include/spdk/queue.h 00:02:41.925 TEST_HEADER include/spdk/reduce.h 00:02:41.925 TEST_HEADER include/spdk/rpc.h 00:02:41.925 TEST_HEADER include/spdk/scsi.h 00:02:41.925 TEST_HEADER include/spdk/scheduler.h 00:02:41.925 TEST_HEADER include/spdk/scsi_spec.h 00:02:41.925 TEST_HEADER include/spdk/sock.h 00:02:41.925 TEST_HEADER include/spdk/stdinc.h 00:02:41.925 TEST_HEADER include/spdk/string.h 00:02:41.925 TEST_HEADER include/spdk/thread.h 00:02:41.925 TEST_HEADER include/spdk/trace_parser.h 00:02:41.925 TEST_HEADER include/spdk/trace.h 00:02:41.925 TEST_HEADER include/spdk/tree.h 00:02:41.925 TEST_HEADER include/spdk/ublk.h 00:02:41.925 TEST_HEADER include/spdk/util.h 00:02:41.925 TEST_HEADER include/spdk/uuid.h 00:02:41.925 TEST_HEADER include/spdk/version.h 00:02:41.925 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:41.925 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:41.925 TEST_HEADER include/spdk/vhost.h 00:02:41.925 TEST_HEADER include/spdk/vmd.h 00:02:41.925 TEST_HEADER include/spdk/xor.h 00:02:41.925 TEST_HEADER include/spdk/zipf.h 00:02:41.925 CXX test/cpp_headers/accel.o 00:02:41.925 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:41.925 CXX test/cpp_headers/accel_module.o 00:02:41.925 CXX test/cpp_headers/assert.o 00:02:41.925 CXX test/cpp_headers/barrier.o 00:02:41.925 CXX test/cpp_headers/base64.o 00:02:41.925 CXX 
test/cpp_headers/bdev.o 00:02:41.925 CC app/spdk_dd/spdk_dd.o 00:02:41.925 CXX test/cpp_headers/bdev_module.o 00:02:41.925 CXX test/cpp_headers/bdev_zone.o 00:02:41.925 CXX test/cpp_headers/bit_array.o 00:02:41.925 CXX test/cpp_headers/bit_pool.o 00:02:41.925 CXX test/cpp_headers/blob_bdev.o 00:02:41.925 CXX test/cpp_headers/blobfs_bdev.o 00:02:41.925 CXX test/cpp_headers/blobfs.o 00:02:41.925 CXX test/cpp_headers/blob.o 00:02:41.925 CXX test/cpp_headers/conf.o 00:02:41.925 CXX test/cpp_headers/config.o 00:02:41.925 CXX test/cpp_headers/cpuset.o 00:02:41.925 CXX test/cpp_headers/crc16.o 00:02:41.925 CC app/iscsi_tgt/iscsi_tgt.o 00:02:41.925 CC app/nvmf_tgt/nvmf_main.o 00:02:41.925 CC app/spdk_tgt/spdk_tgt.o 00:02:41.925 CC test/app/histogram_perf/histogram_perf.o 00:02:41.925 CC test/thread/poller_perf/poller_perf.o 00:02:41.925 CC test/app/jsoncat/jsoncat.o 00:02:41.925 CC test/app/stub/stub.o 00:02:41.925 CC examples/util/zipf/zipf.o 00:02:41.925 CC test/env/pci/pci_ut.o 00:02:41.925 CC app/fio/nvme/fio_plugin.o 00:02:41.925 CC examples/ioat/verify/verify.o 00:02:41.925 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:41.925 CC test/env/vtophys/vtophys.o 00:02:41.925 CC test/env/memory/memory_ut.o 00:02:41.925 CC examples/ioat/perf/perf.o 00:02:42.197 CC test/app/bdev_svc/bdev_svc.o 00:02:42.197 CC test/dma/test_dma/test_dma.o 00:02:42.197 CC app/fio/bdev/fio_plugin.o 00:02:42.197 LINK spdk_lspci 00:02:42.197 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:42.197 CC test/env/mem_callbacks/mem_callbacks.o 00:02:42.197 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:42.197 LINK spdk_nvme_discover 00:02:42.197 LINK rpc_client_test 00:02:42.197 LINK poller_perf 00:02:42.197 LINK interrupt_tgt 00:02:42.197 LINK histogram_perf 00:02:42.462 CXX test/cpp_headers/crc32.o 00:02:42.462 CXX test/cpp_headers/crc64.o 00:02:42.462 LINK vtophys 00:02:42.462 LINK jsoncat 00:02:42.462 LINK zipf 00:02:42.462 CXX test/cpp_headers/dif.o 00:02:42.462 CXX test/cpp_headers/dma.o 00:02:42.462 LINK spdk_trace_record 00:02:42.462 LINK stub 00:02:42.462 CXX test/cpp_headers/endian.o 00:02:42.462 CXX test/cpp_headers/env_dpdk.o 00:02:42.462 LINK nvmf_tgt 00:02:42.462 CXX test/cpp_headers/env.o 00:02:42.462 CXX test/cpp_headers/event.o 00:02:42.462 CXX test/cpp_headers/fd_group.o 00:02:42.462 LINK env_dpdk_post_init 00:02:42.462 CXX test/cpp_headers/fd.o 00:02:42.462 CXX test/cpp_headers/file.o 00:02:42.462 CXX test/cpp_headers/fsdev.o 00:02:42.462 CXX test/cpp_headers/fsdev_module.o 00:02:42.462 LINK spdk_tgt 00:02:42.462 LINK iscsi_tgt 00:02:42.462 CXX test/cpp_headers/ftl.o 00:02:42.462 CXX test/cpp_headers/fuse_dispatcher.o 00:02:42.462 LINK bdev_svc 00:02:42.462 LINK ioat_perf 00:02:42.462 CXX test/cpp_headers/gpt_spec.o 00:02:42.462 CXX test/cpp_headers/hexlify.o 00:02:42.462 LINK verify 00:02:42.462 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:42.728 CXX test/cpp_headers/histogram_data.o 00:02:42.728 CXX test/cpp_headers/idxd.o 00:02:42.728 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:42.728 CXX test/cpp_headers/idxd_spec.o 00:02:42.728 CXX test/cpp_headers/init.o 00:02:42.728 CXX test/cpp_headers/ioat.o 00:02:42.728 CXX test/cpp_headers/ioat_spec.o 00:02:42.728 CXX test/cpp_headers/iscsi_spec.o 00:02:42.728 LINK spdk_dd 00:02:42.728 CXX test/cpp_headers/json.o 00:02:42.728 CXX test/cpp_headers/jsonrpc.o 00:02:42.728 CXX test/cpp_headers/keyring.o 00:02:42.728 LINK spdk_trace 00:02:42.728 CXX test/cpp_headers/keyring_module.o 00:02:42.728 CXX test/cpp_headers/likely.o 00:02:42.728 CXX 
test/cpp_headers/log.o 00:02:42.728 CXX test/cpp_headers/lvol.o 00:02:42.999 CXX test/cpp_headers/md5.o 00:02:42.999 CXX test/cpp_headers/memory.o 00:02:42.999 CXX test/cpp_headers/mmio.o 00:02:42.999 CXX test/cpp_headers/nbd.o 00:02:42.999 CXX test/cpp_headers/net.o 00:02:42.999 LINK pci_ut 00:02:42.999 CXX test/cpp_headers/notify.o 00:02:42.999 CXX test/cpp_headers/nvme.o 00:02:42.999 CXX test/cpp_headers/nvme_intel.o 00:02:42.999 CXX test/cpp_headers/nvme_ocssd.o 00:02:42.999 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:42.999 CXX test/cpp_headers/nvme_spec.o 00:02:42.999 CXX test/cpp_headers/nvme_zns.o 00:02:42.999 CXX test/cpp_headers/nvmf_cmd.o 00:02:42.999 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:42.999 CC test/event/event_perf/event_perf.o 00:02:42.999 CXX test/cpp_headers/nvmf.o 00:02:42.999 LINK nvme_fuzz 00:02:42.999 CXX test/cpp_headers/nvmf_spec.o 00:02:42.999 CC test/event/reactor_perf/reactor_perf.o 00:02:42.999 CC test/event/reactor/reactor.o 00:02:42.999 CXX test/cpp_headers/nvmf_transport.o 00:02:42.999 LINK test_dma 00:02:42.999 CC test/event/app_repeat/app_repeat.o 00:02:42.999 LINK spdk_nvme 00:02:42.999 CC examples/sock/hello_world/hello_sock.o 00:02:43.261 CC examples/thread/thread/thread_ex.o 00:02:43.261 CXX test/cpp_headers/opal.o 00:02:43.261 CXX test/cpp_headers/opal_spec.o 00:02:43.261 CXX test/cpp_headers/pci_ids.o 00:02:43.261 CC examples/vmd/lsvmd/lsvmd.o 00:02:43.261 CC test/event/scheduler/scheduler.o 00:02:43.261 LINK spdk_bdev 00:02:43.261 CC examples/idxd/perf/perf.o 00:02:43.261 CXX test/cpp_headers/pipe.o 00:02:43.261 CXX test/cpp_headers/queue.o 00:02:43.261 CXX test/cpp_headers/reduce.o 00:02:43.261 CXX test/cpp_headers/rpc.o 00:02:43.261 CXX test/cpp_headers/scheduler.o 00:02:43.261 CXX test/cpp_headers/scsi.o 00:02:43.261 CXX test/cpp_headers/scsi_spec.o 00:02:43.261 CXX test/cpp_headers/sock.o 00:02:43.261 CXX test/cpp_headers/stdinc.o 00:02:43.261 CXX test/cpp_headers/string.o 00:02:43.261 CXX test/cpp_headers/thread.o 00:02:43.261 CC examples/vmd/led/led.o 00:02:43.261 CXX test/cpp_headers/trace.o 00:02:43.261 CXX test/cpp_headers/trace_parser.o 00:02:43.261 CXX test/cpp_headers/tree.o 00:02:43.524 LINK reactor 00:02:43.524 CXX test/cpp_headers/ublk.o 00:02:43.524 LINK reactor_perf 00:02:43.524 CXX test/cpp_headers/util.o 00:02:43.524 CXX test/cpp_headers/uuid.o 00:02:43.524 CXX test/cpp_headers/version.o 00:02:43.524 LINK event_perf 00:02:43.524 CXX test/cpp_headers/vfio_user_pci.o 00:02:43.524 CXX test/cpp_headers/vfio_user_spec.o 00:02:43.524 LINK spdk_nvme_perf 00:02:43.524 LINK mem_callbacks 00:02:43.524 LINK app_repeat 00:02:43.524 CXX test/cpp_headers/vhost.o 00:02:43.524 CXX test/cpp_headers/vmd.o 00:02:43.524 CC app/vhost/vhost.o 00:02:43.524 CXX test/cpp_headers/xor.o 00:02:43.524 CXX test/cpp_headers/zipf.o 00:02:43.524 LINK lsvmd 00:02:43.524 LINK spdk_nvme_identify 00:02:43.524 LINK spdk_top 00:02:43.524 LINK vhost_fuzz 00:02:43.524 LINK scheduler 00:02:43.524 LINK hello_sock 00:02:43.783 LINK led 00:02:43.783 LINK thread 00:02:43.783 CC test/nvme/reset/reset.o 00:02:43.783 CC test/nvme/startup/startup.o 00:02:43.783 CC test/nvme/err_injection/err_injection.o 00:02:43.783 CC test/nvme/aer/aer.o 00:02:43.783 CC test/nvme/overhead/overhead.o 00:02:43.783 CC test/nvme/e2edp/nvme_dp.o 00:02:43.783 CC test/nvme/sgl/sgl.o 00:02:43.783 CC test/nvme/reserve/reserve.o 00:02:43.783 CC test/nvme/simple_copy/simple_copy.o 00:02:43.783 CC test/nvme/boot_partition/boot_partition.o 00:02:43.783 CC test/nvme/connect_stress/connect_stress.o 
00:02:43.783 CC test/nvme/compliance/nvme_compliance.o 00:02:43.783 CC test/nvme/fdp/fdp.o 00:02:43.783 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:43.783 CC test/nvme/fused_ordering/fused_ordering.o 00:02:43.783 CC test/nvme/cuse/cuse.o 00:02:43.783 LINK vhost 00:02:43.783 LINK idxd_perf 00:02:43.783 CC test/accel/dif/dif.o 00:02:43.783 CC test/blobfs/mkfs/mkfs.o 00:02:43.783 CC test/lvol/esnap/esnap.o 00:02:44.042 LINK connect_stress 00:02:44.042 LINK reserve 00:02:44.042 CC examples/nvme/hello_world/hello_world.o 00:02:44.042 CC examples/nvme/arbitration/arbitration.o 00:02:44.042 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:44.042 CC examples/nvme/hotplug/hotplug.o 00:02:44.042 LINK doorbell_aers 00:02:44.042 CC examples/nvme/reconnect/reconnect.o 00:02:44.042 CC examples/nvme/abort/abort.o 00:02:44.042 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:44.042 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:44.042 LINK fused_ordering 00:02:44.042 LINK memory_ut 00:02:44.300 LINK startup 00:02:44.300 LINK boot_partition 00:02:44.300 LINK sgl 00:02:44.300 LINK err_injection 00:02:44.300 LINK overhead 00:02:44.300 LINK mkfs 00:02:44.300 LINK nvme_dp 00:02:44.300 LINK aer 00:02:44.300 CC examples/accel/perf/accel_perf.o 00:02:44.300 LINK reset 00:02:44.300 CC examples/blob/cli/blobcli.o 00:02:44.300 LINK simple_copy 00:02:44.300 LINK fdp 00:02:44.300 CC examples/blob/hello_world/hello_blob.o 00:02:44.300 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:44.300 LINK nvme_compliance 00:02:44.559 LINK hello_world 00:02:44.559 LINK hotplug 00:02:44.559 LINK cmb_copy 00:02:44.559 LINK pmr_persistence 00:02:44.559 LINK reconnect 00:02:44.818 LINK hello_blob 00:02:44.818 LINK abort 00:02:44.818 LINK dif 00:02:44.818 LINK arbitration 00:02:44.818 LINK hello_fsdev 00:02:44.818 LINK iscsi_fuzz 00:02:44.818 LINK accel_perf 00:02:44.818 LINK nvme_manage 00:02:45.078 LINK blobcli 00:02:45.078 CC test/bdev/bdevio/bdevio.o 00:02:45.339 CC examples/bdev/hello_world/hello_bdev.o 00:02:45.339 CC examples/bdev/bdevperf/bdevperf.o 00:02:45.600 LINK hello_bdev 00:02:45.600 LINK bdevio 00:02:46.173 LINK cuse 00:02:46.743 LINK bdevperf 00:02:47.313 CC examples/nvmf/nvmf/nvmf.o 00:02:47.574 LINK nvmf 00:02:57.570 LINK esnap 00:02:57.570 00:02:57.570 real 1m33.255s 00:02:57.570 user 12m47.942s 00:02:57.570 sys 2m47.249s 00:02:57.570 18:13:24 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:57.570 18:13:24 make -- common/autotest_common.sh@10 -- $ set +x 00:02:57.570 ************************************ 00:02:57.570 END TEST make 00:02:57.570 ************************************ 00:02:57.570 18:13:24 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:57.570 18:13:24 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:57.570 18:13:24 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:57.570 18:13:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:57.570 18:13:24 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:57.570 18:13:24 -- pm/common@44 -- $ pid=979754 00:02:57.570 18:13:24 -- pm/common@50 -- $ kill -TERM 979754 00:02:57.570 18:13:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:57.570 18:13:24 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:57.570 18:13:24 -- pm/common@44 -- $ pid=979756 00:02:57.570 18:13:24 -- pm/common@50 -- $ kill -TERM 979756 00:02:57.570 18:13:24 -- 
pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:57.570 18:13:24 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:57.570 18:13:24 -- pm/common@44 -- $ pid=979758 00:02:57.570 18:13:24 -- pm/common@50 -- $ kill -TERM 979758 00:02:57.570 18:13:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:57.570 18:13:24 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:57.570 18:13:24 -- pm/common@44 -- $ pid=979785 00:02:57.570 18:13:24 -- pm/common@50 -- $ sudo -E kill -TERM 979785 00:02:57.570 18:13:25 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:02:57.570 18:13:25 -- common/autotest_common.sh@1681 -- # lcov --version 00:02:57.570 18:13:25 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:02:57.570 18:13:25 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:02:57.570 18:13:25 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:57.570 18:13:25 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:57.570 18:13:25 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:57.570 18:13:25 -- scripts/common.sh@336 -- # IFS=.-: 00:02:57.570 18:13:25 -- scripts/common.sh@336 -- # read -ra ver1 00:02:57.570 18:13:25 -- scripts/common.sh@337 -- # IFS=.-: 00:02:57.570 18:13:25 -- scripts/common.sh@337 -- # read -ra ver2 00:02:57.570 18:13:25 -- scripts/common.sh@338 -- # local 'op=<' 00:02:57.570 18:13:25 -- scripts/common.sh@340 -- # ver1_l=2 00:02:57.570 18:13:25 -- scripts/common.sh@341 -- # ver2_l=1 00:02:57.570 18:13:25 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:57.570 18:13:25 -- scripts/common.sh@344 -- # case "$op" in 00:02:57.570 18:13:25 -- scripts/common.sh@345 -- # : 1 00:02:57.570 18:13:25 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:57.570 18:13:25 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:57.570 18:13:25 -- scripts/common.sh@365 -- # decimal 1 00:02:57.570 18:13:25 -- scripts/common.sh@353 -- # local d=1 00:02:57.570 18:13:25 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:57.570 18:13:25 -- scripts/common.sh@355 -- # echo 1 00:02:57.570 18:13:25 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:57.570 18:13:25 -- scripts/common.sh@366 -- # decimal 2 00:02:57.570 18:13:25 -- scripts/common.sh@353 -- # local d=2 00:02:57.570 18:13:25 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:57.570 18:13:25 -- scripts/common.sh@355 -- # echo 2 00:02:57.570 18:13:25 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:57.570 18:13:25 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:57.570 18:13:25 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:57.570 18:13:25 -- scripts/common.sh@368 -- # return 0 00:02:57.570 18:13:25 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:57.570 18:13:25 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:02:57.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:57.570 --rc genhtml_branch_coverage=1 00:02:57.570 --rc genhtml_function_coverage=1 00:02:57.570 --rc genhtml_legend=1 00:02:57.570 --rc geninfo_all_blocks=1 00:02:57.570 --rc geninfo_unexecuted_blocks=1 00:02:57.570 00:02:57.570 ' 00:02:57.570 18:13:25 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:02:57.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:57.570 --rc genhtml_branch_coverage=1 00:02:57.570 --rc genhtml_function_coverage=1 00:02:57.570 --rc genhtml_legend=1 00:02:57.571 --rc geninfo_all_blocks=1 00:02:57.571 --rc geninfo_unexecuted_blocks=1 00:02:57.571 00:02:57.571 ' 00:02:57.571 18:13:25 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:02:57.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:57.571 --rc genhtml_branch_coverage=1 00:02:57.571 --rc genhtml_function_coverage=1 00:02:57.571 --rc genhtml_legend=1 00:02:57.571 --rc geninfo_all_blocks=1 00:02:57.571 --rc geninfo_unexecuted_blocks=1 00:02:57.571 00:02:57.571 ' 00:02:57.571 18:13:25 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:02:57.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:57.571 --rc genhtml_branch_coverage=1 00:02:57.571 --rc genhtml_function_coverage=1 00:02:57.571 --rc genhtml_legend=1 00:02:57.571 --rc geninfo_all_blocks=1 00:02:57.571 --rc geninfo_unexecuted_blocks=1 00:02:57.571 00:02:57.571 ' 00:02:57.571 18:13:25 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:57.571 18:13:25 -- nvmf/common.sh@7 -- # uname -s 00:02:57.571 18:13:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:57.571 18:13:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:57.571 18:13:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:57.571 18:13:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:57.571 18:13:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:57.571 18:13:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:57.571 18:13:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:57.571 18:13:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:57.571 18:13:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:57.571 18:13:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:57.571 18:13:25 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:02:57.571 18:13:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:02:57.571 18:13:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:57.571 18:13:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:57.571 18:13:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:57.571 18:13:25 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:57.571 18:13:25 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:57.571 18:13:25 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:57.571 18:13:25 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:57.571 18:13:25 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:57.571 18:13:25 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:57.571 18:13:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:57.571 18:13:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:57.571 18:13:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:57.571 18:13:25 -- paths/export.sh@5 -- # export PATH 00:02:57.571 18:13:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:57.571 18:13:25 -- nvmf/common.sh@51 -- # : 0 00:02:57.571 18:13:25 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:57.571 18:13:25 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:02:57.571 18:13:25 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:57.571 18:13:25 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:57.571 18:13:25 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:57.571 18:13:25 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:57.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:57.571 18:13:25 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:57.571 18:13:25 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:57.571 18:13:25 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:57.571 18:13:25 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:57.571 18:13:25 -- spdk/autotest.sh@32 -- # uname -s 00:02:57.571 18:13:25 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:57.571 18:13:25 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:57.571 18:13:25 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
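The hostnqn/hostid pair pulled in from test/nvmf/common.sh above is what later nvme connect calls identify themselves with. A minimal sketch of how those pieces fit together, assuming nvme-cli is installed; the target address below is a placeholder, not a value from this run:

  # Derive the per-host identifiers the same way the harness does.
  NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}         # keep only the trailing UUID
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
  # A later TCP connect to the test subsystem would look roughly like:
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:testnqn "${NVME_HOST[@]}"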
00:02:57.571 18:13:25 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:57.571 18:13:25 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:57.571 18:13:25 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:57.571 18:13:25 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:57.571 18:13:25 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:57.571 18:13:25 -- spdk/autotest.sh@48 -- # udevadm_pid=1043822 00:02:57.571 18:13:25 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:57.571 18:13:25 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:57.571 18:13:25 -- pm/common@17 -- # local monitor 00:02:57.571 18:13:25 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:57.571 18:13:25 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:57.571 18:13:25 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:57.571 18:13:25 -- pm/common@21 -- # date +%s 00:02:57.571 18:13:25 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:57.571 18:13:25 -- pm/common@21 -- # date +%s 00:02:57.571 18:13:25 -- pm/common@25 -- # sleep 1 00:02:57.571 18:13:25 -- pm/common@21 -- # date +%s 00:02:57.571 18:13:25 -- pm/common@21 -- # date +%s 00:02:57.571 18:13:25 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728404005 00:02:57.571 18:13:25 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728404005 00:02:57.571 18:13:25 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728404005 00:02:57.571 18:13:25 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728404005 00:02:57.571 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728404005_collect-cpu-load.pm.log 00:02:57.571 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728404005_collect-cpu-temp.pm.log 00:02:57.571 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728404005_collect-vmstat.pm.log 00:02:57.571 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728404005_collect-bmc-pm.bmc.pm.log 00:02:58.142 18:13:26 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:58.142 18:13:26 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:58.142 18:13:26 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:58.142 18:13:26 -- common/autotest_common.sh@10 -- # set +x 00:02:58.142 18:13:26 -- spdk/autotest.sh@59 -- # create_test_list 00:02:58.142 18:13:26 -- common/autotest_common.sh@748 -- # xtrace_disable 00:02:58.142 18:13:26 -- common/autotest_common.sh@10 -- # set +x 00:02:58.142 18:13:26 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:58.142 18:13:26 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:58.142 18:13:26 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:58.142 18:13:26 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:58.142 18:13:26 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:58.142 18:13:26 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:58.142 18:13:26 -- common/autotest_common.sh@1455 -- # uname 00:02:58.142 18:13:26 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:58.142 18:13:26 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:58.142 18:13:26 -- common/autotest_common.sh@1475 -- # uname 00:02:58.142 18:13:26 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:58.142 18:13:26 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:58.142 18:13:26 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:58.142 lcov: LCOV version 1.15 00:02:58.142 18:13:26 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:16.249 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:16.249 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:54.993 18:14:20 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:54.993 18:14:20 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:54.993 18:14:20 -- common/autotest_common.sh@10 -- # set +x 00:03:54.993 18:14:20 -- spdk/autotest.sh@78 -- # rm -f 00:03:54.993 18:14:20 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:54.993 0000:82:00.0 (8086 0a54): Already using the nvme driver 00:03:54.993 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:03:54.993 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:03:54.993 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:03:54.993 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:03:54.993 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:03:54.993 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:03:54.993 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:03:54.993 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:03:54.993 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:03:54.993 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:03:54.993 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:03:54.993 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:03:54.993 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:03:54.993 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:03:54.993 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:03:54.993 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:03:54.993 18:14:22 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:03:54.993 18:14:22 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:54.993 18:14:22 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:54.993 18:14:22 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:54.993 18:14:22 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:54.993 18:14:22 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:54.993 18:14:22 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:54.993 18:14:22 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:54.993 18:14:22 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:54.993 18:14:22 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:54.993 18:14:22 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:54.993 18:14:22 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:54.993 18:14:22 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:54.993 18:14:22 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:54.993 18:14:22 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:54.993 No valid GPT data, bailing 00:03:54.993 18:14:22 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:54.993 18:14:22 -- scripts/common.sh@394 -- # pt= 00:03:54.993 18:14:22 -- scripts/common.sh@395 -- # return 1 00:03:54.993 18:14:22 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:54.993 1+0 records in 00:03:54.993 1+0 records out 00:03:54.993 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00234955 s, 446 MB/s 00:03:54.993 18:14:22 -- spdk/autotest.sh@105 -- # sync 00:03:54.993 18:14:22 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:54.993 18:14:22 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:54.993 18:14:22 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:56.898 18:14:24 -- spdk/autotest.sh@111 -- # uname -s 00:03:56.898 18:14:24 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:56.898 18:14:24 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:56.898 18:14:24 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:58.275 Hugepages 00:03:58.275 node hugesize free / total 00:03:58.275 node0 1048576kB 0 / 0 00:03:58.275 node0 2048kB 0 / 0 00:03:58.275 node1 1048576kB 0 / 0 00:03:58.275 node1 2048kB 0 / 0 00:03:58.275 00:03:58.275 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:58.275 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:03:58.275 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:03:58.275 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:03:58.275 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:03:58.275 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:03:58.275 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:03:58.275 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:03:58.275 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:03:58.275 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:03:58.276 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:03:58.276 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:03:58.276 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:03:58.276 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:03:58.276 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:03:58.276 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:03:58.276 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:03:58.276 NVMe 0000:82:00.0 8086 0a54 1 
nvme nvme0 nvme0n1 00:03:58.276 18:14:26 -- spdk/autotest.sh@117 -- # uname -s 00:03:58.276 18:14:26 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:58.276 18:14:26 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:58.276 18:14:26 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:59.653 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:59.653 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:59.653 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:59.653 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:59.913 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:59.913 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:59.913 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:59.913 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:59.913 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:59.913 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:59.914 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:59.914 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:59.914 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:59.914 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:59.914 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:59.914 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:00.852 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:04:00.852 18:14:29 -- common/autotest_common.sh@1515 -- # sleep 1 00:04:02.248 18:14:30 -- common/autotest_common.sh@1516 -- # bdfs=() 00:04:02.248 18:14:30 -- common/autotest_common.sh@1516 -- # local bdfs 00:04:02.248 18:14:30 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:04:02.248 18:14:30 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:04:02.248 18:14:30 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:02.248 18:14:30 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:02.248 18:14:30 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:02.248 18:14:30 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:02.249 18:14:30 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:02.249 18:14:30 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:04:02.249 18:14:30 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:82:00.0 00:04:02.249 18:14:30 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:03.629 Waiting for block devices as requested 00:04:03.629 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:04:03.889 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:03.889 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:04.150 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:04.150 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:04.150 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:04.408 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:04.408 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:04.408 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:04.668 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:04.668 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:04.668 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:04.928 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:04.928 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:04.928 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:05.188 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:05.188 0000:80:04.0 (8086 0e20): 
vfio-pci -> ioatdma 00:04:05.188 18:14:33 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:05.188 18:14:33 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:82:00.0 00:04:05.188 18:14:33 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:04:05.188 18:14:33 -- common/autotest_common.sh@1485 -- # grep 0000:82:00.0/nvme/nvme 00:04:05.188 18:14:33 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 00:04:05.188 18:14:33 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 ]] 00:04:05.188 18:14:33 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 00:04:05.188 18:14:33 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:04:05.188 18:14:33 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:04:05.188 18:14:33 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:04:05.188 18:14:33 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:04:05.188 18:14:33 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:05.188 18:14:33 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:05.188 18:14:33 -- common/autotest_common.sh@1529 -- # oacs=' 0xf' 00:04:05.188 18:14:33 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:05.188 18:14:33 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:05.188 18:14:33 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:04:05.188 18:14:33 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:05.188 18:14:33 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:05.188 18:14:33 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:05.188 18:14:33 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:05.188 18:14:33 -- common/autotest_common.sh@1541 -- # continue 00:04:05.188 18:14:33 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:05.188 18:14:33 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:05.188 18:14:33 -- common/autotest_common.sh@10 -- # set +x 00:04:05.188 18:14:33 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:05.188 18:14:33 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:05.188 18:14:33 -- common/autotest_common.sh@10 -- # set +x 00:04:05.188 18:14:33 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:07.104 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:07.104 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:07.104 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:07.104 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:07.104 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:07.104 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:07.104 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:07.104 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:07.104 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:07.104 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:07.104 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:07.104 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:07.104 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:07.104 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:07.104 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:07.104 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:08.044 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:04:08.044 18:14:36 -- spdk/autotest.sh@127 -- # timing_exit 
afterboot 00:04:08.044 18:14:36 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:08.044 18:14:36 -- common/autotest_common.sh@10 -- # set +x 00:04:08.044 18:14:36 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:08.044 18:14:36 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:08.044 18:14:36 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:08.044 18:14:36 -- common/autotest_common.sh@1561 -- # bdfs=() 00:04:08.044 18:14:36 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:04:08.044 18:14:36 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:04:08.044 18:14:36 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:04:08.044 18:14:36 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:08.044 18:14:36 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:08.044 18:14:36 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:08.044 18:14:36 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:08.044 18:14:36 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:08.044 18:14:36 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:08.304 18:14:36 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:04:08.304 18:14:36 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:82:00.0 00:04:08.304 18:14:36 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:08.304 18:14:36 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:82:00.0/device 00:04:08.304 18:14:36 -- common/autotest_common.sh@1564 -- # device=0x0a54 00:04:08.304 18:14:36 -- common/autotest_common.sh@1565 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:08.304 18:14:36 -- common/autotest_common.sh@1566 -- # bdfs+=($bdf) 00:04:08.304 18:14:36 -- common/autotest_common.sh@1570 -- # (( 1 > 0 )) 00:04:08.304 18:14:36 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:82:00.0 00:04:08.304 18:14:36 -- common/autotest_common.sh@1577 -- # [[ -z 0000:82:00.0 ]] 00:04:08.304 18:14:36 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=1056408 00:04:08.304 18:14:36 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:08.304 18:14:36 -- common/autotest_common.sh@1583 -- # waitforlisten 1056408 00:04:08.304 18:14:36 -- common/autotest_common.sh@831 -- # '[' -z 1056408 ']' 00:04:08.304 18:14:36 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:08.304 18:14:36 -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:08.304 18:14:36 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:08.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:08.304 18:14:36 -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:08.304 18:14:36 -- common/autotest_common.sh@10 -- # set +x 00:04:08.304 [2024-10-08 18:14:36.749469] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
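The opal_revert_cleanup step that unfolds next is essentially a short JSON-RPC conversation with the spdk_tgt that was just launched. A rough sketch of the same sequence, run from the spdk checkout, with the controller name, BDF and password taken from this log; the rpc_get_methods poll is a crude stand-in for the harness's waitforlisten helper:

  ./build/bin/spdk_tgt &
  spdk_tgt_pid=$!
  until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
  # Attach the NVMe controller at 0000:82:00.0, then revert its Opal state with the test password.
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:82:00.0
  ./scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test
  kill "$spdk_tgt_pid"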
00:04:08.304 [2024-10-08 18:14:36.749646] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1056408 ] 00:04:08.565 [2024-10-08 18:14:36.895470] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:08.825 [2024-10-08 18:14:37.123311] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:09.086 18:14:37 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:09.086 18:14:37 -- common/autotest_common.sh@864 -- # return 0 00:04:09.086 18:14:37 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:04:09.086 18:14:37 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:04:09.086 18:14:37 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:82:00.0 00:04:13.287 nvme0n1 00:04:13.287 18:14:40 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:13.287 [2024-10-08 18:14:41.627324] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:04:13.287 [2024-10-08 18:14:41.627424] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:04:13.287 request: 00:04:13.287 { 00:04:13.287 "nvme_ctrlr_name": "nvme0", 00:04:13.287 "password": "test", 00:04:13.287 "method": "bdev_nvme_opal_revert", 00:04:13.287 "req_id": 1 00:04:13.287 } 00:04:13.287 Got JSON-RPC error response 00:04:13.287 response: 00:04:13.287 { 00:04:13.287 "code": -32603, 00:04:13.287 "message": "Internal error" 00:04:13.287 } 00:04:13.287 18:14:41 -- common/autotest_common.sh@1589 -- # true 00:04:13.287 18:14:41 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:04:13.287 18:14:41 -- common/autotest_common.sh@1593 -- # killprocess 1056408 00:04:13.287 18:14:41 -- common/autotest_common.sh@950 -- # '[' -z 1056408 ']' 00:04:13.287 18:14:41 -- common/autotest_common.sh@954 -- # kill -0 1056408 00:04:13.287 18:14:41 -- common/autotest_common.sh@955 -- # uname 00:04:13.287 18:14:41 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:13.287 18:14:41 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1056408 00:04:13.287 18:14:41 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:13.287 18:14:41 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:13.287 18:14:41 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1056408' 00:04:13.287 killing process with pid 1056408 00:04:13.287 18:14:41 -- common/autotest_common.sh@969 -- # kill 1056408 00:04:13.287 18:14:41 -- common/autotest_common.sh@974 -- # wait 1056408 00:04:15.826 18:14:43 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:15.826 18:14:43 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:15.826 18:14:43 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:15.826 18:14:43 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:15.826 18:14:43 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:15.826 18:14:43 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:15.826 18:14:43 -- common/autotest_common.sh@10 -- # set +x 00:04:15.826 18:14:43 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:15.826 18:14:43 -- spdk/autotest.sh@155 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:15.826 18:14:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:15.826 18:14:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:15.826 18:14:43 -- common/autotest_common.sh@10 -- # set +x 00:04:15.826 ************************************ 00:04:15.826 START TEST env 00:04:15.826 ************************************ 00:04:15.826 18:14:43 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:15.826 * Looking for test storage... 00:04:15.826 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:15.826 18:14:43 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:15.826 18:14:43 env -- common/autotest_common.sh@1681 -- # lcov --version 00:04:15.826 18:14:43 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:15.826 18:14:44 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:15.826 18:14:44 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:15.826 18:14:44 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:15.826 18:14:44 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:15.826 18:14:44 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:15.826 18:14:44 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:15.826 18:14:44 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:15.826 18:14:44 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:15.826 18:14:44 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:15.826 18:14:44 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:15.826 18:14:44 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:15.826 18:14:44 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:15.826 18:14:44 env -- scripts/common.sh@344 -- # case "$op" in 00:04:15.826 18:14:44 env -- scripts/common.sh@345 -- # : 1 00:04:15.826 18:14:44 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:15.826 18:14:44 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:15.826 18:14:44 env -- scripts/common.sh@365 -- # decimal 1 00:04:15.826 18:14:44 env -- scripts/common.sh@353 -- # local d=1 00:04:15.826 18:14:44 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:15.826 18:14:44 env -- scripts/common.sh@355 -- # echo 1 00:04:15.826 18:14:44 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:15.826 18:14:44 env -- scripts/common.sh@366 -- # decimal 2 00:04:15.826 18:14:44 env -- scripts/common.sh@353 -- # local d=2 00:04:15.826 18:14:44 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:15.826 18:14:44 env -- scripts/common.sh@355 -- # echo 2 00:04:15.826 18:14:44 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:15.827 18:14:44 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:15.827 18:14:44 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:15.827 18:14:44 env -- scripts/common.sh@368 -- # return 0 00:04:15.827 18:14:44 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:15.827 18:14:44 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:15.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.827 --rc genhtml_branch_coverage=1 00:04:15.827 --rc genhtml_function_coverage=1 00:04:15.827 --rc genhtml_legend=1 00:04:15.827 --rc geninfo_all_blocks=1 00:04:15.827 --rc geninfo_unexecuted_blocks=1 00:04:15.827 00:04:15.827 ' 00:04:15.827 18:14:44 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:15.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.827 --rc genhtml_branch_coverage=1 00:04:15.827 --rc genhtml_function_coverage=1 00:04:15.827 --rc genhtml_legend=1 00:04:15.827 --rc geninfo_all_blocks=1 00:04:15.827 --rc geninfo_unexecuted_blocks=1 00:04:15.827 00:04:15.827 ' 00:04:15.827 18:14:44 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:15.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.827 --rc genhtml_branch_coverage=1 00:04:15.827 --rc genhtml_function_coverage=1 00:04:15.827 --rc genhtml_legend=1 00:04:15.827 --rc geninfo_all_blocks=1 00:04:15.827 --rc geninfo_unexecuted_blocks=1 00:04:15.827 00:04:15.827 ' 00:04:15.827 18:14:44 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:15.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.827 --rc genhtml_branch_coverage=1 00:04:15.827 --rc genhtml_function_coverage=1 00:04:15.827 --rc genhtml_legend=1 00:04:15.827 --rc geninfo_all_blocks=1 00:04:15.827 --rc geninfo_unexecuted_blocks=1 00:04:15.827 00:04:15.827 ' 00:04:15.827 18:14:44 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:15.827 18:14:44 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:15.827 18:14:44 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:15.827 18:14:44 env -- common/autotest_common.sh@10 -- # set +x 00:04:15.827 ************************************ 00:04:15.827 START TEST env_memory 00:04:15.827 ************************************ 00:04:15.827 18:14:44 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:15.827 00:04:15.827 00:04:15.827 CUnit - A unit testing framework for C - Version 2.1-3 00:04:15.827 http://cunit.sourceforge.net/ 00:04:15.827 00:04:15.827 00:04:15.827 Suite: memory 00:04:15.827 Test: alloc and free memory map ...[2024-10-08 18:14:44.213302] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:15.827 passed 00:04:15.827 Test: mem map translation ...[2024-10-08 18:14:44.269107] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:15.827 [2024-10-08 18:14:44.269172] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:15.827 [2024-10-08 18:14:44.269289] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:15.827 [2024-10-08 18:14:44.269323] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:16.086 passed 00:04:16.086 Test: mem map registration ...[2024-10-08 18:14:44.387421] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:16.086 [2024-10-08 18:14:44.387477] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:16.086 passed 00:04:16.086 Test: mem map adjacent registrations ...passed 00:04:16.086 00:04:16.086 Run Summary: Type Total Ran Passed Failed Inactive 00:04:16.086 suites 1 1 n/a 0 0 00:04:16.086 tests 4 4 4 0 0 00:04:16.086 asserts 152 152 152 0 n/a 00:04:16.086 00:04:16.086 Elapsed time = 0.384 seconds 00:04:16.086 00:04:16.086 real 0m0.400s 00:04:16.086 user 0m0.379s 00:04:16.086 sys 0m0.018s 00:04:16.086 18:14:44 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:16.086 18:14:44 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:16.086 ************************************ 00:04:16.086 END TEST env_memory 00:04:16.086 ************************************ 00:04:16.086 18:14:44 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:16.086 18:14:44 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:16.086 18:14:44 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:16.086 18:14:44 env -- common/autotest_common.sh@10 -- # set +x 00:04:16.086 ************************************ 00:04:16.086 START TEST env_vtophys 00:04:16.086 ************************************ 00:04:16.086 18:14:44 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:16.346 EAL: lib.eal log level changed from notice to debug 00:04:16.346 EAL: Detected lcore 0 as core 0 on socket 0 00:04:16.346 EAL: Detected lcore 1 as core 1 on socket 0 00:04:16.346 EAL: Detected lcore 2 as core 2 on socket 0 00:04:16.346 EAL: Detected lcore 3 as core 3 on socket 0 00:04:16.346 EAL: Detected lcore 4 as core 4 on socket 0 00:04:16.346 EAL: Detected lcore 5 as core 5 on socket 0 00:04:16.346 EAL: Detected lcore 6 as core 8 on socket 0 00:04:16.346 EAL: Detected lcore 7 as core 9 on socket 0 00:04:16.346 EAL: Detected lcore 8 as core 10 on socket 0 00:04:16.346 EAL: Detected lcore 9 as core 11 on socket 0 00:04:16.346 EAL: Detected lcore 10 
as core 12 on socket 0 00:04:16.346 EAL: Detected lcore 11 as core 13 on socket 0 00:04:16.346 EAL: Detected lcore 12 as core 0 on socket 1 00:04:16.346 EAL: Detected lcore 13 as core 1 on socket 1 00:04:16.346 EAL: Detected lcore 14 as core 2 on socket 1 00:04:16.346 EAL: Detected lcore 15 as core 3 on socket 1 00:04:16.346 EAL: Detected lcore 16 as core 4 on socket 1 00:04:16.346 EAL: Detected lcore 17 as core 5 on socket 1 00:04:16.346 EAL: Detected lcore 18 as core 8 on socket 1 00:04:16.346 EAL: Detected lcore 19 as core 9 on socket 1 00:04:16.346 EAL: Detected lcore 20 as core 10 on socket 1 00:04:16.346 EAL: Detected lcore 21 as core 11 on socket 1 00:04:16.346 EAL: Detected lcore 22 as core 12 on socket 1 00:04:16.346 EAL: Detected lcore 23 as core 13 on socket 1 00:04:16.346 EAL: Detected lcore 24 as core 0 on socket 0 00:04:16.346 EAL: Detected lcore 25 as core 1 on socket 0 00:04:16.346 EAL: Detected lcore 26 as core 2 on socket 0 00:04:16.346 EAL: Detected lcore 27 as core 3 on socket 0 00:04:16.346 EAL: Detected lcore 28 as core 4 on socket 0 00:04:16.346 EAL: Detected lcore 29 as core 5 on socket 0 00:04:16.346 EAL: Detected lcore 30 as core 8 on socket 0 00:04:16.346 EAL: Detected lcore 31 as core 9 on socket 0 00:04:16.346 EAL: Detected lcore 32 as core 10 on socket 0 00:04:16.346 EAL: Detected lcore 33 as core 11 on socket 0 00:04:16.346 EAL: Detected lcore 34 as core 12 on socket 0 00:04:16.346 EAL: Detected lcore 35 as core 13 on socket 0 00:04:16.346 EAL: Detected lcore 36 as core 0 on socket 1 00:04:16.346 EAL: Detected lcore 37 as core 1 on socket 1 00:04:16.346 EAL: Detected lcore 38 as core 2 on socket 1 00:04:16.346 EAL: Detected lcore 39 as core 3 on socket 1 00:04:16.346 EAL: Detected lcore 40 as core 4 on socket 1 00:04:16.346 EAL: Detected lcore 41 as core 5 on socket 1 00:04:16.346 EAL: Detected lcore 42 as core 8 on socket 1 00:04:16.346 EAL: Detected lcore 43 as core 9 on socket 1 00:04:16.346 EAL: Detected lcore 44 as core 10 on socket 1 00:04:16.346 EAL: Detected lcore 45 as core 11 on socket 1 00:04:16.346 EAL: Detected lcore 46 as core 12 on socket 1 00:04:16.346 EAL: Detected lcore 47 as core 13 on socket 1 00:04:16.346 EAL: Maximum logical cores by configuration: 128 00:04:16.346 EAL: Detected CPU lcores: 48 00:04:16.346 EAL: Detected NUMA nodes: 2 00:04:16.346 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:16.346 EAL: Detected shared linkage of DPDK 00:04:16.346 EAL: No shared files mode enabled, IPC will be disabled 00:04:16.346 EAL: Bus pci wants IOVA as 'DC' 00:04:16.346 EAL: Buses did not request a specific IOVA mode. 00:04:16.346 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:16.346 EAL: Selected IOVA mode 'VA' 00:04:16.346 EAL: Probing VFIO support... 00:04:16.346 EAL: IOMMU type 1 (Type 1) is supported 00:04:16.346 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:16.346 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:16.346 EAL: VFIO support initialized 00:04:16.346 EAL: Ask a virtual area of 0x2e000 bytes 00:04:16.346 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:16.346 EAL: Setting up physically contiguous memory... 
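The IOVA and VFIO decisions EAL reports above depend entirely on host state; a few hedged spot-checks (standard Linux paths, not commands taken from this log) show why "IOVA as VA" and "VFIO support initialized" are the expected outcome on this rig:

  # A populated iommu_groups tree means the IOMMU is active, which lets EAL pick IOVA=VA.
  ls /sys/kernel/iommu_groups | wc -l
  # vfio-pci must be loaded for the devices setup.sh rebound earlier to be usable.
  lsmod | grep vfio_pci
  # 2 MB hugepages per NUMA node back the physically contiguous memory being set up next.
  cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages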
00:04:16.346 EAL: Setting maximum number of open files to 524288 00:04:16.346 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:16.346 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:16.346 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:16.346 EAL: Ask a virtual area of 0x61000 bytes 00:04:16.346 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:16.346 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:16.346 EAL: Ask a virtual area of 0x400000000 bytes 00:04:16.346 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:16.346 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:16.346 EAL: Ask a virtual area of 0x61000 bytes 00:04:16.346 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:16.346 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:16.346 EAL: Ask a virtual area of 0x400000000 bytes 00:04:16.346 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:16.346 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:16.346 EAL: Ask a virtual area of 0x61000 bytes 00:04:16.346 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:16.346 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:16.346 EAL: Ask a virtual area of 0x400000000 bytes 00:04:16.346 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:16.346 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:16.346 EAL: Ask a virtual area of 0x61000 bytes 00:04:16.346 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:16.346 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:16.346 EAL: Ask a virtual area of 0x400000000 bytes 00:04:16.346 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:16.346 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:16.346 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:16.346 EAL: Ask a virtual area of 0x61000 bytes 00:04:16.346 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:16.346 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:16.346 EAL: Ask a virtual area of 0x400000000 bytes 00:04:16.346 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:16.346 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:16.346 EAL: Ask a virtual area of 0x61000 bytes 00:04:16.346 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:16.346 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:16.346 EAL: Ask a virtual area of 0x400000000 bytes 00:04:16.346 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:16.347 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:16.347 EAL: Ask a virtual area of 0x61000 bytes 00:04:16.347 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:16.347 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:16.347 EAL: Ask a virtual area of 0x400000000 bytes 00:04:16.347 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:16.347 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:16.347 EAL: Ask a virtual area of 0x61000 bytes 00:04:16.347 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:16.347 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:16.347 EAL: Ask a virtual area of 0x400000000 bytes 00:04:16.347 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:16.347 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:16.347 EAL: Hugepages will be freed exactly as allocated. 00:04:16.347 EAL: No shared files mode enabled, IPC is disabled 00:04:16.347 EAL: No shared files mode enabled, IPC is disabled 00:04:16.347 EAL: TSC frequency is ~2700000 KHz 00:04:16.347 EAL: Main lcore 0 is ready (tid=7f945ae31a00;cpuset=[0]) 00:04:16.347 EAL: Trying to obtain current memory policy. 00:04:16.347 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.347 EAL: Restoring previous memory policy: 0 00:04:16.347 EAL: request: mp_malloc_sync 00:04:16.347 EAL: No shared files mode enabled, IPC is disabled 00:04:16.347 EAL: Heap on socket 0 was expanded by 2MB 00:04:16.347 EAL: No shared files mode enabled, IPC is disabled 00:04:16.347 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:16.347 EAL: Mem event callback 'spdk:(nil)' registered 00:04:16.347 00:04:16.347 00:04:16.347 CUnit - A unit testing framework for C - Version 2.1-3 00:04:16.347 http://cunit.sourceforge.net/ 00:04:16.347 00:04:16.347 00:04:16.347 Suite: components_suite 00:04:16.347 Test: vtophys_malloc_test ...passed 00:04:16.347 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:16.347 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.347 EAL: Restoring previous memory policy: 4 00:04:16.347 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.347 EAL: request: mp_malloc_sync 00:04:16.347 EAL: No shared files mode enabled, IPC is disabled 00:04:16.347 EAL: Heap on socket 0 was expanded by 4MB 00:04:16.347 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.347 EAL: request: mp_malloc_sync 00:04:16.347 EAL: No shared files mode enabled, IPC is disabled 00:04:16.347 EAL: Heap on socket 0 was shrunk by 4MB 00:04:16.347 EAL: Trying to obtain current memory policy. 00:04:16.347 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.347 EAL: Restoring previous memory policy: 4 00:04:16.347 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.347 EAL: request: mp_malloc_sync 00:04:16.347 EAL: No shared files mode enabled, IPC is disabled 00:04:16.347 EAL: Heap on socket 0 was expanded by 6MB 00:04:16.347 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.347 EAL: request: mp_malloc_sync 00:04:16.347 EAL: No shared files mode enabled, IPC is disabled 00:04:16.347 EAL: Heap on socket 0 was shrunk by 6MB 00:04:16.347 EAL: Trying to obtain current memory policy. 00:04:16.347 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.347 EAL: Restoring previous memory policy: 4 00:04:16.347 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.347 EAL: request: mp_malloc_sync 00:04:16.347 EAL: No shared files mode enabled, IPC is disabled 00:04:16.347 EAL: Heap on socket 0 was expanded by 10MB 00:04:16.347 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.347 EAL: request: mp_malloc_sync 00:04:16.347 EAL: No shared files mode enabled, IPC is disabled 00:04:16.347 EAL: Heap on socket 0 was shrunk by 10MB 00:04:16.347 EAL: Trying to obtain current memory policy. 
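The 0x400000000-byte reservations above are not arbitrary: each memseg list covers n_segs x hugepage_sz, and with four lists per socket across two sockets the EAL pre-reserves 128 GiB of virtual address space before any real allocation happens. A one-liner to confirm the arithmetic:

  printf '0x%x\n' $(( 8192 * 2 * 1024 * 1024 ))   # 16 GiB per memseg list = 0x400000000
  echo "$(( 4 * 2 * 16 )) GiB reserved in total"  # 4 lists/socket x 2 sockets x 16 GiB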
00:04:16.347 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.347 EAL: Restoring previous memory policy: 4 00:04:16.347 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.347 EAL: request: mp_malloc_sync 00:04:16.347 EAL: No shared files mode enabled, IPC is disabled 00:04:16.347 EAL: Heap on socket 0 was expanded by 18MB 00:04:16.347 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.347 EAL: request: mp_malloc_sync 00:04:16.347 EAL: No shared files mode enabled, IPC is disabled 00:04:16.347 EAL: Heap on socket 0 was shrunk by 18MB 00:04:16.347 EAL: Trying to obtain current memory policy. 00:04:16.347 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.347 EAL: Restoring previous memory policy: 4 00:04:16.347 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.347 EAL: request: mp_malloc_sync 00:04:16.347 EAL: No shared files mode enabled, IPC is disabled 00:04:16.347 EAL: Heap on socket 0 was expanded by 34MB 00:04:16.347 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.347 EAL: request: mp_malloc_sync 00:04:16.347 EAL: No shared files mode enabled, IPC is disabled 00:04:16.347 EAL: Heap on socket 0 was shrunk by 34MB 00:04:16.347 EAL: Trying to obtain current memory policy. 00:04:16.347 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.347 EAL: Restoring previous memory policy: 4 00:04:16.347 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.347 EAL: request: mp_malloc_sync 00:04:16.347 EAL: No shared files mode enabled, IPC is disabled 00:04:16.347 EAL: Heap on socket 0 was expanded by 66MB 00:04:16.347 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.347 EAL: request: mp_malloc_sync 00:04:16.347 EAL: No shared files mode enabled, IPC is disabled 00:04:16.347 EAL: Heap on socket 0 was shrunk by 66MB 00:04:16.347 EAL: Trying to obtain current memory policy. 00:04:16.347 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.607 EAL: Restoring previous memory policy: 4 00:04:16.607 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.607 EAL: request: mp_malloc_sync 00:04:16.607 EAL: No shared files mode enabled, IPC is disabled 00:04:16.607 EAL: Heap on socket 0 was expanded by 130MB 00:04:16.607 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.607 EAL: request: mp_malloc_sync 00:04:16.607 EAL: No shared files mode enabled, IPC is disabled 00:04:16.607 EAL: Heap on socket 0 was shrunk by 130MB 00:04:16.607 EAL: Trying to obtain current memory policy. 00:04:16.607 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.607 EAL: Restoring previous memory policy: 4 00:04:16.607 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.607 EAL: request: mp_malloc_sync 00:04:16.607 EAL: No shared files mode enabled, IPC is disabled 00:04:16.607 EAL: Heap on socket 0 was expanded by 258MB 00:04:16.865 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.865 EAL: request: mp_malloc_sync 00:04:16.865 EAL: No shared files mode enabled, IPC is disabled 00:04:16.865 EAL: Heap on socket 0 was shrunk by 258MB 00:04:16.865 EAL: Trying to obtain current memory policy. 
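As an aside, the expand/shrink ladder running through this stretch of vtophys_spdk_malloc_test (4MB, 6MB, 10MB, 18MB, up to 1026MB) follows a simple 2^n + 2 MB progression, which the observed sizes reproduce exactly:

  for n in $(seq 1 10); do printf '%dMB ' $(( (1 << n) + 2 )); done; echo
  # -> 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB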
00:04:16.865 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:17.124 EAL: Restoring previous memory policy: 4 00:04:17.124 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.124 EAL: request: mp_malloc_sync 00:04:17.124 EAL: No shared files mode enabled, IPC is disabled 00:04:17.124 EAL: Heap on socket 0 was expanded by 514MB 00:04:17.124 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.383 EAL: request: mp_malloc_sync 00:04:17.383 EAL: No shared files mode enabled, IPC is disabled 00:04:17.383 EAL: Heap on socket 0 was shrunk by 514MB 00:04:17.383 EAL: Trying to obtain current memory policy. 00:04:17.383 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:17.642 EAL: Restoring previous memory policy: 4 00:04:17.642 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.642 EAL: request: mp_malloc_sync 00:04:17.642 EAL: No shared files mode enabled, IPC is disabled 00:04:17.642 EAL: Heap on socket 0 was expanded by 1026MB 00:04:17.901 EAL: Calling mem event callback 'spdk:(nil)' 00:04:18.159 EAL: request: mp_malloc_sync 00:04:18.159 EAL: No shared files mode enabled, IPC is disabled 00:04:18.159 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:18.159 passed 00:04:18.159 00:04:18.159 Run Summary: Type Total Ran Passed Failed Inactive 00:04:18.159 suites 1 1 n/a 0 0 00:04:18.159 tests 2 2 2 0 0 00:04:18.159 asserts 497 497 497 0 n/a 00:04:18.159 00:04:18.159 Elapsed time = 1.780 seconds 00:04:18.159 EAL: Calling mem event callback 'spdk:(nil)' 00:04:18.159 EAL: request: mp_malloc_sync 00:04:18.159 EAL: No shared files mode enabled, IPC is disabled 00:04:18.159 EAL: Heap on socket 0 was shrunk by 2MB 00:04:18.159 EAL: No shared files mode enabled, IPC is disabled 00:04:18.159 EAL: No shared files mode enabled, IPC is disabled 00:04:18.159 EAL: No shared files mode enabled, IPC is disabled 00:04:18.159 00:04:18.159 real 0m1.999s 00:04:18.159 user 0m0.958s 00:04:18.159 sys 0m0.998s 00:04:18.159 18:14:46 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:18.159 18:14:46 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:18.159 ************************************ 00:04:18.159 END TEST env_vtophys 00:04:18.159 ************************************ 00:04:18.159 18:14:46 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:18.160 18:14:46 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:18.160 18:14:46 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:18.160 18:14:46 env -- common/autotest_common.sh@10 -- # set +x 00:04:18.160 ************************************ 00:04:18.160 START TEST env_pci 00:04:18.160 ************************************ 00:04:18.160 18:14:46 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:18.419 00:04:18.419 00:04:18.419 CUnit - A unit testing framework for C - Version 2.1-3 00:04:18.419 http://cunit.sourceforge.net/ 00:04:18.419 00:04:18.419 00:04:18.419 Suite: pci 00:04:18.419 Test: pci_hook ...[2024-10-08 18:14:46.706724] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1111:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1057575 has claimed it 00:04:18.419 EAL: Cannot find device (10000:00:01.0) 00:04:18.419 EAL: Failed to attach device on primary process 00:04:18.419 passed 00:04:18.419 00:04:18.419 Run Summary: Type Total Ran Passed Failed Inactive 
00:04:18.419 suites 1 1 n/a 0 0 00:04:18.419 tests 1 1 1 0 0 00:04:18.419 asserts 25 25 25 0 n/a 00:04:18.419 00:04:18.419 Elapsed time = 0.022 seconds 00:04:18.419 00:04:18.419 real 0m0.037s 00:04:18.419 user 0m0.009s 00:04:18.419 sys 0m0.027s 00:04:18.419 18:14:46 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:18.419 18:14:46 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:18.419 ************************************ 00:04:18.419 END TEST env_pci 00:04:18.419 ************************************ 00:04:18.419 18:14:46 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:18.419 18:14:46 env -- env/env.sh@15 -- # uname 00:04:18.419 18:14:46 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:18.419 18:14:46 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:18.419 18:14:46 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:18.419 18:14:46 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:04:18.419 18:14:46 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:18.419 18:14:46 env -- common/autotest_common.sh@10 -- # set +x 00:04:18.419 ************************************ 00:04:18.419 START TEST env_dpdk_post_init 00:04:18.419 ************************************ 00:04:18.419 18:14:46 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:18.419 EAL: Detected CPU lcores: 48 00:04:18.419 EAL: Detected NUMA nodes: 2 00:04:18.419 EAL: Detected shared linkage of DPDK 00:04:18.419 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:18.419 EAL: Selected IOVA mode 'VA' 00:04:18.419 EAL: VFIO support initialized 00:04:18.419 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:18.679 EAL: Using IOMMU type 1 (Type 1) 00:04:18.679 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:04:18.679 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:04:18.679 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:04:18.679 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:04:18.679 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:04:18.679 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:04:18.679 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:04:18.679 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:04:18.679 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:04:18.679 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:04:18.679 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:04:18.679 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:04:18.679 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:04:18.679 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:04:18.679 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:04:18.679 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:04:19.619 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:82:00.0 (socket 1) 
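The ioat and NVMe probe lines above come from the env_dpdk_post_init helper that run_test launched just above with '-c 0x1 --base-virtaddr=0x200000000000'. As a minimal sketch of repeating just this probe pass outside the autotest harness (the binary path and arguments are copied from this log; running it as root and the devices still being bound to a userspace driver such as vfio-pci are assumptions):

    # Sketch: re-run the DPDK post-init probe shown above, outside run_test.
    # Path and arguments are taken from this log; root privileges are assumed.
    sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init \
        -c 0x1 --base-virtaddr=0x200000000000

The BDF list it attaches to depends on which devices are bound to a userspace driver on the node, so the 0000:00:04.x / 0000:80:04.x / 0000:82:00.0 set printed here will differ on other machines.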
00:04:22.913 EAL: Releasing PCI mapped resource for 0000:82:00.0 00:04:22.913 EAL: Calling pci_unmap_resource for 0000:82:00.0 at 0x202001040000 00:04:22.913 Starting DPDK initialization... 00:04:22.913 Starting SPDK post initialization... 00:04:22.913 SPDK NVMe probe 00:04:22.913 Attaching to 0000:82:00.0 00:04:22.913 Attached to 0000:82:00.0 00:04:22.913 Cleaning up... 00:04:22.913 00:04:22.913 real 0m4.517s 00:04:22.913 user 0m3.071s 00:04:22.913 sys 0m0.494s 00:04:22.913 18:14:51 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:22.913 18:14:51 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:22.913 ************************************ 00:04:22.913 END TEST env_dpdk_post_init 00:04:22.913 ************************************ 00:04:22.913 18:14:51 env -- env/env.sh@26 -- # uname 00:04:22.913 18:14:51 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:22.913 18:14:51 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:22.913 18:14:51 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:22.913 18:14:51 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:22.913 18:14:51 env -- common/autotest_common.sh@10 -- # set +x 00:04:22.913 ************************************ 00:04:22.913 START TEST env_mem_callbacks 00:04:22.913 ************************************ 00:04:22.913 18:14:51 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:22.913 EAL: Detected CPU lcores: 48 00:04:22.913 EAL: Detected NUMA nodes: 2 00:04:22.913 EAL: Detected shared linkage of DPDK 00:04:22.913 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:22.913 EAL: Selected IOVA mode 'VA' 00:04:22.913 EAL: VFIO support initialized 00:04:22.913 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:22.913 00:04:22.913 00:04:22.913 CUnit - A unit testing framework for C - Version 2.1-3 00:04:22.913 http://cunit.sourceforge.net/ 00:04:22.913 00:04:22.913 00:04:22.913 Suite: memory 00:04:22.913 Test: test ... 
00:04:22.913 register 0x200000200000 2097152 00:04:22.913 malloc 3145728 00:04:22.913 register 0x200000400000 4194304 00:04:22.913 buf 0x200000500000 len 3145728 PASSED 00:04:22.913 malloc 64 00:04:22.913 buf 0x2000004fff40 len 64 PASSED 00:04:22.913 malloc 4194304 00:04:22.913 register 0x200000800000 6291456 00:04:22.913 buf 0x200000a00000 len 4194304 PASSED 00:04:22.913 free 0x200000500000 3145728 00:04:22.913 free 0x2000004fff40 64 00:04:22.913 unregister 0x200000400000 4194304 PASSED 00:04:22.913 free 0x200000a00000 4194304 00:04:22.913 unregister 0x200000800000 6291456 PASSED 00:04:22.913 malloc 8388608 00:04:22.913 register 0x200000400000 10485760 00:04:22.913 buf 0x200000600000 len 8388608 PASSED 00:04:22.913 free 0x200000600000 8388608 00:04:22.913 unregister 0x200000400000 10485760 PASSED 00:04:22.913 passed 00:04:22.913 00:04:22.913 Run Summary: Type Total Ran Passed Failed Inactive 00:04:22.913 suites 1 1 n/a 0 0 00:04:22.913 tests 1 1 1 0 0 00:04:22.913 asserts 15 15 15 0 n/a 00:04:22.913 00:04:22.913 Elapsed time = 0.009 seconds 00:04:22.913 00:04:22.913 real 0m0.056s 00:04:22.913 user 0m0.014s 00:04:22.913 sys 0m0.042s 00:04:22.913 18:14:51 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:22.913 18:14:51 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:22.913 ************************************ 00:04:22.913 END TEST env_mem_callbacks 00:04:22.913 ************************************ 00:04:23.219 00:04:23.219 real 0m7.633s 00:04:23.219 user 0m4.745s 00:04:23.219 sys 0m1.923s 00:04:23.219 18:14:51 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:23.219 18:14:51 env -- common/autotest_common.sh@10 -- # set +x 00:04:23.219 ************************************ 00:04:23.219 END TEST env 00:04:23.219 ************************************ 00:04:23.219 18:14:51 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:23.219 18:14:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:23.219 18:14:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:23.219 18:14:51 -- common/autotest_common.sh@10 -- # set +x 00:04:23.219 ************************************ 00:04:23.219 START TEST rpc 00:04:23.219 ************************************ 00:04:23.219 18:14:51 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:23.219 * Looking for test storage... 
00:04:23.219 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:23.219 18:14:51 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:23.219 18:14:51 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:23.219 18:14:51 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:23.219 18:14:51 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:23.219 18:14:51 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:23.219 18:14:51 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:23.219 18:14:51 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:23.219 18:14:51 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:23.219 18:14:51 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:23.219 18:14:51 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:23.219 18:14:51 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:23.219 18:14:51 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:23.219 18:14:51 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:23.219 18:14:51 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:23.219 18:14:51 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:23.219 18:14:51 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:23.219 18:14:51 rpc -- scripts/common.sh@345 -- # : 1 00:04:23.219 18:14:51 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:23.219 18:14:51 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:23.219 18:14:51 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:23.219 18:14:51 rpc -- scripts/common.sh@353 -- # local d=1 00:04:23.219 18:14:51 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:23.219 18:14:51 rpc -- scripts/common.sh@355 -- # echo 1 00:04:23.219 18:14:51 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:23.219 18:14:51 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:23.219 18:14:51 rpc -- scripts/common.sh@353 -- # local d=2 00:04:23.219 18:14:51 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:23.219 18:14:51 rpc -- scripts/common.sh@355 -- # echo 2 00:04:23.219 18:14:51 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:23.219 18:14:51 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:23.219 18:14:51 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:23.219 18:14:51 rpc -- scripts/common.sh@368 -- # return 0 00:04:23.219 18:14:51 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:23.219 18:14:51 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:23.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.219 --rc genhtml_branch_coverage=1 00:04:23.219 --rc genhtml_function_coverage=1 00:04:23.219 --rc genhtml_legend=1 00:04:23.219 --rc geninfo_all_blocks=1 00:04:23.219 --rc geninfo_unexecuted_blocks=1 00:04:23.219 00:04:23.219 ' 00:04:23.219 18:14:51 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:23.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.219 --rc genhtml_branch_coverage=1 00:04:23.219 --rc genhtml_function_coverage=1 00:04:23.219 --rc genhtml_legend=1 00:04:23.219 --rc geninfo_all_blocks=1 00:04:23.219 --rc geninfo_unexecuted_blocks=1 00:04:23.219 00:04:23.219 ' 00:04:23.219 18:14:51 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:23.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.219 --rc genhtml_branch_coverage=1 00:04:23.219 --rc genhtml_function_coverage=1 
00:04:23.219 --rc genhtml_legend=1 00:04:23.219 --rc geninfo_all_blocks=1 00:04:23.219 --rc geninfo_unexecuted_blocks=1 00:04:23.219 00:04:23.219 ' 00:04:23.219 18:14:51 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:23.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.219 --rc genhtml_branch_coverage=1 00:04:23.219 --rc genhtml_function_coverage=1 00:04:23.219 --rc genhtml_legend=1 00:04:23.219 --rc geninfo_all_blocks=1 00:04:23.219 --rc geninfo_unexecuted_blocks=1 00:04:23.219 00:04:23.219 ' 00:04:23.219 18:14:51 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1058344 00:04:23.219 18:14:51 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:23.219 18:14:51 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:23.219 18:14:51 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1058344 00:04:23.219 18:14:51 rpc -- common/autotest_common.sh@831 -- # '[' -z 1058344 ']' 00:04:23.219 18:14:51 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:23.219 18:14:51 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:23.219 18:14:51 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:23.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:23.219 18:14:51 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:23.219 18:14:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.479 [2024-10-08 18:14:51.805245] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:04:23.479 [2024-10-08 18:14:51.805355] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1058344 ] 00:04:23.479 [2024-10-08 18:14:51.912139] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.740 [2024-10-08 18:14:52.140151] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:23.740 [2024-10-08 18:14:52.140265] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1058344' to capture a snapshot of events at runtime. 00:04:23.740 [2024-10-08 18:14:52.140300] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:23.740 [2024-10-08 18:14:52.140330] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:23.740 [2024-10-08 18:14:52.140357] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1058344 for offline analysis/debug. 
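The two app_setup_trace notices above describe how to inspect the bdev tracepoint group that spdk_tgt was started with ('-e bdev' in the command line logged above). A small sketch using only the command and shared-memory path quoted in those notices; running as root and having the spdk_trace tool from this build on PATH are assumptions:

    # Sketch: snapshot the bdev tracepoints while pid 1058344 is still alive.
    sudo spdk_trace -s spdk_tgt -p 1058344
    # Or keep the trace shared-memory file for offline analysis later
    # (the /tmp destination is arbitrary).
    sudo cp /dev/shm/spdk_tgt_trace.pid1058344 /tmp/

Both the pid and the shm filename are specific to this particular run.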
00:04:23.740 [2024-10-08 18:14:52.141197] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.310 18:14:52 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:24.310 18:14:52 rpc -- common/autotest_common.sh@864 -- # return 0 00:04:24.310 18:14:52 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:24.310 18:14:52 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:24.310 18:14:52 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:24.310 18:14:52 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:24.310 18:14:52 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:24.310 18:14:52 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:24.310 18:14:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.310 ************************************ 00:04:24.310 START TEST rpc_integrity 00:04:24.310 ************************************ 00:04:24.310 18:14:52 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:24.310 18:14:52 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:24.310 18:14:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.310 18:14:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.310 18:14:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.310 18:14:52 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:24.310 18:14:52 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:24.310 18:14:52 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:24.310 18:14:52 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:24.310 18:14:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.310 18:14:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.310 18:14:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.310 18:14:52 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:24.310 18:14:52 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:24.310 18:14:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.310 18:14:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.310 18:14:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.310 18:14:52 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:24.310 { 00:04:24.310 "name": "Malloc0", 00:04:24.310 "aliases": [ 00:04:24.310 "e7666496-f22d-40ee-8c69-6fd77538e8ac" 00:04:24.310 ], 00:04:24.310 "product_name": "Malloc disk", 00:04:24.310 "block_size": 512, 00:04:24.310 "num_blocks": 16384, 00:04:24.310 "uuid": "e7666496-f22d-40ee-8c69-6fd77538e8ac", 00:04:24.310 "assigned_rate_limits": { 00:04:24.310 "rw_ios_per_sec": 0, 00:04:24.310 "rw_mbytes_per_sec": 0, 00:04:24.310 "r_mbytes_per_sec": 0, 00:04:24.310 "w_mbytes_per_sec": 0 00:04:24.310 }, 
00:04:24.310 "claimed": false, 00:04:24.310 "zoned": false, 00:04:24.310 "supported_io_types": { 00:04:24.310 "read": true, 00:04:24.310 "write": true, 00:04:24.310 "unmap": true, 00:04:24.310 "flush": true, 00:04:24.310 "reset": true, 00:04:24.310 "nvme_admin": false, 00:04:24.310 "nvme_io": false, 00:04:24.310 "nvme_io_md": false, 00:04:24.310 "write_zeroes": true, 00:04:24.310 "zcopy": true, 00:04:24.310 "get_zone_info": false, 00:04:24.310 "zone_management": false, 00:04:24.310 "zone_append": false, 00:04:24.310 "compare": false, 00:04:24.310 "compare_and_write": false, 00:04:24.310 "abort": true, 00:04:24.310 "seek_hole": false, 00:04:24.310 "seek_data": false, 00:04:24.310 "copy": true, 00:04:24.310 "nvme_iov_md": false 00:04:24.310 }, 00:04:24.310 "memory_domains": [ 00:04:24.310 { 00:04:24.310 "dma_device_id": "system", 00:04:24.310 "dma_device_type": 1 00:04:24.310 }, 00:04:24.310 { 00:04:24.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:24.310 "dma_device_type": 2 00:04:24.310 } 00:04:24.310 ], 00:04:24.310 "driver_specific": {} 00:04:24.310 } 00:04:24.310 ]' 00:04:24.310 18:14:52 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:24.310 18:14:52 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:24.310 18:14:52 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:24.310 18:14:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.310 18:14:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.310 [2024-10-08 18:14:52.831932] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:24.310 [2024-10-08 18:14:52.831995] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:24.311 [2024-10-08 18:14:52.832025] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x18e8b00 00:04:24.311 [2024-10-08 18:14:52.832044] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:24.311 [2024-10-08 18:14:52.834601] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:24.311 [2024-10-08 18:14:52.834685] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:24.311 Passthru0 00:04:24.311 18:14:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.311 18:14:52 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:24.311 18:14:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.311 18:14:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.569 18:14:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.569 18:14:52 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:24.569 { 00:04:24.569 "name": "Malloc0", 00:04:24.569 "aliases": [ 00:04:24.569 "e7666496-f22d-40ee-8c69-6fd77538e8ac" 00:04:24.569 ], 00:04:24.569 "product_name": "Malloc disk", 00:04:24.569 "block_size": 512, 00:04:24.569 "num_blocks": 16384, 00:04:24.569 "uuid": "e7666496-f22d-40ee-8c69-6fd77538e8ac", 00:04:24.569 "assigned_rate_limits": { 00:04:24.569 "rw_ios_per_sec": 0, 00:04:24.569 "rw_mbytes_per_sec": 0, 00:04:24.569 "r_mbytes_per_sec": 0, 00:04:24.569 "w_mbytes_per_sec": 0 00:04:24.569 }, 00:04:24.569 "claimed": true, 00:04:24.569 "claim_type": "exclusive_write", 00:04:24.569 "zoned": false, 00:04:24.569 "supported_io_types": { 00:04:24.569 "read": true, 00:04:24.569 "write": true, 00:04:24.569 "unmap": true, 00:04:24.569 "flush": 
true, 00:04:24.569 "reset": true, 00:04:24.569 "nvme_admin": false, 00:04:24.569 "nvme_io": false, 00:04:24.569 "nvme_io_md": false, 00:04:24.569 "write_zeroes": true, 00:04:24.569 "zcopy": true, 00:04:24.569 "get_zone_info": false, 00:04:24.569 "zone_management": false, 00:04:24.569 "zone_append": false, 00:04:24.569 "compare": false, 00:04:24.569 "compare_and_write": false, 00:04:24.569 "abort": true, 00:04:24.569 "seek_hole": false, 00:04:24.569 "seek_data": false, 00:04:24.569 "copy": true, 00:04:24.569 "nvme_iov_md": false 00:04:24.569 }, 00:04:24.569 "memory_domains": [ 00:04:24.569 { 00:04:24.569 "dma_device_id": "system", 00:04:24.569 "dma_device_type": 1 00:04:24.569 }, 00:04:24.569 { 00:04:24.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:24.569 "dma_device_type": 2 00:04:24.569 } 00:04:24.569 ], 00:04:24.569 "driver_specific": {} 00:04:24.569 }, 00:04:24.569 { 00:04:24.569 "name": "Passthru0", 00:04:24.569 "aliases": [ 00:04:24.569 "eb53cccc-6547-50f3-aad6-ab29e0a6e57c" 00:04:24.569 ], 00:04:24.569 "product_name": "passthru", 00:04:24.569 "block_size": 512, 00:04:24.569 "num_blocks": 16384, 00:04:24.569 "uuid": "eb53cccc-6547-50f3-aad6-ab29e0a6e57c", 00:04:24.569 "assigned_rate_limits": { 00:04:24.569 "rw_ios_per_sec": 0, 00:04:24.569 "rw_mbytes_per_sec": 0, 00:04:24.569 "r_mbytes_per_sec": 0, 00:04:24.569 "w_mbytes_per_sec": 0 00:04:24.569 }, 00:04:24.569 "claimed": false, 00:04:24.569 "zoned": false, 00:04:24.569 "supported_io_types": { 00:04:24.569 "read": true, 00:04:24.569 "write": true, 00:04:24.569 "unmap": true, 00:04:24.569 "flush": true, 00:04:24.569 "reset": true, 00:04:24.569 "nvme_admin": false, 00:04:24.569 "nvme_io": false, 00:04:24.569 "nvme_io_md": false, 00:04:24.569 "write_zeroes": true, 00:04:24.569 "zcopy": true, 00:04:24.569 "get_zone_info": false, 00:04:24.569 "zone_management": false, 00:04:24.569 "zone_append": false, 00:04:24.569 "compare": false, 00:04:24.569 "compare_and_write": false, 00:04:24.569 "abort": true, 00:04:24.569 "seek_hole": false, 00:04:24.569 "seek_data": false, 00:04:24.569 "copy": true, 00:04:24.569 "nvme_iov_md": false 00:04:24.569 }, 00:04:24.569 "memory_domains": [ 00:04:24.569 { 00:04:24.569 "dma_device_id": "system", 00:04:24.570 "dma_device_type": 1 00:04:24.570 }, 00:04:24.570 { 00:04:24.570 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:24.570 "dma_device_type": 2 00:04:24.570 } 00:04:24.570 ], 00:04:24.570 "driver_specific": { 00:04:24.570 "passthru": { 00:04:24.570 "name": "Passthru0", 00:04:24.570 "base_bdev_name": "Malloc0" 00:04:24.570 } 00:04:24.570 } 00:04:24.570 } 00:04:24.570 ]' 00:04:24.570 18:14:52 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:24.570 18:14:52 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:24.570 18:14:52 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:24.570 18:14:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.570 18:14:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.570 18:14:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.570 18:14:52 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:24.570 18:14:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.570 18:14:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.570 18:14:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.570 18:14:52 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:04:24.570 18:14:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.570 18:14:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.570 18:14:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.570 18:14:52 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:24.570 18:14:52 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:24.570 18:14:53 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:24.570 00:04:24.570 real 0m0.391s 00:04:24.570 user 0m0.280s 00:04:24.570 sys 0m0.038s 00:04:24.570 18:14:53 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:24.570 18:14:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.570 ************************************ 00:04:24.570 END TEST rpc_integrity 00:04:24.570 ************************************ 00:04:24.570 18:14:53 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:24.570 18:14:53 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:24.570 18:14:53 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:24.570 18:14:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.570 ************************************ 00:04:24.570 START TEST rpc_plugins 00:04:24.570 ************************************ 00:04:24.570 18:14:53 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:04:24.570 18:14:53 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:24.570 18:14:53 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.570 18:14:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:24.828 18:14:53 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.828 18:14:53 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:24.828 18:14:53 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:24.828 18:14:53 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.828 18:14:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:24.828 18:14:53 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.828 18:14:53 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:24.828 { 00:04:24.828 "name": "Malloc1", 00:04:24.828 "aliases": [ 00:04:24.828 "3c10a005-9285-40fc-8cee-2b0c7ac9cb30" 00:04:24.828 ], 00:04:24.828 "product_name": "Malloc disk", 00:04:24.828 "block_size": 4096, 00:04:24.828 "num_blocks": 256, 00:04:24.828 "uuid": "3c10a005-9285-40fc-8cee-2b0c7ac9cb30", 00:04:24.828 "assigned_rate_limits": { 00:04:24.828 "rw_ios_per_sec": 0, 00:04:24.828 "rw_mbytes_per_sec": 0, 00:04:24.828 "r_mbytes_per_sec": 0, 00:04:24.828 "w_mbytes_per_sec": 0 00:04:24.828 }, 00:04:24.828 "claimed": false, 00:04:24.828 "zoned": false, 00:04:24.828 "supported_io_types": { 00:04:24.828 "read": true, 00:04:24.828 "write": true, 00:04:24.828 "unmap": true, 00:04:24.828 "flush": true, 00:04:24.828 "reset": true, 00:04:24.828 "nvme_admin": false, 00:04:24.828 "nvme_io": false, 00:04:24.828 "nvme_io_md": false, 00:04:24.828 "write_zeroes": true, 00:04:24.828 "zcopy": true, 00:04:24.828 "get_zone_info": false, 00:04:24.828 "zone_management": false, 00:04:24.828 "zone_append": false, 00:04:24.828 "compare": false, 00:04:24.828 "compare_and_write": false, 00:04:24.828 "abort": true, 00:04:24.828 "seek_hole": false, 00:04:24.828 "seek_data": false, 00:04:24.828 "copy": true, 00:04:24.828 "nvme_iov_md": false 
00:04:24.828 }, 00:04:24.828 "memory_domains": [ 00:04:24.828 { 00:04:24.828 "dma_device_id": "system", 00:04:24.828 "dma_device_type": 1 00:04:24.828 }, 00:04:24.828 { 00:04:24.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:24.828 "dma_device_type": 2 00:04:24.828 } 00:04:24.828 ], 00:04:24.828 "driver_specific": {} 00:04:24.828 } 00:04:24.828 ]' 00:04:24.828 18:14:53 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:24.828 18:14:53 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:24.828 18:14:53 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:24.828 18:14:53 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.828 18:14:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:24.828 18:14:53 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.828 18:14:53 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:24.828 18:14:53 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.828 18:14:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:24.828 18:14:53 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.829 18:14:53 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:24.829 18:14:53 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:24.829 18:14:53 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:24.829 00:04:24.829 real 0m0.213s 00:04:24.829 user 0m0.161s 00:04:24.829 sys 0m0.016s 00:04:24.829 18:14:53 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:24.829 18:14:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:24.829 ************************************ 00:04:24.829 END TEST rpc_plugins 00:04:24.829 ************************************ 00:04:24.829 18:14:53 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:24.829 18:14:53 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:24.829 18:14:53 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:24.829 18:14:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.088 ************************************ 00:04:25.088 START TEST rpc_trace_cmd_test 00:04:25.088 ************************************ 00:04:25.088 18:14:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:04:25.088 18:14:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:25.088 18:14:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:25.088 18:14:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:25.088 18:14:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:25.088 18:14:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:25.088 18:14:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:25.088 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1058344", 00:04:25.088 "tpoint_group_mask": "0x8", 00:04:25.088 "iscsi_conn": { 00:04:25.088 "mask": "0x2", 00:04:25.088 "tpoint_mask": "0x0" 00:04:25.088 }, 00:04:25.088 "scsi": { 00:04:25.088 "mask": "0x4", 00:04:25.088 "tpoint_mask": "0x0" 00:04:25.088 }, 00:04:25.088 "bdev": { 00:04:25.088 "mask": "0x8", 00:04:25.088 "tpoint_mask": "0xffffffffffffffff" 00:04:25.088 }, 00:04:25.088 "nvmf_rdma": { 00:04:25.088 "mask": "0x10", 00:04:25.088 "tpoint_mask": "0x0" 00:04:25.088 }, 00:04:25.088 "nvmf_tcp": { 00:04:25.088 "mask": "0x20", 00:04:25.088 
"tpoint_mask": "0x0" 00:04:25.088 }, 00:04:25.088 "ftl": { 00:04:25.088 "mask": "0x40", 00:04:25.088 "tpoint_mask": "0x0" 00:04:25.088 }, 00:04:25.088 "blobfs": { 00:04:25.088 "mask": "0x80", 00:04:25.088 "tpoint_mask": "0x0" 00:04:25.088 }, 00:04:25.088 "dsa": { 00:04:25.088 "mask": "0x200", 00:04:25.088 "tpoint_mask": "0x0" 00:04:25.088 }, 00:04:25.088 "thread": { 00:04:25.088 "mask": "0x400", 00:04:25.088 "tpoint_mask": "0x0" 00:04:25.088 }, 00:04:25.088 "nvme_pcie": { 00:04:25.088 "mask": "0x800", 00:04:25.088 "tpoint_mask": "0x0" 00:04:25.088 }, 00:04:25.088 "iaa": { 00:04:25.088 "mask": "0x1000", 00:04:25.088 "tpoint_mask": "0x0" 00:04:25.088 }, 00:04:25.088 "nvme_tcp": { 00:04:25.088 "mask": "0x2000", 00:04:25.088 "tpoint_mask": "0x0" 00:04:25.088 }, 00:04:25.088 "bdev_nvme": { 00:04:25.088 "mask": "0x4000", 00:04:25.088 "tpoint_mask": "0x0" 00:04:25.088 }, 00:04:25.088 "sock": { 00:04:25.088 "mask": "0x8000", 00:04:25.088 "tpoint_mask": "0x0" 00:04:25.088 }, 00:04:25.088 "blob": { 00:04:25.088 "mask": "0x10000", 00:04:25.088 "tpoint_mask": "0x0" 00:04:25.088 }, 00:04:25.088 "bdev_raid": { 00:04:25.088 "mask": "0x20000", 00:04:25.088 "tpoint_mask": "0x0" 00:04:25.088 }, 00:04:25.088 "scheduler": { 00:04:25.088 "mask": "0x40000", 00:04:25.088 "tpoint_mask": "0x0" 00:04:25.088 } 00:04:25.088 }' 00:04:25.088 18:14:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:25.088 18:14:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:25.088 18:14:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:25.088 18:14:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:25.088 18:14:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:25.348 18:14:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:25.348 18:14:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:25.348 18:14:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:25.348 18:14:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:25.348 18:14:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:25.348 00:04:25.348 real 0m0.425s 00:04:25.348 user 0m0.386s 00:04:25.348 sys 0m0.026s 00:04:25.348 18:14:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:25.348 18:14:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:25.348 ************************************ 00:04:25.348 END TEST rpc_trace_cmd_test 00:04:25.348 ************************************ 00:04:25.348 18:14:53 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:25.348 18:14:53 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:25.348 18:14:53 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:25.348 18:14:53 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:25.348 18:14:53 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:25.348 18:14:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.348 ************************************ 00:04:25.348 START TEST rpc_daemon_integrity 00:04:25.348 ************************************ 00:04:25.348 18:14:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:25.348 18:14:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:25.348 18:14:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:25.348 18:14:53 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.348 18:14:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:25.348 18:14:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:25.348 18:14:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:25.607 18:14:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:25.607 18:14:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:25.607 18:14:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:25.607 18:14:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.607 18:14:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:25.607 18:14:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:25.607 18:14:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:25.607 18:14:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:25.607 18:14:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.607 18:14:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:25.607 18:14:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:25.607 { 00:04:25.607 "name": "Malloc2", 00:04:25.607 "aliases": [ 00:04:25.607 "e4ce5312-ed5b-4291-8c43-2bc6bfc91aec" 00:04:25.607 ], 00:04:25.607 "product_name": "Malloc disk", 00:04:25.607 "block_size": 512, 00:04:25.607 "num_blocks": 16384, 00:04:25.607 "uuid": "e4ce5312-ed5b-4291-8c43-2bc6bfc91aec", 00:04:25.607 "assigned_rate_limits": { 00:04:25.607 "rw_ios_per_sec": 0, 00:04:25.607 "rw_mbytes_per_sec": 0, 00:04:25.607 "r_mbytes_per_sec": 0, 00:04:25.607 "w_mbytes_per_sec": 0 00:04:25.607 }, 00:04:25.607 "claimed": false, 00:04:25.607 "zoned": false, 00:04:25.608 "supported_io_types": { 00:04:25.608 "read": true, 00:04:25.608 "write": true, 00:04:25.608 "unmap": true, 00:04:25.608 "flush": true, 00:04:25.608 "reset": true, 00:04:25.608 "nvme_admin": false, 00:04:25.608 "nvme_io": false, 00:04:25.608 "nvme_io_md": false, 00:04:25.608 "write_zeroes": true, 00:04:25.608 "zcopy": true, 00:04:25.608 "get_zone_info": false, 00:04:25.608 "zone_management": false, 00:04:25.608 "zone_append": false, 00:04:25.608 "compare": false, 00:04:25.608 "compare_and_write": false, 00:04:25.608 "abort": true, 00:04:25.608 "seek_hole": false, 00:04:25.608 "seek_data": false, 00:04:25.608 "copy": true, 00:04:25.608 "nvme_iov_md": false 00:04:25.608 }, 00:04:25.608 "memory_domains": [ 00:04:25.608 { 00:04:25.608 "dma_device_id": "system", 00:04:25.608 "dma_device_type": 1 00:04:25.608 }, 00:04:25.608 { 00:04:25.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:25.608 "dma_device_type": 2 00:04:25.608 } 00:04:25.608 ], 00:04:25.608 "driver_specific": {} 00:04:25.608 } 00:04:25.608 ]' 00:04:25.608 18:14:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:25.608 18:14:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:25.608 18:14:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:25.608 18:14:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:25.608 18:14:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.608 [2024-10-08 18:14:54.065402] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:25.608 
[2024-10-08 18:14:54.065519] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:25.608 [2024-10-08 18:14:54.065571] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x18e8d30 00:04:25.608 [2024-10-08 18:14:54.065604] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:25.608 [2024-10-08 18:14:54.068299] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:25.608 [2024-10-08 18:14:54.068364] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:25.608 Passthru0 00:04:25.608 18:14:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:25.608 18:14:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:25.608 18:14:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:25.608 18:14:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.608 18:14:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:25.608 18:14:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:25.608 { 00:04:25.608 "name": "Malloc2", 00:04:25.608 "aliases": [ 00:04:25.608 "e4ce5312-ed5b-4291-8c43-2bc6bfc91aec" 00:04:25.608 ], 00:04:25.608 "product_name": "Malloc disk", 00:04:25.608 "block_size": 512, 00:04:25.608 "num_blocks": 16384, 00:04:25.608 "uuid": "e4ce5312-ed5b-4291-8c43-2bc6bfc91aec", 00:04:25.608 "assigned_rate_limits": { 00:04:25.608 "rw_ios_per_sec": 0, 00:04:25.608 "rw_mbytes_per_sec": 0, 00:04:25.608 "r_mbytes_per_sec": 0, 00:04:25.608 "w_mbytes_per_sec": 0 00:04:25.608 }, 00:04:25.608 "claimed": true, 00:04:25.608 "claim_type": "exclusive_write", 00:04:25.608 "zoned": false, 00:04:25.608 "supported_io_types": { 00:04:25.608 "read": true, 00:04:25.608 "write": true, 00:04:25.608 "unmap": true, 00:04:25.608 "flush": true, 00:04:25.608 "reset": true, 00:04:25.608 "nvme_admin": false, 00:04:25.608 "nvme_io": false, 00:04:25.608 "nvme_io_md": false, 00:04:25.608 "write_zeroes": true, 00:04:25.608 "zcopy": true, 00:04:25.608 "get_zone_info": false, 00:04:25.608 "zone_management": false, 00:04:25.608 "zone_append": false, 00:04:25.608 "compare": false, 00:04:25.608 "compare_and_write": false, 00:04:25.608 "abort": true, 00:04:25.608 "seek_hole": false, 00:04:25.608 "seek_data": false, 00:04:25.608 "copy": true, 00:04:25.608 "nvme_iov_md": false 00:04:25.608 }, 00:04:25.608 "memory_domains": [ 00:04:25.608 { 00:04:25.608 "dma_device_id": "system", 00:04:25.608 "dma_device_type": 1 00:04:25.608 }, 00:04:25.608 { 00:04:25.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:25.608 "dma_device_type": 2 00:04:25.608 } 00:04:25.608 ], 00:04:25.608 "driver_specific": {} 00:04:25.608 }, 00:04:25.608 { 00:04:25.608 "name": "Passthru0", 00:04:25.608 "aliases": [ 00:04:25.608 "1a7123ec-626a-58ef-b373-bb99d02477f5" 00:04:25.608 ], 00:04:25.608 "product_name": "passthru", 00:04:25.608 "block_size": 512, 00:04:25.608 "num_blocks": 16384, 00:04:25.608 "uuid": "1a7123ec-626a-58ef-b373-bb99d02477f5", 00:04:25.608 "assigned_rate_limits": { 00:04:25.608 "rw_ios_per_sec": 0, 00:04:25.608 "rw_mbytes_per_sec": 0, 00:04:25.608 "r_mbytes_per_sec": 0, 00:04:25.608 "w_mbytes_per_sec": 0 00:04:25.608 }, 00:04:25.608 "claimed": false, 00:04:25.608 "zoned": false, 00:04:25.608 "supported_io_types": { 00:04:25.608 "read": true, 00:04:25.608 "write": true, 00:04:25.608 "unmap": true, 00:04:25.608 "flush": true, 00:04:25.608 "reset": true, 
00:04:25.608 "nvme_admin": false, 00:04:25.608 "nvme_io": false, 00:04:25.608 "nvme_io_md": false, 00:04:25.608 "write_zeroes": true, 00:04:25.608 "zcopy": true, 00:04:25.608 "get_zone_info": false, 00:04:25.608 "zone_management": false, 00:04:25.608 "zone_append": false, 00:04:25.608 "compare": false, 00:04:25.608 "compare_and_write": false, 00:04:25.608 "abort": true, 00:04:25.608 "seek_hole": false, 00:04:25.608 "seek_data": false, 00:04:25.608 "copy": true, 00:04:25.608 "nvme_iov_md": false 00:04:25.608 }, 00:04:25.608 "memory_domains": [ 00:04:25.608 { 00:04:25.608 "dma_device_id": "system", 00:04:25.608 "dma_device_type": 1 00:04:25.608 }, 00:04:25.608 { 00:04:25.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:25.608 "dma_device_type": 2 00:04:25.608 } 00:04:25.608 ], 00:04:25.608 "driver_specific": { 00:04:25.608 "passthru": { 00:04:25.608 "name": "Passthru0", 00:04:25.608 "base_bdev_name": "Malloc2" 00:04:25.608 } 00:04:25.608 } 00:04:25.608 } 00:04:25.608 ]' 00:04:25.608 18:14:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:25.868 18:14:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:25.868 18:14:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:25.868 18:14:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:25.868 18:14:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.868 18:14:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:25.868 18:14:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:25.868 18:14:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:25.868 18:14:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.868 18:14:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:25.868 18:14:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:25.868 18:14:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:25.868 18:14:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.868 18:14:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:25.868 18:14:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:25.868 18:14:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:25.868 18:14:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:25.868 00:04:25.868 real 0m0.426s 00:04:25.868 user 0m0.320s 00:04:25.868 sys 0m0.034s 00:04:25.868 18:14:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:25.868 18:14:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.868 ************************************ 00:04:25.868 END TEST rpc_daemon_integrity 00:04:25.868 ************************************ 00:04:25.868 18:14:54 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:25.868 18:14:54 rpc -- rpc/rpc.sh@84 -- # killprocess 1058344 00:04:25.868 18:14:54 rpc -- common/autotest_common.sh@950 -- # '[' -z 1058344 ']' 00:04:25.868 18:14:54 rpc -- common/autotest_common.sh@954 -- # kill -0 1058344 00:04:25.868 18:14:54 rpc -- common/autotest_common.sh@955 -- # uname 00:04:25.868 18:14:54 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:25.868 18:14:54 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1058344 
00:04:25.868 18:14:54 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:25.868 18:14:54 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:25.868 18:14:54 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1058344' 00:04:25.868 killing process with pid 1058344 00:04:25.868 18:14:54 rpc -- common/autotest_common.sh@969 -- # kill 1058344 00:04:25.868 18:14:54 rpc -- common/autotest_common.sh@974 -- # wait 1058344 00:04:26.806 00:04:26.806 real 0m3.511s 00:04:26.806 user 0m4.710s 00:04:26.806 sys 0m0.987s 00:04:26.806 18:14:55 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:26.806 18:14:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.806 ************************************ 00:04:26.806 END TEST rpc 00:04:26.806 ************************************ 00:04:26.806 18:14:55 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:26.807 18:14:55 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:26.807 18:14:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:26.807 18:14:55 -- common/autotest_common.sh@10 -- # set +x 00:04:26.807 ************************************ 00:04:26.807 START TEST skip_rpc 00:04:26.807 ************************************ 00:04:26.807 18:14:55 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:26.807 * Looking for test storage... 00:04:26.807 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:26.807 18:14:55 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:26.807 18:14:55 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:26.807 18:14:55 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:26.807 18:14:55 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:26.807 18:14:55 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:26.807 18:14:55 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:26.807 18:14:55 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:26.807 18:14:55 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:26.807 18:14:55 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:26.807 18:14:55 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:26.807 18:14:55 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:26.807 18:14:55 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:26.807 18:14:55 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:26.807 18:14:55 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:26.807 18:14:55 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:26.807 18:14:55 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:26.807 18:14:55 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:26.807 18:14:55 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:26.807 18:14:55 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:26.807 18:14:55 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:26.807 18:14:55 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:26.807 18:14:55 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:26.807 18:14:55 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:26.807 18:14:55 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:26.807 18:14:55 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:26.807 18:14:55 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:26.807 18:14:55 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:26.807 18:14:55 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:26.807 18:14:55 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:26.807 18:14:55 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:26.807 18:14:55 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:26.807 18:14:55 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:26.807 18:14:55 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:26.807 18:14:55 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:26.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.807 --rc genhtml_branch_coverage=1 00:04:26.807 --rc genhtml_function_coverage=1 00:04:26.807 --rc genhtml_legend=1 00:04:26.807 --rc geninfo_all_blocks=1 00:04:26.807 --rc geninfo_unexecuted_blocks=1 00:04:26.807 00:04:26.807 ' 00:04:26.807 18:14:55 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:26.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.807 --rc genhtml_branch_coverage=1 00:04:26.807 --rc genhtml_function_coverage=1 00:04:26.807 --rc genhtml_legend=1 00:04:26.807 --rc geninfo_all_blocks=1 00:04:26.807 --rc geninfo_unexecuted_blocks=1 00:04:26.807 00:04:26.807 ' 00:04:26.807 18:14:55 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:26.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.807 --rc genhtml_branch_coverage=1 00:04:26.807 --rc genhtml_function_coverage=1 00:04:26.807 --rc genhtml_legend=1 00:04:26.807 --rc geninfo_all_blocks=1 00:04:26.807 --rc geninfo_unexecuted_blocks=1 00:04:26.807 00:04:26.807 ' 00:04:26.807 18:14:55 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:26.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.807 --rc genhtml_branch_coverage=1 00:04:26.807 --rc genhtml_function_coverage=1 00:04:26.807 --rc genhtml_legend=1 00:04:26.807 --rc geninfo_all_blocks=1 00:04:26.807 --rc geninfo_unexecuted_blocks=1 00:04:26.807 00:04:26.807 ' 00:04:26.807 18:14:55 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:26.807 18:14:55 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:26.807 18:14:55 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:26.807 18:14:55 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:26.807 18:14:55 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:26.807 18:14:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.807 ************************************ 00:04:26.807 START TEST skip_rpc 00:04:26.807 ************************************ 00:04:26.807 18:14:55 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:04:26.807 
18:14:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1058933 00:04:26.807 18:14:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:26.807 18:14:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:26.807 18:14:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:27.065 [2024-10-08 18:14:55.390482] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:04:27.065 [2024-10-08 18:14:55.390585] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1058933 ] 00:04:27.065 [2024-10-08 18:14:55.506073] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.325 [2024-10-08 18:14:55.728421] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.595 18:15:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:32.595 18:15:00 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:32.595 18:15:00 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:32.595 18:15:00 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:32.595 18:15:00 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:32.595 18:15:00 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:32.595 18:15:00 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:32.595 18:15:00 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:32.595 18:15:00 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:32.595 18:15:00 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.595 18:15:00 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:32.595 18:15:00 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:32.595 18:15:00 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:32.595 18:15:00 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:32.595 18:15:00 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:32.595 18:15:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:32.595 18:15:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1058933 00:04:32.595 18:15:00 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 1058933 ']' 00:04:32.595 18:15:00 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 1058933 00:04:32.595 18:15:00 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:04:32.595 18:15:00 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:32.595 18:15:00 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1058933 00:04:32.595 18:15:00 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:32.595 18:15:00 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:32.595 18:15:00 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1058933' 00:04:32.595 killing process with pid 1058933 00:04:32.595 18:15:00 
skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 1058933 00:04:32.595 18:15:00 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 1058933 00:04:32.595 00:04:32.595 real 0m5.748s 00:04:32.595 user 0m5.201s 00:04:32.595 sys 0m0.572s 00:04:32.595 18:15:01 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:32.595 18:15:01 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.595 ************************************ 00:04:32.595 END TEST skip_rpc 00:04:32.595 ************************************ 00:04:32.595 18:15:01 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:32.595 18:15:01 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:32.595 18:15:01 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:32.595 18:15:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.595 ************************************ 00:04:32.595 START TEST skip_rpc_with_json 00:04:32.595 ************************************ 00:04:32.595 18:15:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:04:32.595 18:15:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:32.595 18:15:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1059679 00:04:32.595 18:15:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:32.595 18:15:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:32.596 18:15:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1059679 00:04:32.596 18:15:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 1059679 ']' 00:04:32.596 18:15:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:32.596 18:15:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:32.596 18:15:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:32.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:32.596 18:15:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:32.596 18:15:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:32.855 [2024-10-08 18:15:01.198970] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
00:04:32.855 [2024-10-08 18:15:01.199075] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1059679 ] 00:04:32.855 [2024-10-08 18:15:01.305309] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.114 [2024-10-08 18:15:01.527241] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.682 18:15:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:33.682 18:15:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:04:33.682 18:15:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:33.682 18:15:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:33.682 18:15:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:33.682 [2024-10-08 18:15:02.008910] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:33.682 request: 00:04:33.682 { 00:04:33.682 "trtype": "tcp", 00:04:33.682 "method": "nvmf_get_transports", 00:04:33.682 "req_id": 1 00:04:33.682 } 00:04:33.682 Got JSON-RPC error response 00:04:33.682 response: 00:04:33.682 { 00:04:33.682 "code": -19, 00:04:33.682 "message": "No such device" 00:04:33.682 } 00:04:33.682 18:15:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:33.682 18:15:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:33.682 18:15:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:33.682 18:15:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:33.682 [2024-10-08 18:15:02.021206] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:33.682 18:15:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:33.682 18:15:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:33.682 18:15:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:33.682 18:15:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:33.682 18:15:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:33.682 18:15:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:33.957 { 00:04:33.957 "subsystems": [ 00:04:33.957 { 00:04:33.957 "subsystem": "fsdev", 00:04:33.957 "config": [ 00:04:33.957 { 00:04:33.957 "method": "fsdev_set_opts", 00:04:33.957 "params": { 00:04:33.957 "fsdev_io_pool_size": 65535, 00:04:33.957 "fsdev_io_cache_size": 256 00:04:33.957 } 00:04:33.957 } 00:04:33.957 ] 00:04:33.957 }, 00:04:33.957 { 00:04:33.957 "subsystem": "vfio_user_target", 00:04:33.957 "config": null 00:04:33.957 }, 00:04:33.957 { 00:04:33.957 "subsystem": "keyring", 00:04:33.957 "config": [] 00:04:33.957 }, 00:04:33.957 { 00:04:33.957 "subsystem": "iobuf", 00:04:33.957 "config": [ 00:04:33.957 { 00:04:33.957 "method": "iobuf_set_options", 00:04:33.957 "params": { 00:04:33.957 "small_pool_count": 8192, 00:04:33.957 "large_pool_count": 1024, 00:04:33.957 "small_bufsize": 8192, 00:04:33.957 "large_bufsize": 135168 00:04:33.957 } 00:04:33.957 } 00:04:33.957 ] 00:04:33.957 }, 00:04:33.957 { 
00:04:33.957 "subsystem": "sock", 00:04:33.957 "config": [ 00:04:33.957 { 00:04:33.957 "method": "sock_set_default_impl", 00:04:33.957 "params": { 00:04:33.957 "impl_name": "posix" 00:04:33.957 } 00:04:33.957 }, 00:04:33.957 { 00:04:33.957 "method": "sock_impl_set_options", 00:04:33.957 "params": { 00:04:33.957 "impl_name": "ssl", 00:04:33.957 "recv_buf_size": 4096, 00:04:33.957 "send_buf_size": 4096, 00:04:33.957 "enable_recv_pipe": true, 00:04:33.957 "enable_quickack": false, 00:04:33.957 "enable_placement_id": 0, 00:04:33.957 "enable_zerocopy_send_server": true, 00:04:33.957 "enable_zerocopy_send_client": false, 00:04:33.957 "zerocopy_threshold": 0, 00:04:33.957 "tls_version": 0, 00:04:33.957 "enable_ktls": false 00:04:33.957 } 00:04:33.957 }, 00:04:33.957 { 00:04:33.957 "method": "sock_impl_set_options", 00:04:33.957 "params": { 00:04:33.957 "impl_name": "posix", 00:04:33.957 "recv_buf_size": 2097152, 00:04:33.957 "send_buf_size": 2097152, 00:04:33.957 "enable_recv_pipe": true, 00:04:33.957 "enable_quickack": false, 00:04:33.957 "enable_placement_id": 0, 00:04:33.957 "enable_zerocopy_send_server": true, 00:04:33.957 "enable_zerocopy_send_client": false, 00:04:33.957 "zerocopy_threshold": 0, 00:04:33.957 "tls_version": 0, 00:04:33.957 "enable_ktls": false 00:04:33.957 } 00:04:33.957 } 00:04:33.957 ] 00:04:33.957 }, 00:04:33.957 { 00:04:33.957 "subsystem": "vmd", 00:04:33.957 "config": [] 00:04:33.957 }, 00:04:33.957 { 00:04:33.957 "subsystem": "accel", 00:04:33.957 "config": [ 00:04:33.957 { 00:04:33.957 "method": "accel_set_options", 00:04:33.957 "params": { 00:04:33.957 "small_cache_size": 128, 00:04:33.957 "large_cache_size": 16, 00:04:33.957 "task_count": 2048, 00:04:33.957 "sequence_count": 2048, 00:04:33.957 "buf_count": 2048 00:04:33.957 } 00:04:33.957 } 00:04:33.957 ] 00:04:33.957 }, 00:04:33.957 { 00:04:33.957 "subsystem": "bdev", 00:04:33.957 "config": [ 00:04:33.957 { 00:04:33.957 "method": "bdev_set_options", 00:04:33.957 "params": { 00:04:33.957 "bdev_io_pool_size": 65535, 00:04:33.957 "bdev_io_cache_size": 256, 00:04:33.957 "bdev_auto_examine": true, 00:04:33.957 "iobuf_small_cache_size": 128, 00:04:33.957 "iobuf_large_cache_size": 16 00:04:33.957 } 00:04:33.957 }, 00:04:33.957 { 00:04:33.957 "method": "bdev_raid_set_options", 00:04:33.957 "params": { 00:04:33.957 "process_window_size_kb": 1024, 00:04:33.957 "process_max_bandwidth_mb_sec": 0 00:04:33.957 } 00:04:33.957 }, 00:04:33.957 { 00:04:33.957 "method": "bdev_iscsi_set_options", 00:04:33.957 "params": { 00:04:33.957 "timeout_sec": 30 00:04:33.957 } 00:04:33.957 }, 00:04:33.957 { 00:04:33.957 "method": "bdev_nvme_set_options", 00:04:33.957 "params": { 00:04:33.957 "action_on_timeout": "none", 00:04:33.957 "timeout_us": 0, 00:04:33.957 "timeout_admin_us": 0, 00:04:33.957 "keep_alive_timeout_ms": 10000, 00:04:33.957 "arbitration_burst": 0, 00:04:33.957 "low_priority_weight": 0, 00:04:33.957 "medium_priority_weight": 0, 00:04:33.957 "high_priority_weight": 0, 00:04:33.957 "nvme_adminq_poll_period_us": 10000, 00:04:33.957 "nvme_ioq_poll_period_us": 0, 00:04:33.957 "io_queue_requests": 0, 00:04:33.957 "delay_cmd_submit": true, 00:04:33.957 "transport_retry_count": 4, 00:04:33.957 "bdev_retry_count": 3, 00:04:33.957 "transport_ack_timeout": 0, 00:04:33.957 "ctrlr_loss_timeout_sec": 0, 00:04:33.957 "reconnect_delay_sec": 0, 00:04:33.957 "fast_io_fail_timeout_sec": 0, 00:04:33.957 "disable_auto_failback": false, 00:04:33.957 "generate_uuids": false, 00:04:33.957 "transport_tos": 0, 00:04:33.957 "nvme_error_stat": false, 
00:04:33.957 "rdma_srq_size": 0, 00:04:33.957 "io_path_stat": false, 00:04:33.957 "allow_accel_sequence": false, 00:04:33.957 "rdma_max_cq_size": 0, 00:04:33.957 "rdma_cm_event_timeout_ms": 0, 00:04:33.957 "dhchap_digests": [ 00:04:33.957 "sha256", 00:04:33.957 "sha384", 00:04:33.957 "sha512" 00:04:33.957 ], 00:04:33.957 "dhchap_dhgroups": [ 00:04:33.957 "null", 00:04:33.957 "ffdhe2048", 00:04:33.957 "ffdhe3072", 00:04:33.957 "ffdhe4096", 00:04:33.957 "ffdhe6144", 00:04:33.957 "ffdhe8192" 00:04:33.957 ] 00:04:33.957 } 00:04:33.957 }, 00:04:33.957 { 00:04:33.957 "method": "bdev_nvme_set_hotplug", 00:04:33.957 "params": { 00:04:33.957 "period_us": 100000, 00:04:33.957 "enable": false 00:04:33.957 } 00:04:33.957 }, 00:04:33.957 { 00:04:33.957 "method": "bdev_wait_for_examine" 00:04:33.957 } 00:04:33.957 ] 00:04:33.957 }, 00:04:33.957 { 00:04:33.957 "subsystem": "scsi", 00:04:33.957 "config": null 00:04:33.957 }, 00:04:33.957 { 00:04:33.957 "subsystem": "scheduler", 00:04:33.957 "config": [ 00:04:33.957 { 00:04:33.957 "method": "framework_set_scheduler", 00:04:33.957 "params": { 00:04:33.957 "name": "static" 00:04:33.957 } 00:04:33.957 } 00:04:33.957 ] 00:04:33.957 }, 00:04:33.957 { 00:04:33.957 "subsystem": "vhost_scsi", 00:04:33.957 "config": [] 00:04:33.957 }, 00:04:33.957 { 00:04:33.957 "subsystem": "vhost_blk", 00:04:33.957 "config": [] 00:04:33.957 }, 00:04:33.957 { 00:04:33.957 "subsystem": "ublk", 00:04:33.957 "config": [] 00:04:33.957 }, 00:04:33.957 { 00:04:33.957 "subsystem": "nbd", 00:04:33.957 "config": [] 00:04:33.957 }, 00:04:33.957 { 00:04:33.957 "subsystem": "nvmf", 00:04:33.957 "config": [ 00:04:33.957 { 00:04:33.957 "method": "nvmf_set_config", 00:04:33.957 "params": { 00:04:33.957 "discovery_filter": "match_any", 00:04:33.957 "admin_cmd_passthru": { 00:04:33.957 "identify_ctrlr": false 00:04:33.957 }, 00:04:33.957 "dhchap_digests": [ 00:04:33.957 "sha256", 00:04:33.957 "sha384", 00:04:33.957 "sha512" 00:04:33.957 ], 00:04:33.957 "dhchap_dhgroups": [ 00:04:33.957 "null", 00:04:33.957 "ffdhe2048", 00:04:33.957 "ffdhe3072", 00:04:33.957 "ffdhe4096", 00:04:33.957 "ffdhe6144", 00:04:33.957 "ffdhe8192" 00:04:33.957 ] 00:04:33.957 } 00:04:33.957 }, 00:04:33.957 { 00:04:33.957 "method": "nvmf_set_max_subsystems", 00:04:33.957 "params": { 00:04:33.957 "max_subsystems": 1024 00:04:33.957 } 00:04:33.957 }, 00:04:33.957 { 00:04:33.958 "method": "nvmf_set_crdt", 00:04:33.958 "params": { 00:04:33.958 "crdt1": 0, 00:04:33.958 "crdt2": 0, 00:04:33.958 "crdt3": 0 00:04:33.958 } 00:04:33.958 }, 00:04:33.958 { 00:04:33.958 "method": "nvmf_create_transport", 00:04:33.958 "params": { 00:04:33.958 "trtype": "TCP", 00:04:33.958 "max_queue_depth": 128, 00:04:33.958 "max_io_qpairs_per_ctrlr": 127, 00:04:33.958 "in_capsule_data_size": 4096, 00:04:33.958 "max_io_size": 131072, 00:04:33.958 "io_unit_size": 131072, 00:04:33.958 "max_aq_depth": 128, 00:04:33.958 "num_shared_buffers": 511, 00:04:33.958 "buf_cache_size": 4294967295, 00:04:33.958 "dif_insert_or_strip": false, 00:04:33.958 "zcopy": false, 00:04:33.958 "c2h_success": true, 00:04:33.958 "sock_priority": 0, 00:04:33.958 "abort_timeout_sec": 1, 00:04:33.958 "ack_timeout": 0, 00:04:33.958 "data_wr_pool_size": 0 00:04:33.958 } 00:04:33.958 } 00:04:33.958 ] 00:04:33.958 }, 00:04:33.958 { 00:04:33.958 "subsystem": "iscsi", 00:04:33.958 "config": [ 00:04:33.958 { 00:04:33.958 "method": "iscsi_set_options", 00:04:33.958 "params": { 00:04:33.958 "node_base": "iqn.2016-06.io.spdk", 00:04:33.958 "max_sessions": 128, 00:04:33.958 
"max_connections_per_session": 2, 00:04:33.958 "max_queue_depth": 64, 00:04:33.958 "default_time2wait": 2, 00:04:33.958 "default_time2retain": 20, 00:04:33.958 "first_burst_length": 8192, 00:04:33.958 "immediate_data": true, 00:04:33.958 "allow_duplicated_isid": false, 00:04:33.958 "error_recovery_level": 0, 00:04:33.958 "nop_timeout": 60, 00:04:33.958 "nop_in_interval": 30, 00:04:33.958 "disable_chap": false, 00:04:33.958 "require_chap": false, 00:04:33.958 "mutual_chap": false, 00:04:33.958 "chap_group": 0, 00:04:33.958 "max_large_datain_per_connection": 64, 00:04:33.958 "max_r2t_per_connection": 4, 00:04:33.958 "pdu_pool_size": 36864, 00:04:33.958 "immediate_data_pool_size": 16384, 00:04:33.958 "data_out_pool_size": 2048 00:04:33.958 } 00:04:33.958 } 00:04:33.958 ] 00:04:33.958 } 00:04:33.958 ] 00:04:33.958 } 00:04:33.958 18:15:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:33.958 18:15:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1059679 00:04:33.958 18:15:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 1059679 ']' 00:04:33.958 18:15:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 1059679 00:04:33.958 18:15:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:33.958 18:15:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:33.958 18:15:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1059679 00:04:33.958 18:15:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:33.958 18:15:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:33.958 18:15:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1059679' 00:04:33.958 killing process with pid 1059679 00:04:33.958 18:15:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 1059679 00:04:33.958 18:15:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 1059679 00:04:34.573 18:15:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1059892 00:04:34.573 18:15:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:34.573 18:15:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:39.856 18:15:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1059892 00:04:39.856 18:15:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 1059892 ']' 00:04:39.856 18:15:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 1059892 00:04:39.856 18:15:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:39.856 18:15:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:39.856 18:15:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1059892 00:04:39.856 18:15:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:39.856 18:15:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:39.856 18:15:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 1059892' 00:04:39.856 killing process with pid 1059892 00:04:39.856 18:15:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 1059892 00:04:39.856 18:15:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 1059892 00:04:40.115 18:15:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:40.115 18:15:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:40.115 00:04:40.115 real 0m7.510s 00:04:40.115 user 0m7.057s 00:04:40.115 sys 0m1.204s 00:04:40.115 18:15:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:40.115 18:15:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:40.115 ************************************ 00:04:40.115 END TEST skip_rpc_with_json 00:04:40.115 ************************************ 00:04:40.373 18:15:08 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:40.373 18:15:08 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:40.373 18:15:08 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:40.373 18:15:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.373 ************************************ 00:04:40.373 START TEST skip_rpc_with_delay 00:04:40.373 ************************************ 00:04:40.373 18:15:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:04:40.373 18:15:08 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:40.373 18:15:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:40.373 18:15:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:40.373 18:15:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:40.373 18:15:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:40.373 18:15:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:40.373 18:15:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:40.373 18:15:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:40.373 18:15:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:40.373 18:15:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:40.373 18:15:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:40.374 18:15:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:40.374 [2024-10-08 
18:15:08.828900] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:40.374 [2024-10-08 18:15:08.829153] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:40.374 18:15:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:40.374 18:15:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:40.374 18:15:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:40.374 18:15:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:40.374 00:04:40.374 real 0m0.159s 00:04:40.374 user 0m0.112s 00:04:40.374 sys 0m0.045s 00:04:40.374 18:15:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:40.374 18:15:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:40.374 ************************************ 00:04:40.374 END TEST skip_rpc_with_delay 00:04:40.374 ************************************ 00:04:40.374 18:15:08 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:40.374 18:15:08 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:40.374 18:15:08 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:40.374 18:15:08 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:40.374 18:15:08 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:40.374 18:15:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.632 ************************************ 00:04:40.632 START TEST exit_on_failed_rpc_init 00:04:40.632 ************************************ 00:04:40.632 18:15:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:04:40.632 18:15:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1060928 00:04:40.632 18:15:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:40.632 18:15:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1060928 00:04:40.632 18:15:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 1060928 ']' 00:04:40.632 18:15:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:40.632 18:15:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:40.632 18:15:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:40.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:40.632 18:15:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:40.632 18:15:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:40.632 [2024-10-08 18:15:08.998154] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
00:04:40.632 [2024-10-08 18:15:08.998239] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1060928 ] 00:04:40.632 [2024-10-08 18:15:09.103528] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.892 [2024-10-08 18:15:09.292633] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.828 18:15:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:41.828 18:15:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:04:41.828 18:15:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:41.828 18:15:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:41.828 18:15:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:41.828 18:15:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:41.828 18:15:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:41.828 18:15:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:41.828 18:15:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:41.828 18:15:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:41.828 18:15:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:41.828 18:15:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:41.828 18:15:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:41.828 18:15:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:41.828 18:15:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:42.088 [2024-10-08 18:15:10.405105] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:04:42.088 [2024-10-08 18:15:10.405275] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1061249 ] 00:04:42.088 [2024-10-08 18:15:10.579060] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.348 [2024-10-08 18:15:10.829205] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:04:42.348 [2024-10-08 18:15:10.829413] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
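The "RPC Unix domain socket path /var/tmp/spdk.sock in use" error above is the failure that exit_on_failed_rpc_init deliberately provokes: a second spdk_tgt (core mask 0x2) is started while the first instance still owns the default RPC socket, and the "Unable to start RPC service" / spdk_app_stop messages that follow complete the expected non-zero exit. A minimal sketch of the same collision outside the harness, assuming a standard SPDK build tree and a plain sleep in place of the harness's waitforlisten helper:

  SPDK_BIN=./build/bin/spdk_tgt

  $SPDK_BIN -m 0x1 &            # first instance binds the default /var/tmp/spdk.sock
  first=$!
  sleep 2                       # crude startup wait (assumption; the harness polls the socket)

  $SPDK_BIN -m 0x2              # second instance hits "socket in use" and exits non-zero
  echo "second instance exited with $?"

  # Giving the second target its own socket (-r /var/tmp/spdk2.sock) would avoid the clash.
  kill -SIGINT $first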
00:04:42.348 [2024-10-08 18:15:10.829462] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:42.348 [2024-10-08 18:15:10.829492] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:42.608 18:15:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:42.608 18:15:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:42.608 18:15:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:42.608 18:15:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:42.608 18:15:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:42.608 18:15:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:42.608 18:15:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:42.608 18:15:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1060928 00:04:42.608 18:15:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 1060928 ']' 00:04:42.608 18:15:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 1060928 00:04:42.608 18:15:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:04:42.608 18:15:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:42.608 18:15:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1060928 00:04:42.608 18:15:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:42.608 18:15:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:42.608 18:15:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1060928' 00:04:42.608 killing process with pid 1060928 00:04:42.608 18:15:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 1060928 00:04:42.608 18:15:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 1060928 00:04:43.546 00:04:43.546 real 0m2.901s 00:04:43.546 user 0m3.694s 00:04:43.546 sys 0m0.837s 00:04:43.546 18:15:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:43.546 18:15:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:43.546 ************************************ 00:04:43.546 END TEST exit_on_failed_rpc_init 00:04:43.546 ************************************ 00:04:43.546 18:15:11 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:43.546 00:04:43.546 real 0m16.774s 00:04:43.546 user 0m16.290s 00:04:43.546 sys 0m2.912s 00:04:43.546 18:15:11 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:43.546 18:15:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.546 ************************************ 00:04:43.546 END TEST skip_rpc 00:04:43.546 ************************************ 00:04:43.546 18:15:11 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:43.546 18:15:11 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:43.547 18:15:11 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:43.547 18:15:11 -- 
common/autotest_common.sh@10 -- # set +x 00:04:43.547 ************************************ 00:04:43.547 START TEST rpc_client 00:04:43.547 ************************************ 00:04:43.547 18:15:11 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:43.547 * Looking for test storage... 00:04:43.547 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:43.547 18:15:12 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:43.547 18:15:12 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:04:43.547 18:15:12 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:43.807 18:15:12 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:43.807 18:15:12 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:43.807 18:15:12 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:43.807 18:15:12 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:43.807 18:15:12 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:43.807 18:15:12 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:43.807 18:15:12 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:43.807 18:15:12 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:43.807 18:15:12 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:43.807 18:15:12 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:43.807 18:15:12 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:43.807 18:15:12 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:43.807 18:15:12 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:43.807 18:15:12 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:43.807 18:15:12 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:43.807 18:15:12 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:43.807 18:15:12 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:43.807 18:15:12 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:43.807 18:15:12 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:43.807 18:15:12 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:43.807 18:15:12 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:43.807 18:15:12 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:43.807 18:15:12 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:43.807 18:15:12 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:43.807 18:15:12 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:43.807 18:15:12 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:43.807 18:15:12 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:43.808 18:15:12 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:43.808 18:15:12 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:43.808 18:15:12 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:43.808 18:15:12 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:43.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.808 --rc genhtml_branch_coverage=1 00:04:43.808 --rc genhtml_function_coverage=1 00:04:43.808 --rc genhtml_legend=1 00:04:43.808 --rc geninfo_all_blocks=1 00:04:43.808 --rc geninfo_unexecuted_blocks=1 00:04:43.808 00:04:43.808 ' 00:04:43.808 18:15:12 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:43.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.808 --rc genhtml_branch_coverage=1 00:04:43.808 --rc genhtml_function_coverage=1 00:04:43.808 --rc genhtml_legend=1 00:04:43.808 --rc geninfo_all_blocks=1 00:04:43.808 --rc geninfo_unexecuted_blocks=1 00:04:43.808 00:04:43.808 ' 00:04:43.808 18:15:12 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:43.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.808 --rc genhtml_branch_coverage=1 00:04:43.808 --rc genhtml_function_coverage=1 00:04:43.808 --rc genhtml_legend=1 00:04:43.808 --rc geninfo_all_blocks=1 00:04:43.808 --rc geninfo_unexecuted_blocks=1 00:04:43.808 00:04:43.808 ' 00:04:43.808 18:15:12 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:43.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.808 --rc genhtml_branch_coverage=1 00:04:43.808 --rc genhtml_function_coverage=1 00:04:43.808 --rc genhtml_legend=1 00:04:43.808 --rc geninfo_all_blocks=1 00:04:43.808 --rc geninfo_unexecuted_blocks=1 00:04:43.808 00:04:43.808 ' 00:04:43.808 18:15:12 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:43.808 OK 00:04:43.808 18:15:12 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:43.808 00:04:43.808 real 0m0.301s 00:04:43.808 user 0m0.218s 00:04:43.808 sys 0m0.099s 00:04:43.808 18:15:12 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:43.808 18:15:12 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:43.808 ************************************ 00:04:43.808 END TEST rpc_client 00:04:43.808 ************************************ 00:04:43.808 18:15:12 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
00:04:43.808 18:15:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:43.808 18:15:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:43.808 18:15:12 -- common/autotest_common.sh@10 -- # set +x 00:04:43.808 ************************************ 00:04:43.808 START TEST json_config 00:04:43.808 ************************************ 00:04:43.808 18:15:12 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:44.067 18:15:12 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:44.067 18:15:12 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:04:44.067 18:15:12 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:44.067 18:15:12 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:44.067 18:15:12 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:44.067 18:15:12 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:44.067 18:15:12 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:44.067 18:15:12 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:44.067 18:15:12 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:44.067 18:15:12 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:44.067 18:15:12 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:44.067 18:15:12 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:44.067 18:15:12 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:44.067 18:15:12 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:44.067 18:15:12 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:44.067 18:15:12 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:44.067 18:15:12 json_config -- scripts/common.sh@345 -- # : 1 00:04:44.067 18:15:12 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:44.067 18:15:12 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:44.067 18:15:12 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:44.067 18:15:12 json_config -- scripts/common.sh@353 -- # local d=1 00:04:44.067 18:15:12 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:44.067 18:15:12 json_config -- scripts/common.sh@355 -- # echo 1 00:04:44.067 18:15:12 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:44.067 18:15:12 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:44.067 18:15:12 json_config -- scripts/common.sh@353 -- # local d=2 00:04:44.067 18:15:12 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:44.067 18:15:12 json_config -- scripts/common.sh@355 -- # echo 2 00:04:44.068 18:15:12 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:44.068 18:15:12 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:44.068 18:15:12 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:44.068 18:15:12 json_config -- scripts/common.sh@368 -- # return 0 00:04:44.068 18:15:12 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:44.068 18:15:12 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:44.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.068 --rc genhtml_branch_coverage=1 00:04:44.068 --rc genhtml_function_coverage=1 00:04:44.068 --rc genhtml_legend=1 00:04:44.068 --rc geninfo_all_blocks=1 00:04:44.068 --rc geninfo_unexecuted_blocks=1 00:04:44.068 00:04:44.068 ' 00:04:44.068 18:15:12 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:44.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.068 --rc genhtml_branch_coverage=1 00:04:44.068 --rc genhtml_function_coverage=1 00:04:44.068 --rc genhtml_legend=1 00:04:44.068 --rc geninfo_all_blocks=1 00:04:44.068 --rc geninfo_unexecuted_blocks=1 00:04:44.068 00:04:44.068 ' 00:04:44.068 18:15:12 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:44.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.068 --rc genhtml_branch_coverage=1 00:04:44.068 --rc genhtml_function_coverage=1 00:04:44.068 --rc genhtml_legend=1 00:04:44.068 --rc geninfo_all_blocks=1 00:04:44.068 --rc geninfo_unexecuted_blocks=1 00:04:44.068 00:04:44.068 ' 00:04:44.068 18:15:12 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:44.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.068 --rc genhtml_branch_coverage=1 00:04:44.068 --rc genhtml_function_coverage=1 00:04:44.068 --rc genhtml_legend=1 00:04:44.068 --rc geninfo_all_blocks=1 00:04:44.068 --rc geninfo_unexecuted_blocks=1 00:04:44.068 00:04:44.068 ' 00:04:44.068 18:15:12 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:44.068 18:15:12 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:44.068 18:15:12 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:44.068 18:15:12 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:44.068 18:15:12 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:44.068 18:15:12 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:44.068 18:15:12 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:44.068 18:15:12 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:44.068 18:15:12 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:04:44.068 18:15:12 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:44.068 18:15:12 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:44.068 18:15:12 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:44.329 18:15:12 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:04:44.329 18:15:12 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:04:44.329 18:15:12 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:44.329 18:15:12 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:44.329 18:15:12 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:44.329 18:15:12 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:44.329 18:15:12 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:44.329 18:15:12 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:44.329 18:15:12 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:44.329 18:15:12 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:44.329 18:15:12 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:44.329 18:15:12 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.329 18:15:12 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.329 18:15:12 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.329 18:15:12 json_config -- paths/export.sh@5 -- # export PATH 00:04:44.329 18:15:12 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.329 18:15:12 json_config -- nvmf/common.sh@51 -- # : 0 00:04:44.329 18:15:12 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:44.329 18:15:12 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:04:44.329 18:15:12 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:44.329 18:15:12 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:44.329 18:15:12 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:44.329 18:15:12 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:44.329 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:44.329 18:15:12 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:44.329 18:15:12 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:44.329 18:15:12 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:44.330 18:15:12 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:44.330 18:15:12 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:44.330 18:15:12 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:44.330 18:15:12 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:44.330 18:15:12 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:44.330 18:15:12 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:44.330 18:15:12 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:44.330 18:15:12 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:44.330 18:15:12 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:44.330 18:15:12 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:44.330 18:15:12 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:44.330 18:15:12 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:44.330 18:15:12 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:44.330 18:15:12 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:44.330 18:15:12 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:44.330 18:15:12 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:44.330 INFO: JSON configuration test init 00:04:44.330 18:15:12 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:44.330 18:15:12 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:44.330 18:15:12 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:44.330 18:15:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:44.330 18:15:12 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:44.330 18:15:12 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:44.330 18:15:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:44.330 18:15:12 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:44.330 18:15:12 json_config -- 
json_config/common.sh@9 -- # local app=target 00:04:44.330 18:15:12 json_config -- json_config/common.sh@10 -- # shift 00:04:44.330 18:15:12 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:44.330 18:15:12 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:44.330 18:15:12 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:44.330 18:15:12 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:44.330 18:15:12 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:44.330 18:15:12 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1061659 00:04:44.330 18:15:12 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:44.330 Waiting for target to run... 00:04:44.330 18:15:12 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:44.330 18:15:12 json_config -- json_config/common.sh@25 -- # waitforlisten 1061659 /var/tmp/spdk_tgt.sock 00:04:44.330 18:15:12 json_config -- common/autotest_common.sh@831 -- # '[' -z 1061659 ']' 00:04:44.330 18:15:12 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:44.330 18:15:12 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:44.330 18:15:12 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:44.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:44.330 18:15:12 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:44.330 18:15:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:44.330 [2024-10-08 18:15:12.758631] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
00:04:44.330 [2024-10-08 18:15:12.758780] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1061659 ] 00:04:44.901 [2024-10-08 18:15:13.406217] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.159 [2024-10-08 18:15:13.579773] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.096 18:15:14 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:46.096 18:15:14 json_config -- common/autotest_common.sh@864 -- # return 0 00:04:46.096 18:15:14 json_config -- json_config/common.sh@26 -- # echo '' 00:04:46.096 00:04:46.096 18:15:14 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:46.096 18:15:14 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:46.096 18:15:14 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:46.096 18:15:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:46.096 18:15:14 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:46.096 18:15:14 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:46.096 18:15:14 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:46.096 18:15:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:46.096 18:15:14 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:46.096 18:15:14 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:46.096 18:15:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:49.391 18:15:17 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:49.391 18:15:17 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:49.391 18:15:17 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:49.391 18:15:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:49.391 18:15:17 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:49.391 18:15:17 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:49.391 18:15:17 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:49.391 18:15:17 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:49.391 18:15:17 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:49.391 18:15:17 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:49.391 18:15:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:49.391 18:15:17 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:49.649 18:15:18 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:49.649 18:15:18 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:49.649 18:15:18 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:49.649 18:15:18 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:49.649 18:15:18 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:49.649 18:15:18 json_config -- json_config/json_config.sh@54 -- # sort 00:04:49.649 18:15:18 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:49.649 18:15:18 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:49.649 18:15:18 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:49.649 18:15:18 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:49.649 18:15:18 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:49.649 18:15:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:49.912 18:15:18 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:49.912 18:15:18 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:49.912 18:15:18 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:49.912 18:15:18 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:49.912 18:15:18 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:49.912 18:15:18 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:49.912 18:15:18 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:49.912 18:15:18 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:49.912 18:15:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:49.912 18:15:18 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:49.912 18:15:18 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:49.912 18:15:18 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:49.912 18:15:18 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:49.912 18:15:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:50.170 MallocForNvmf0 00:04:50.170 18:15:18 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:50.170 18:15:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:50.428 MallocForNvmf1 00:04:50.428 18:15:18 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:50.428 18:15:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:50.688 [2024-10-08 18:15:19.174292] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:50.688 18:15:19 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:50.688 18:15:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:51.627 18:15:19 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:51.627 18:15:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:51.893 18:15:20 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:51.893 18:15:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:52.464 18:15:20 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:52.464 18:15:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:52.722 [2024-10-08 18:15:21.133684] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:52.722 18:15:21 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:52.722 18:15:21 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:52.722 18:15:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:52.722 18:15:21 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:52.722 18:15:21 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:52.722 18:15:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:52.722 18:15:21 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:52.722 18:15:21 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:52.722 18:15:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:53.291 MallocBdevForConfigChangeCheck 00:04:53.550 18:15:21 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:53.550 18:15:21 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:53.550 18:15:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:53.550 18:15:21 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:53.550 18:15:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:54.118 18:15:22 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:54.118 INFO: shutting down applications... 
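Before the tear-down below, note that the target state the json_config test has just assembled (MallocForNvmf0/1, a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with both namespaces and a 127.0.0.1:4420 listener) maps onto a short sequence of rpc.py calls against /var/tmp/spdk_tgt.sock; a hand-driven sketch, assuming it is run from the SPDK source root:

  rpc() { ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock "$@"; }

  rpc bdev_malloc_create 8 512 --name MallocForNvmf0
  rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
  rpc nvmf_create_transport -t tcp -u 8192 -c 0
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
  rpc save_config > spdk_tgt_config.json   # snapshot later fed back to spdk_tgt via --json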
00:04:54.118 18:15:22 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:54.118 18:15:22 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:54.118 18:15:22 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:54.118 18:15:22 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:56.068 Calling clear_iscsi_subsystem 00:04:56.068 Calling clear_nvmf_subsystem 00:04:56.068 Calling clear_nbd_subsystem 00:04:56.068 Calling clear_ublk_subsystem 00:04:56.068 Calling clear_vhost_blk_subsystem 00:04:56.068 Calling clear_vhost_scsi_subsystem 00:04:56.068 Calling clear_bdev_subsystem 00:04:56.068 18:15:24 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:56.068 18:15:24 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:56.069 18:15:24 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:56.069 18:15:24 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:56.069 18:15:24 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:56.069 18:15:24 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:56.328 18:15:24 json_config -- json_config/json_config.sh@352 -- # break 00:04:56.328 18:15:24 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:56.328 18:15:24 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:56.328 18:15:24 json_config -- json_config/common.sh@31 -- # local app=target 00:04:56.328 18:15:24 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:56.328 18:15:24 json_config -- json_config/common.sh@35 -- # [[ -n 1061659 ]] 00:04:56.328 18:15:24 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1061659 00:04:56.328 18:15:24 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:56.328 18:15:24 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:56.328 18:15:24 json_config -- json_config/common.sh@41 -- # kill -0 1061659 00:04:56.328 18:15:24 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:56.897 18:15:25 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:56.897 18:15:25 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:56.897 18:15:25 json_config -- json_config/common.sh@41 -- # kill -0 1061659 00:04:56.897 18:15:25 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:56.897 18:15:25 json_config -- json_config/common.sh@43 -- # break 00:04:56.897 18:15:25 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:56.897 18:15:25 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:56.897 SPDK target shutdown done 00:04:56.897 18:15:25 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:56.897 INFO: relaunching applications... 
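The 'SPDK target shutdown done' message above is produced by the shutdown helper in test/json_config/common.sh, which does not block on the process but sends SIGINT and then polls it. A simplified sketch of that loop, matching what the trace shows (up to 30 iterations of kill -0 with a 0.5 s sleep; the error redirection is added here for tidiness):

  app_pid=1061659                    # pid recorded when the target was launched
  kill -SIGINT "$app_pid"            # ask spdk_tgt to shut down cleanly
  for (( i = 0; i < 30; i++ )); do
      if ! kill -0 "$app_pid" 2> /dev/null; then    # non-zero once the process is gone
          echo 'SPDK target shutdown done'
          break
      fi
      sleep 0.5
  done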
00:04:56.897 18:15:25 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:56.897 18:15:25 json_config -- json_config/common.sh@9 -- # local app=target 00:04:56.897 18:15:25 json_config -- json_config/common.sh@10 -- # shift 00:04:56.897 18:15:25 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:56.897 18:15:25 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:56.897 18:15:25 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:56.897 18:15:25 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:56.897 18:15:25 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:56.897 18:15:25 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1063243 00:04:56.897 18:15:25 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:56.897 18:15:25 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:56.897 Waiting for target to run... 00:04:56.897 18:15:25 json_config -- json_config/common.sh@25 -- # waitforlisten 1063243 /var/tmp/spdk_tgt.sock 00:04:56.897 18:15:25 json_config -- common/autotest_common.sh@831 -- # '[' -z 1063243 ']' 00:04:56.897 18:15:25 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:56.897 18:15:25 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:56.897 18:15:25 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:56.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:56.897 18:15:25 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:56.897 18:15:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:56.897 [2024-10-08 18:15:25.429045] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:04:56.897 [2024-10-08 18:15:25.429165] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1063243 ] 00:04:57.833 [2024-10-08 18:15:26.085416] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.833 [2024-10-08 18:15:26.269540] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.128 [2024-10-08 18:15:29.434797] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:01.128 [2024-10-08 18:15:29.467557] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:01.128 18:15:29 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:01.128 18:15:29 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:01.128 18:15:29 json_config -- json_config/common.sh@26 -- # echo '' 00:05:01.128 00:05:01.128 18:15:29 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:01.128 18:15:29 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:01.128 INFO: Checking if target configuration is the same... 
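The 'Checking if target configuration is the same...' step that follows is a plain textual comparison: json_diff.sh dumps the live configuration with save_config, normalizes both it and the JSON file the target was restarted from through config_filter.py -method sort, and compares the results with diff -u. A condensed sketch, under the assumption (suggested by the trace, where no file arguments appear) that config_filter.py filters stdin to stdout; temp-file handling is simplified:

  RPC="./scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  SORT="./test/json_config/config_filter.py -method sort"
  live=$(mktemp)
  saved=$(mktemp)
  $RPC save_config             | $SORT > "$live"     # configuration the relaunched target reports
  $SORT < spdk_tgt_config.json > "$saved"            # configuration it was started from
  if diff -u "$live" "$saved"; then
      echo 'INFO: JSON config files are the same'
  else
      echo 'INFO: configuration change detected.'
  fi
  rm -f "$live" "$saved"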
00:05:01.128 18:15:29 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:01.128 18:15:29 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:01.128 18:15:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:01.128 + '[' 2 -ne 2 ']' 00:05:01.128 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:01.128 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:01.128 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:01.128 +++ basename /dev/fd/62 00:05:01.128 ++ mktemp /tmp/62.XXX 00:05:01.128 + tmp_file_1=/tmp/62.J8v 00:05:01.128 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:01.128 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:01.128 + tmp_file_2=/tmp/spdk_tgt_config.json.8FT 00:05:01.128 + ret=0 00:05:01.128 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:01.698 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:01.698 + diff -u /tmp/62.J8v /tmp/spdk_tgt_config.json.8FT 00:05:01.698 + echo 'INFO: JSON config files are the same' 00:05:01.698 INFO: JSON config files are the same 00:05:01.698 + rm /tmp/62.J8v /tmp/spdk_tgt_config.json.8FT 00:05:01.698 + exit 0 00:05:01.698 18:15:30 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:01.698 18:15:30 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:01.698 INFO: changing configuration and checking if this can be detected... 00:05:01.698 18:15:30 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:01.698 18:15:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:02.265 18:15:30 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:02.265 18:15:30 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:02.265 18:15:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:02.265 + '[' 2 -ne 2 ']' 00:05:02.265 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:02.265 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:02.265 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:02.265 +++ basename /dev/fd/62 00:05:02.265 ++ mktemp /tmp/62.XXX 00:05:02.265 + tmp_file_1=/tmp/62.VaP 00:05:02.265 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:02.265 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:02.265 + tmp_file_2=/tmp/spdk_tgt_config.json.Q2j 00:05:02.265 + ret=0 00:05:02.265 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:03.205 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:03.205 + diff -u /tmp/62.VaP /tmp/spdk_tgt_config.json.Q2j 00:05:03.205 + ret=1 00:05:03.205 + echo '=== Start of file: /tmp/62.VaP ===' 00:05:03.205 + cat /tmp/62.VaP 00:05:03.205 + echo '=== End of file: /tmp/62.VaP ===' 00:05:03.205 + echo '' 00:05:03.205 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Q2j ===' 00:05:03.205 + cat /tmp/spdk_tgt_config.json.Q2j 00:05:03.205 + echo '=== End of file: /tmp/spdk_tgt_config.json.Q2j ===' 00:05:03.205 + echo '' 00:05:03.205 + rm /tmp/62.VaP /tmp/spdk_tgt_config.json.Q2j 00:05:03.205 + exit 1 00:05:03.205 18:15:31 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:03.205 INFO: configuration change detected. 00:05:03.205 18:15:31 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:03.205 18:15:31 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:03.205 18:15:31 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:03.205 18:15:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:03.205 18:15:31 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:03.205 18:15:31 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:03.205 18:15:31 json_config -- json_config/json_config.sh@324 -- # [[ -n 1063243 ]] 00:05:03.205 18:15:31 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:03.205 18:15:31 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:03.205 18:15:31 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:03.205 18:15:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:03.205 18:15:31 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:03.205 18:15:31 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:03.205 18:15:31 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:03.205 18:15:31 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:03.205 18:15:31 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:03.205 18:15:31 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:03.205 18:15:31 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:03.205 18:15:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:03.205 18:15:31 json_config -- json_config/json_config.sh@330 -- # killprocess 1063243 00:05:03.205 18:15:31 json_config -- common/autotest_common.sh@950 -- # '[' -z 1063243 ']' 00:05:03.205 18:15:31 json_config -- common/autotest_common.sh@954 -- # kill -0 1063243 00:05:03.205 18:15:31 json_config -- common/autotest_common.sh@955 -- # uname 00:05:03.205 18:15:31 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:03.205 18:15:31 
json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1063243 00:05:03.205 18:15:31 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:03.205 18:15:31 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:03.205 18:15:31 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1063243' 00:05:03.205 killing process with pid 1063243 00:05:03.205 18:15:31 json_config -- common/autotest_common.sh@969 -- # kill 1063243 00:05:03.205 18:15:31 json_config -- common/autotest_common.sh@974 -- # wait 1063243 00:05:05.110 18:15:33 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:05.110 18:15:33 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:05.110 18:15:33 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:05.110 18:15:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.110 18:15:33 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:05.110 18:15:33 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:05.110 INFO: Success 00:05:05.110 00:05:05.110 real 0m21.184s 00:05:05.110 user 0m26.161s 00:05:05.110 sys 0m3.820s 00:05:05.110 18:15:33 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:05.110 18:15:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.110 ************************************ 00:05:05.110 END TEST json_config 00:05:05.110 ************************************ 00:05:05.110 18:15:33 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:05.110 18:15:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:05.110 18:15:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:05.110 18:15:33 -- common/autotest_common.sh@10 -- # set +x 00:05:05.110 ************************************ 00:05:05.110 START TEST json_config_extra_key 00:05:05.110 ************************************ 00:05:05.110 18:15:33 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:05.110 18:15:33 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:05.110 18:15:33 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:05:05.110 18:15:33 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:05.373 18:15:33 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:05.373 18:15:33 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:05.373 18:15:33 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:05.373 18:15:33 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:05.373 18:15:33 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.373 18:15:33 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:05.373 18:15:33 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:05.373 18:15:33 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:05.373 18:15:33 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:05.373 18:15:33 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:05:05.373 18:15:33 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:05.373 18:15:33 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:05.373 18:15:33 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:05.373 18:15:33 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:05.373 18:15:33 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:05.373 18:15:33 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:05.373 18:15:33 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:05.373 18:15:33 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:05.373 18:15:33 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.373 18:15:33 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:05.373 18:15:33 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:05.373 18:15:33 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:05.373 18:15:33 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:05.373 18:15:33 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.373 18:15:33 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:05.373 18:15:33 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:05.373 18:15:33 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:05.373 18:15:33 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:05.373 18:15:33 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:05.373 18:15:33 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.373 18:15:33 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:05.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.373 --rc genhtml_branch_coverage=1 00:05:05.373 --rc genhtml_function_coverage=1 00:05:05.373 --rc genhtml_legend=1 00:05:05.373 --rc geninfo_all_blocks=1 00:05:05.373 --rc geninfo_unexecuted_blocks=1 00:05:05.373 00:05:05.373 ' 00:05:05.373 18:15:33 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:05.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.373 --rc genhtml_branch_coverage=1 00:05:05.373 --rc genhtml_function_coverage=1 00:05:05.373 --rc genhtml_legend=1 00:05:05.373 --rc geninfo_all_blocks=1 00:05:05.373 --rc geninfo_unexecuted_blocks=1 00:05:05.373 00:05:05.373 ' 00:05:05.373 18:15:33 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:05.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.373 --rc genhtml_branch_coverage=1 00:05:05.373 --rc genhtml_function_coverage=1 00:05:05.373 --rc genhtml_legend=1 00:05:05.373 --rc geninfo_all_blocks=1 00:05:05.373 --rc geninfo_unexecuted_blocks=1 00:05:05.373 00:05:05.373 ' 00:05:05.373 18:15:33 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:05.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.373 --rc genhtml_branch_coverage=1 00:05:05.373 --rc genhtml_function_coverage=1 00:05:05.373 --rc genhtml_legend=1 00:05:05.373 --rc geninfo_all_blocks=1 00:05:05.373 --rc geninfo_unexecuted_blocks=1 00:05:05.373 00:05:05.373 ' 00:05:05.373 18:15:33 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:05.373 18:15:33 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:05.373 18:15:33 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:05.373 18:15:33 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:05.373 18:15:33 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:05.373 18:15:33 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:05.373 18:15:33 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:05.373 18:15:33 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:05.373 18:15:33 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:05.373 18:15:33 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:05.373 18:15:33 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:05.373 18:15:33 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:05.373 18:15:33 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:05:05.373 18:15:33 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:05:05.373 18:15:33 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:05.373 18:15:33 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:05.373 18:15:33 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:05.373 18:15:33 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:05.373 18:15:33 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:05.373 18:15:33 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:05.373 18:15:33 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:05.373 18:15:33 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:05.373 18:15:33 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:05.373 18:15:33 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.373 18:15:33 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.373 18:15:33 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.373 18:15:33 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:05.373 18:15:33 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.373 18:15:33 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:05.373 18:15:33 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:05.373 18:15:33 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:05.373 18:15:33 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:05.373 18:15:33 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:05.373 18:15:33 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:05.373 18:15:33 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:05.373 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:05.373 18:15:33 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:05.373 18:15:33 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:05.373 18:15:33 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:05.373 18:15:33 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:05.373 18:15:33 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:05.373 18:15:33 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:05.373 18:15:33 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:05.373 18:15:33 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:05.373 18:15:33 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:05.373 18:15:33 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:05.373 18:15:33 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:05.373 18:15:33 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:05.373 18:15:33 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:05.373 18:15:33 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:05.373 INFO: launching applications... 
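As the declarations traced above show, test/json_config/common.sh keeps all per-app state in associative arrays keyed by the app name ('target' here), which is what lets the same start and shutdown helpers serve both json_config and json_config_extra_key. In isolation, and with the workspace prefix dropped, the bookkeeping amounts to:

  declare -A app_pid app_socket app_params configs_path
  app_socket[target]=/var/tmp/spdk_tgt.sock               # RPC socket the helpers talk to
  app_params[target]='-m 0x1 -s 1024'                     # core mask and memory passed to spdk_tgt
  configs_path[target]=./test/json_config/extra_key.json  # --json file used by this test
  app_pid[target]=''                                      # filled in once the app has been started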
00:05:05.374 18:15:33 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:05.374 18:15:33 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:05.374 18:15:33 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:05.374 18:15:33 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:05.374 18:15:33 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:05.374 18:15:33 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:05.374 18:15:33 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:05.374 18:15:33 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:05.374 18:15:33 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1064297 00:05:05.374 18:15:33 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:05.374 18:15:33 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:05.374 Waiting for target to run... 00:05:05.374 18:15:33 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1064297 /var/tmp/spdk_tgt.sock 00:05:05.374 18:15:33 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 1064297 ']' 00:05:05.374 18:15:33 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:05.374 18:15:33 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:05.374 18:15:33 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:05.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:05.374 18:15:33 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:05.374 18:15:33 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:05.374 [2024-10-08 18:15:33.862511] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:05:05.374 [2024-10-08 18:15:33.862737] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1064297 ] 00:05:06.016 [2024-10-08 18:15:34.530958] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.274 [2024-10-08 18:15:34.726975] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.213 18:15:35 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:07.213 18:15:35 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:07.213 18:15:35 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:07.213 00:05:07.213 18:15:35 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:07.213 INFO: shutting down applications... 
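Unlike the json_config test, json_config_extra_key never builds a configuration over RPC; as traced above it hands spdk_tgt a ready-made JSON file and verifies clean startup and shutdown. Reduced to its essentials (paths shortened relative to the SPDK tree; the polling loop is only a stand-in for the suite's waitforlisten helper, whose internals are not shown in this log):

  ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
      --json ./test/json_config/extra_key.json &
  tgt_pid=$!                        # what the SIGINT/kill -0 shutdown loop later operates on
  # wait until the RPC socket answers before touching the target
  until ./scripts/rpc.py -t 1 -s /var/tmp/spdk_tgt.sock rpc_get_methods > /dev/null 2>&1; do
      sleep 0.5
  done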
00:05:07.213 18:15:35 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:07.213 18:15:35 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:07.213 18:15:35 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:07.213 18:15:35 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1064297 ]] 00:05:07.213 18:15:35 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1064297 00:05:07.213 18:15:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:07.213 18:15:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:07.213 18:15:35 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1064297 00:05:07.213 18:15:35 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:07.472 18:15:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:07.472 18:15:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:07.472 18:15:35 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1064297 00:05:07.472 18:15:35 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:08.039 18:15:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:08.039 18:15:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:08.039 18:15:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1064297 00:05:08.039 18:15:36 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:08.039 18:15:36 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:08.039 18:15:36 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:08.039 18:15:36 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:08.039 SPDK target shutdown done 00:05:08.039 18:15:36 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:08.039 Success 00:05:08.039 00:05:08.039 real 0m2.910s 00:05:08.039 user 0m2.899s 00:05:08.039 sys 0m0.876s 00:05:08.039 18:15:36 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:08.039 18:15:36 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:08.039 ************************************ 00:05:08.039 END TEST json_config_extra_key 00:05:08.039 ************************************ 00:05:08.039 18:15:36 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:08.039 18:15:36 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:08.039 18:15:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:08.039 18:15:36 -- common/autotest_common.sh@10 -- # set +x 00:05:08.039 ************************************ 00:05:08.039 START TEST alias_rpc 00:05:08.039 ************************************ 00:05:08.039 18:15:36 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:08.297 * Looking for test storage... 
00:05:08.297 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:08.297 18:15:36 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:08.297 18:15:36 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:08.297 18:15:36 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:08.297 18:15:36 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:08.297 18:15:36 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:08.297 18:15:36 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:08.297 18:15:36 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:08.297 18:15:36 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:08.297 18:15:36 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:08.297 18:15:36 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:08.297 18:15:36 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:08.297 18:15:36 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:08.297 18:15:36 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:08.297 18:15:36 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:08.297 18:15:36 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:08.297 18:15:36 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:08.297 18:15:36 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:08.297 18:15:36 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:08.297 18:15:36 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:08.297 18:15:36 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:08.297 18:15:36 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:08.297 18:15:36 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:08.297 18:15:36 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:08.297 18:15:36 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:08.297 18:15:36 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:08.297 18:15:36 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:08.297 18:15:36 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:08.297 18:15:36 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:08.297 18:15:36 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:08.298 18:15:36 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:08.298 18:15:36 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:08.298 18:15:36 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:08.298 18:15:36 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:08.298 18:15:36 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:08.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.298 --rc genhtml_branch_coverage=1 00:05:08.298 --rc genhtml_function_coverage=1 00:05:08.298 --rc genhtml_legend=1 00:05:08.298 --rc geninfo_all_blocks=1 00:05:08.298 --rc geninfo_unexecuted_blocks=1 00:05:08.298 00:05:08.298 ' 00:05:08.298 18:15:36 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:08.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.298 --rc genhtml_branch_coverage=1 00:05:08.298 --rc genhtml_function_coverage=1 00:05:08.298 --rc genhtml_legend=1 00:05:08.298 --rc geninfo_all_blocks=1 00:05:08.298 --rc geninfo_unexecuted_blocks=1 00:05:08.298 00:05:08.298 ' 00:05:08.298 18:15:36 
alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:08.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.298 --rc genhtml_branch_coverage=1 00:05:08.298 --rc genhtml_function_coverage=1 00:05:08.298 --rc genhtml_legend=1 00:05:08.298 --rc geninfo_all_blocks=1 00:05:08.298 --rc geninfo_unexecuted_blocks=1 00:05:08.298 00:05:08.298 ' 00:05:08.298 18:15:36 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:08.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.298 --rc genhtml_branch_coverage=1 00:05:08.298 --rc genhtml_function_coverage=1 00:05:08.298 --rc genhtml_legend=1 00:05:08.298 --rc geninfo_all_blocks=1 00:05:08.298 --rc geninfo_unexecuted_blocks=1 00:05:08.298 00:05:08.298 ' 00:05:08.298 18:15:36 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:08.298 18:15:36 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1064748 00:05:08.298 18:15:36 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:08.298 18:15:36 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1064748 00:05:08.298 18:15:36 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 1064748 ']' 00:05:08.298 18:15:36 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.298 18:15:36 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:08.298 18:15:36 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:08.298 18:15:36 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:08.298 18:15:36 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.298 [2024-10-08 18:15:36.741493] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
00:05:08.298 [2024-10-08 18:15:36.741593] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1064748 ] 00:05:08.298 [2024-10-08 18:15:36.813789] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.557 [2024-10-08 18:15:36.986633] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.126 18:15:37 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:09.126 18:15:37 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:09.126 18:15:37 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:09.384 18:15:37 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1064748 00:05:09.384 18:15:37 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 1064748 ']' 00:05:09.384 18:15:37 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 1064748 00:05:09.384 18:15:37 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:09.384 18:15:37 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:09.384 18:15:37 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1064748 00:05:09.384 18:15:37 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:09.384 18:15:37 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:09.384 18:15:37 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1064748' 00:05:09.384 killing process with pid 1064748 00:05:09.384 18:15:37 alias_rpc -- common/autotest_common.sh@969 -- # kill 1064748 00:05:09.384 18:15:37 alias_rpc -- common/autotest_common.sh@974 -- # wait 1064748 00:05:10.329 00:05:10.329 real 0m2.117s 00:05:10.329 user 0m2.154s 00:05:10.329 sys 0m0.676s 00:05:10.329 18:15:38 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:10.329 18:15:38 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.329 ************************************ 00:05:10.329 END TEST alias_rpc 00:05:10.329 ************************************ 00:05:10.329 18:15:38 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:10.329 18:15:38 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:10.329 18:15:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:10.329 18:15:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:10.329 18:15:38 -- common/autotest_common.sh@10 -- # set +x 00:05:10.329 ************************************ 00:05:10.329 START TEST spdkcli_tcp 00:05:10.329 ************************************ 00:05:10.329 18:15:38 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:10.329 * Looking for test storage... 
00:05:10.329 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:10.329 18:15:38 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:10.329 18:15:38 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:05:10.329 18:15:38 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:10.329 18:15:38 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:10.329 18:15:38 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:10.329 18:15:38 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:10.329 18:15:38 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:10.329 18:15:38 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:10.329 18:15:38 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:10.329 18:15:38 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:10.329 18:15:38 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:10.329 18:15:38 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:10.329 18:15:38 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:10.329 18:15:38 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:10.329 18:15:38 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:10.329 18:15:38 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:10.329 18:15:38 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:10.329 18:15:38 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:10.329 18:15:38 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:10.329 18:15:38 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:10.329 18:15:38 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:10.329 18:15:38 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:10.329 18:15:38 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:10.589 18:15:38 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:10.589 18:15:38 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:10.589 18:15:38 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:10.589 18:15:38 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:10.589 18:15:38 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:10.589 18:15:38 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:10.589 18:15:38 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:10.589 18:15:38 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:10.589 18:15:38 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:10.589 18:15:38 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:10.589 18:15:38 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:10.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.589 --rc genhtml_branch_coverage=1 00:05:10.589 --rc genhtml_function_coverage=1 00:05:10.589 --rc genhtml_legend=1 00:05:10.589 --rc geninfo_all_blocks=1 00:05:10.589 --rc geninfo_unexecuted_blocks=1 00:05:10.589 00:05:10.589 ' 00:05:10.589 18:15:38 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:10.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.589 --rc genhtml_branch_coverage=1 00:05:10.589 --rc genhtml_function_coverage=1 00:05:10.589 --rc genhtml_legend=1 00:05:10.589 --rc geninfo_all_blocks=1 00:05:10.589 --rc 
geninfo_unexecuted_blocks=1 00:05:10.589 00:05:10.589 ' 00:05:10.589 18:15:38 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:10.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.589 --rc genhtml_branch_coverage=1 00:05:10.589 --rc genhtml_function_coverage=1 00:05:10.589 --rc genhtml_legend=1 00:05:10.589 --rc geninfo_all_blocks=1 00:05:10.589 --rc geninfo_unexecuted_blocks=1 00:05:10.589 00:05:10.589 ' 00:05:10.589 18:15:38 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:10.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.589 --rc genhtml_branch_coverage=1 00:05:10.589 --rc genhtml_function_coverage=1 00:05:10.589 --rc genhtml_legend=1 00:05:10.589 --rc geninfo_all_blocks=1 00:05:10.589 --rc geninfo_unexecuted_blocks=1 00:05:10.589 00:05:10.589 ' 00:05:10.589 18:15:38 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:10.589 18:15:38 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:10.589 18:15:38 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:10.589 18:15:38 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:10.589 18:15:38 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:10.589 18:15:38 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:10.589 18:15:38 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:10.589 18:15:38 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:10.589 18:15:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:10.589 18:15:38 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1065068 00:05:10.589 18:15:38 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:10.589 18:15:38 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1065068 00:05:10.589 18:15:38 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 1065068 ']' 00:05:10.589 18:15:38 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.589 18:15:38 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:10.589 18:15:38 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.589 18:15:38 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:10.589 18:15:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:10.589 [2024-10-08 18:15:38.935456] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
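The spdkcli_tcp test whose target is starting up above never drives /var/tmp/spdk.sock directly; as the following trace shows, it publishes the UNIX RPC socket on TCP port 9998 through socat and then points rpc.py at 127.0.0.1:9998. Stripped of the test plumbing, the bridge amounts to (rpc.py flags copied verbatim from the trace):

  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &      # relay one TCP connection to the RPC socket
  socat_pid=$!
  ./scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
  kill "$socat_pid" 2> /dev/null || true                       # socat may already have exited with the connection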
00:05:10.589 [2024-10-08 18:15:38.935569] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1065068 ] 00:05:10.589 [2024-10-08 18:15:39.037379] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:10.849 [2024-10-08 18:15:39.202631] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:10.849 [2024-10-08 18:15:39.202646] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.418 18:15:39 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:11.418 18:15:39 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:11.418 18:15:39 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1065090 00:05:11.418 18:15:39 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:11.418 18:15:39 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:11.677 [ 00:05:11.677 "bdev_malloc_delete", 00:05:11.677 "bdev_malloc_create", 00:05:11.677 "bdev_null_resize", 00:05:11.677 "bdev_null_delete", 00:05:11.677 "bdev_null_create", 00:05:11.677 "bdev_nvme_cuse_unregister", 00:05:11.677 "bdev_nvme_cuse_register", 00:05:11.677 "bdev_opal_new_user", 00:05:11.677 "bdev_opal_set_lock_state", 00:05:11.677 "bdev_opal_delete", 00:05:11.677 "bdev_opal_get_info", 00:05:11.677 "bdev_opal_create", 00:05:11.677 "bdev_nvme_opal_revert", 00:05:11.677 "bdev_nvme_opal_init", 00:05:11.677 "bdev_nvme_send_cmd", 00:05:11.677 "bdev_nvme_set_keys", 00:05:11.677 "bdev_nvme_get_path_iostat", 00:05:11.677 "bdev_nvme_get_mdns_discovery_info", 00:05:11.677 "bdev_nvme_stop_mdns_discovery", 00:05:11.677 "bdev_nvme_start_mdns_discovery", 00:05:11.677 "bdev_nvme_set_multipath_policy", 00:05:11.677 "bdev_nvme_set_preferred_path", 00:05:11.677 "bdev_nvme_get_io_paths", 00:05:11.677 "bdev_nvme_remove_error_injection", 00:05:11.677 "bdev_nvme_add_error_injection", 00:05:11.677 "bdev_nvme_get_discovery_info", 00:05:11.677 "bdev_nvme_stop_discovery", 00:05:11.677 "bdev_nvme_start_discovery", 00:05:11.677 "bdev_nvme_get_controller_health_info", 00:05:11.677 "bdev_nvme_disable_controller", 00:05:11.677 "bdev_nvme_enable_controller", 00:05:11.677 "bdev_nvme_reset_controller", 00:05:11.677 "bdev_nvme_get_transport_statistics", 00:05:11.677 "bdev_nvme_apply_firmware", 00:05:11.677 "bdev_nvme_detach_controller", 00:05:11.677 "bdev_nvme_get_controllers", 00:05:11.677 "bdev_nvme_attach_controller", 00:05:11.677 "bdev_nvme_set_hotplug", 00:05:11.677 "bdev_nvme_set_options", 00:05:11.677 "bdev_passthru_delete", 00:05:11.677 "bdev_passthru_create", 00:05:11.677 "bdev_lvol_set_parent_bdev", 00:05:11.677 "bdev_lvol_set_parent", 00:05:11.677 "bdev_lvol_check_shallow_copy", 00:05:11.677 "bdev_lvol_start_shallow_copy", 00:05:11.677 "bdev_lvol_grow_lvstore", 00:05:11.677 "bdev_lvol_get_lvols", 00:05:11.677 "bdev_lvol_get_lvstores", 00:05:11.677 "bdev_lvol_delete", 00:05:11.677 "bdev_lvol_set_read_only", 00:05:11.677 "bdev_lvol_resize", 00:05:11.677 "bdev_lvol_decouple_parent", 00:05:11.677 "bdev_lvol_inflate", 00:05:11.677 "bdev_lvol_rename", 00:05:11.677 "bdev_lvol_clone_bdev", 00:05:11.677 "bdev_lvol_clone", 00:05:11.677 "bdev_lvol_snapshot", 00:05:11.677 "bdev_lvol_create", 00:05:11.677 "bdev_lvol_delete_lvstore", 00:05:11.677 "bdev_lvol_rename_lvstore", 
00:05:11.677 "bdev_lvol_create_lvstore", 00:05:11.677 "bdev_raid_set_options", 00:05:11.677 "bdev_raid_remove_base_bdev", 00:05:11.677 "bdev_raid_add_base_bdev", 00:05:11.677 "bdev_raid_delete", 00:05:11.677 "bdev_raid_create", 00:05:11.677 "bdev_raid_get_bdevs", 00:05:11.677 "bdev_error_inject_error", 00:05:11.677 "bdev_error_delete", 00:05:11.677 "bdev_error_create", 00:05:11.677 "bdev_split_delete", 00:05:11.677 "bdev_split_create", 00:05:11.677 "bdev_delay_delete", 00:05:11.677 "bdev_delay_create", 00:05:11.677 "bdev_delay_update_latency", 00:05:11.677 "bdev_zone_block_delete", 00:05:11.677 "bdev_zone_block_create", 00:05:11.677 "blobfs_create", 00:05:11.677 "blobfs_detect", 00:05:11.677 "blobfs_set_cache_size", 00:05:11.677 "bdev_aio_delete", 00:05:11.677 "bdev_aio_rescan", 00:05:11.677 "bdev_aio_create", 00:05:11.677 "bdev_ftl_set_property", 00:05:11.677 "bdev_ftl_get_properties", 00:05:11.677 "bdev_ftl_get_stats", 00:05:11.677 "bdev_ftl_unmap", 00:05:11.677 "bdev_ftl_unload", 00:05:11.677 "bdev_ftl_delete", 00:05:11.677 "bdev_ftl_load", 00:05:11.677 "bdev_ftl_create", 00:05:11.677 "bdev_virtio_attach_controller", 00:05:11.677 "bdev_virtio_scsi_get_devices", 00:05:11.677 "bdev_virtio_detach_controller", 00:05:11.677 "bdev_virtio_blk_set_hotplug", 00:05:11.677 "bdev_iscsi_delete", 00:05:11.677 "bdev_iscsi_create", 00:05:11.677 "bdev_iscsi_set_options", 00:05:11.677 "accel_error_inject_error", 00:05:11.677 "ioat_scan_accel_module", 00:05:11.677 "dsa_scan_accel_module", 00:05:11.677 "iaa_scan_accel_module", 00:05:11.677 "vfu_virtio_create_fs_endpoint", 00:05:11.677 "vfu_virtio_create_scsi_endpoint", 00:05:11.677 "vfu_virtio_scsi_remove_target", 00:05:11.677 "vfu_virtio_scsi_add_target", 00:05:11.677 "vfu_virtio_create_blk_endpoint", 00:05:11.677 "vfu_virtio_delete_endpoint", 00:05:11.677 "keyring_file_remove_key", 00:05:11.677 "keyring_file_add_key", 00:05:11.677 "keyring_linux_set_options", 00:05:11.677 "fsdev_aio_delete", 00:05:11.677 "fsdev_aio_create", 00:05:11.677 "iscsi_get_histogram", 00:05:11.677 "iscsi_enable_histogram", 00:05:11.677 "iscsi_set_options", 00:05:11.677 "iscsi_get_auth_groups", 00:05:11.678 "iscsi_auth_group_remove_secret", 00:05:11.678 "iscsi_auth_group_add_secret", 00:05:11.678 "iscsi_delete_auth_group", 00:05:11.678 "iscsi_create_auth_group", 00:05:11.678 "iscsi_set_discovery_auth", 00:05:11.678 "iscsi_get_options", 00:05:11.678 "iscsi_target_node_request_logout", 00:05:11.678 "iscsi_target_node_set_redirect", 00:05:11.678 "iscsi_target_node_set_auth", 00:05:11.678 "iscsi_target_node_add_lun", 00:05:11.678 "iscsi_get_stats", 00:05:11.678 "iscsi_get_connections", 00:05:11.678 "iscsi_portal_group_set_auth", 00:05:11.678 "iscsi_start_portal_group", 00:05:11.678 "iscsi_delete_portal_group", 00:05:11.678 "iscsi_create_portal_group", 00:05:11.678 "iscsi_get_portal_groups", 00:05:11.678 "iscsi_delete_target_node", 00:05:11.678 "iscsi_target_node_remove_pg_ig_maps", 00:05:11.678 "iscsi_target_node_add_pg_ig_maps", 00:05:11.678 "iscsi_create_target_node", 00:05:11.678 "iscsi_get_target_nodes", 00:05:11.678 "iscsi_delete_initiator_group", 00:05:11.678 "iscsi_initiator_group_remove_initiators", 00:05:11.678 "iscsi_initiator_group_add_initiators", 00:05:11.678 "iscsi_create_initiator_group", 00:05:11.678 "iscsi_get_initiator_groups", 00:05:11.678 "nvmf_set_crdt", 00:05:11.678 "nvmf_set_config", 00:05:11.678 "nvmf_set_max_subsystems", 00:05:11.678 "nvmf_stop_mdns_prr", 00:05:11.678 "nvmf_publish_mdns_prr", 00:05:11.678 "nvmf_subsystem_get_listeners", 00:05:11.678 
"nvmf_subsystem_get_qpairs", 00:05:11.678 "nvmf_subsystem_get_controllers", 00:05:11.678 "nvmf_get_stats", 00:05:11.678 "nvmf_get_transports", 00:05:11.678 "nvmf_create_transport", 00:05:11.678 "nvmf_get_targets", 00:05:11.678 "nvmf_delete_target", 00:05:11.678 "nvmf_create_target", 00:05:11.678 "nvmf_subsystem_allow_any_host", 00:05:11.678 "nvmf_subsystem_set_keys", 00:05:11.678 "nvmf_subsystem_remove_host", 00:05:11.678 "nvmf_subsystem_add_host", 00:05:11.678 "nvmf_ns_remove_host", 00:05:11.678 "nvmf_ns_add_host", 00:05:11.678 "nvmf_subsystem_remove_ns", 00:05:11.678 "nvmf_subsystem_set_ns_ana_group", 00:05:11.678 "nvmf_subsystem_add_ns", 00:05:11.678 "nvmf_subsystem_listener_set_ana_state", 00:05:11.678 "nvmf_discovery_get_referrals", 00:05:11.678 "nvmf_discovery_remove_referral", 00:05:11.678 "nvmf_discovery_add_referral", 00:05:11.678 "nvmf_subsystem_remove_listener", 00:05:11.678 "nvmf_subsystem_add_listener", 00:05:11.678 "nvmf_delete_subsystem", 00:05:11.678 "nvmf_create_subsystem", 00:05:11.678 "nvmf_get_subsystems", 00:05:11.678 "env_dpdk_get_mem_stats", 00:05:11.678 "nbd_get_disks", 00:05:11.678 "nbd_stop_disk", 00:05:11.678 "nbd_start_disk", 00:05:11.678 "ublk_recover_disk", 00:05:11.678 "ublk_get_disks", 00:05:11.678 "ublk_stop_disk", 00:05:11.678 "ublk_start_disk", 00:05:11.678 "ublk_destroy_target", 00:05:11.678 "ublk_create_target", 00:05:11.678 "virtio_blk_create_transport", 00:05:11.678 "virtio_blk_get_transports", 00:05:11.678 "vhost_controller_set_coalescing", 00:05:11.678 "vhost_get_controllers", 00:05:11.678 "vhost_delete_controller", 00:05:11.678 "vhost_create_blk_controller", 00:05:11.678 "vhost_scsi_controller_remove_target", 00:05:11.678 "vhost_scsi_controller_add_target", 00:05:11.678 "vhost_start_scsi_controller", 00:05:11.678 "vhost_create_scsi_controller", 00:05:11.678 "thread_set_cpumask", 00:05:11.678 "scheduler_set_options", 00:05:11.678 "framework_get_governor", 00:05:11.678 "framework_get_scheduler", 00:05:11.678 "framework_set_scheduler", 00:05:11.678 "framework_get_reactors", 00:05:11.678 "thread_get_io_channels", 00:05:11.678 "thread_get_pollers", 00:05:11.678 "thread_get_stats", 00:05:11.678 "framework_monitor_context_switch", 00:05:11.678 "spdk_kill_instance", 00:05:11.678 "log_enable_timestamps", 00:05:11.678 "log_get_flags", 00:05:11.678 "log_clear_flag", 00:05:11.678 "log_set_flag", 00:05:11.678 "log_get_level", 00:05:11.678 "log_set_level", 00:05:11.678 "log_get_print_level", 00:05:11.678 "log_set_print_level", 00:05:11.678 "framework_enable_cpumask_locks", 00:05:11.678 "framework_disable_cpumask_locks", 00:05:11.678 "framework_wait_init", 00:05:11.678 "framework_start_init", 00:05:11.678 "scsi_get_devices", 00:05:11.678 "bdev_get_histogram", 00:05:11.678 "bdev_enable_histogram", 00:05:11.678 "bdev_set_qos_limit", 00:05:11.678 "bdev_set_qd_sampling_period", 00:05:11.678 "bdev_get_bdevs", 00:05:11.678 "bdev_reset_iostat", 00:05:11.678 "bdev_get_iostat", 00:05:11.678 "bdev_examine", 00:05:11.678 "bdev_wait_for_examine", 00:05:11.678 "bdev_set_options", 00:05:11.678 "accel_get_stats", 00:05:11.678 "accel_set_options", 00:05:11.678 "accel_set_driver", 00:05:11.678 "accel_crypto_key_destroy", 00:05:11.678 "accel_crypto_keys_get", 00:05:11.678 "accel_crypto_key_create", 00:05:11.678 "accel_assign_opc", 00:05:11.678 "accel_get_module_info", 00:05:11.678 "accel_get_opc_assignments", 00:05:11.678 "vmd_rescan", 00:05:11.678 "vmd_remove_device", 00:05:11.678 "vmd_enable", 00:05:11.678 "sock_get_default_impl", 00:05:11.678 "sock_set_default_impl", 
00:05:11.678 "sock_impl_set_options", 00:05:11.678 "sock_impl_get_options", 00:05:11.678 "iobuf_get_stats", 00:05:11.678 "iobuf_set_options", 00:05:11.678 "keyring_get_keys", 00:05:11.678 "vfu_tgt_set_base_path", 00:05:11.678 "framework_get_pci_devices", 00:05:11.678 "framework_get_config", 00:05:11.678 "framework_get_subsystems", 00:05:11.678 "fsdev_set_opts", 00:05:11.678 "fsdev_get_opts", 00:05:11.678 "trace_get_info", 00:05:11.678 "trace_get_tpoint_group_mask", 00:05:11.678 "trace_disable_tpoint_group", 00:05:11.678 "trace_enable_tpoint_group", 00:05:11.678 "trace_clear_tpoint_mask", 00:05:11.678 "trace_set_tpoint_mask", 00:05:11.678 "notify_get_notifications", 00:05:11.678 "notify_get_types", 00:05:11.678 "spdk_get_version", 00:05:11.678 "rpc_get_methods" 00:05:11.678 ] 00:05:11.678 18:15:40 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:11.678 18:15:40 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:11.678 18:15:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:11.678 18:15:40 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:11.678 18:15:40 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1065068 00:05:11.678 18:15:40 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 1065068 ']' 00:05:11.678 18:15:40 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 1065068 00:05:11.678 18:15:40 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:11.678 18:15:40 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:11.678 18:15:40 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1065068 00:05:11.678 18:15:40 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:11.678 18:15:40 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:11.678 18:15:40 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1065068' 00:05:11.678 killing process with pid 1065068 00:05:11.678 18:15:40 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 1065068 00:05:11.678 18:15:40 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 1065068 00:05:12.247 00:05:12.247 real 0m2.038s 00:05:12.247 user 0m3.589s 00:05:12.247 sys 0m0.752s 00:05:12.247 18:15:40 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:12.247 18:15:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:12.247 ************************************ 00:05:12.247 END TEST spdkcli_tcp 00:05:12.247 ************************************ 00:05:12.247 18:15:40 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:12.247 18:15:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:12.247 18:15:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:12.247 18:15:40 -- common/autotest_common.sh@10 -- # set +x 00:05:12.507 ************************************ 00:05:12.507 START TEST dpdk_mem_utility 00:05:12.507 ************************************ 00:05:12.507 18:15:40 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:12.507 * Looking for test storage... 
00:05:12.507 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:12.507 18:15:40 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:12.507 18:15:40 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:05:12.507 18:15:40 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:12.765 18:15:41 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:12.765 18:15:41 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:12.765 18:15:41 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:12.765 18:15:41 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:12.765 18:15:41 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:12.765 18:15:41 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:12.765 18:15:41 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:12.765 18:15:41 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:12.765 18:15:41 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:12.765 18:15:41 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:12.765 18:15:41 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:12.765 18:15:41 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:12.765 18:15:41 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:12.765 18:15:41 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:12.765 18:15:41 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:12.765 18:15:41 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:12.765 18:15:41 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:12.765 18:15:41 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:12.765 18:15:41 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:12.765 18:15:41 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:12.765 18:15:41 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:12.765 18:15:41 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:12.765 18:15:41 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:12.765 18:15:41 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:12.765 18:15:41 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:12.765 18:15:41 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:12.765 18:15:41 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:12.765 18:15:41 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:12.765 18:15:41 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:12.765 18:15:41 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:12.765 18:15:41 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:12.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.765 --rc genhtml_branch_coverage=1 00:05:12.765 --rc genhtml_function_coverage=1 00:05:12.765 --rc genhtml_legend=1 00:05:12.765 --rc geninfo_all_blocks=1 00:05:12.765 --rc geninfo_unexecuted_blocks=1 00:05:12.765 00:05:12.765 ' 00:05:12.765 18:15:41 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:12.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.765 --rc 
genhtml_branch_coverage=1 00:05:12.765 --rc genhtml_function_coverage=1 00:05:12.765 --rc genhtml_legend=1 00:05:12.765 --rc geninfo_all_blocks=1 00:05:12.765 --rc geninfo_unexecuted_blocks=1 00:05:12.765 00:05:12.765 ' 00:05:12.765 18:15:41 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:12.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.765 --rc genhtml_branch_coverage=1 00:05:12.765 --rc genhtml_function_coverage=1 00:05:12.765 --rc genhtml_legend=1 00:05:12.765 --rc geninfo_all_blocks=1 00:05:12.765 --rc geninfo_unexecuted_blocks=1 00:05:12.765 00:05:12.765 ' 00:05:12.765 18:15:41 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:12.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.765 --rc genhtml_branch_coverage=1 00:05:12.765 --rc genhtml_function_coverage=1 00:05:12.765 --rc genhtml_legend=1 00:05:12.765 --rc geninfo_all_blocks=1 00:05:12.765 --rc geninfo_unexecuted_blocks=1 00:05:12.765 00:05:12.765 ' 00:05:12.765 18:15:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:12.765 18:15:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1065390 00:05:12.765 18:15:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:12.765 18:15:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1065390 00:05:12.765 18:15:41 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 1065390 ']' 00:05:12.765 18:15:41 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.765 18:15:41 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:12.765 18:15:41 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.765 18:15:41 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:12.765 18:15:41 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:12.765 [2024-10-08 18:15:41.140736] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
00:05:12.765 [2024-10-08 18:15:41.140852] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1065390 ] 00:05:12.765 [2024-10-08 18:15:41.249709] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.025 [2024-10-08 18:15:41.449677] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.963 18:15:42 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:13.963 18:15:42 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:13.963 18:15:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:13.963 18:15:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:13.963 18:15:42 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.963 18:15:42 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:13.963 { 00:05:13.963 "filename": "/tmp/spdk_mem_dump.txt" 00:05:13.963 } 00:05:13.963 18:15:42 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.963 18:15:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:13.963 DPDK memory size 860.000000 MiB in 1 heap(s) 00:05:13.963 1 heaps totaling size 860.000000 MiB 00:05:13.963 size: 860.000000 MiB heap id: 0 00:05:13.963 end heaps---------- 00:05:13.963 9 mempools totaling size 642.649841 MiB 00:05:13.963 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:13.963 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:13.963 size: 92.545471 MiB name: bdev_io_1065390 00:05:13.963 size: 51.011292 MiB name: evtpool_1065390 00:05:13.963 size: 50.003479 MiB name: msgpool_1065390 00:05:13.963 size: 36.509338 MiB name: fsdev_io_1065390 00:05:13.963 size: 21.763794 MiB name: PDU_Pool 00:05:13.963 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:13.963 size: 0.026123 MiB name: Session_Pool 00:05:13.963 end mempools------- 00:05:13.963 6 memzones totaling size 4.142822 MiB 00:05:13.963 size: 1.000366 MiB name: RG_ring_0_1065390 00:05:13.963 size: 1.000366 MiB name: RG_ring_1_1065390 00:05:13.963 size: 1.000366 MiB name: RG_ring_4_1065390 00:05:13.963 size: 1.000366 MiB name: RG_ring_5_1065390 00:05:13.963 size: 0.125366 MiB name: RG_ring_2_1065390 00:05:13.963 size: 0.015991 MiB name: RG_ring_3_1065390 00:05:13.963 end memzones------- 00:05:13.963 18:15:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:14.223 heap id: 0 total size: 860.000000 MiB number of busy elements: 44 number of free elements: 16 00:05:14.223 list of free elements. 
size: 13.984680 MiB 00:05:14.223 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:14.223 element at address: 0x200000800000 with size: 1.996948 MiB 00:05:14.223 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:05:14.223 element at address: 0x20001be00000 with size: 0.999878 MiB 00:05:14.223 element at address: 0x200034a00000 with size: 0.994446 MiB 00:05:14.223 element at address: 0x200009600000 with size: 0.959839 MiB 00:05:14.223 element at address: 0x200015e00000 with size: 0.954285 MiB 00:05:14.223 element at address: 0x20001c000000 with size: 0.936584 MiB 00:05:14.223 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:14.223 element at address: 0x20001d800000 with size: 0.582886 MiB 00:05:14.223 element at address: 0x200003e00000 with size: 0.495422 MiB 00:05:14.223 element at address: 0x20000d800000 with size: 0.490723 MiB 00:05:14.224 element at address: 0x20001c200000 with size: 0.485657 MiB 00:05:14.224 element at address: 0x200007000000 with size: 0.481934 MiB 00:05:14.224 element at address: 0x20002ac00000 with size: 0.410034 MiB 00:05:14.224 element at address: 0x200003a00000 with size: 0.355042 MiB 00:05:14.224 list of standard malloc elements. size: 199.218628 MiB 00:05:14.224 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:05:14.224 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:05:14.224 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:05:14.224 element at address: 0x20001befff80 with size: 1.000122 MiB 00:05:14.224 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:05:14.224 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:14.224 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:05:14.224 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:14.224 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:05:14.224 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:14.224 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:14.224 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:14.224 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:14.224 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:14.224 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:14.224 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:14.224 element at address: 0x200003a5ae40 with size: 0.000183 MiB 00:05:14.224 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:14.224 element at address: 0x200003a5f300 with size: 0.000183 MiB 00:05:14.224 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:05:14.224 element at address: 0x200003a7f680 with size: 0.000183 MiB 00:05:14.224 element at address: 0x200003aff940 with size: 0.000183 MiB 00:05:14.224 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:14.224 element at address: 0x200003e7ed40 with size: 0.000183 MiB 00:05:14.224 element at address: 0x200003eff000 with size: 0.000183 MiB 00:05:14.224 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:14.224 element at address: 0x20000707b600 with size: 0.000183 MiB 00:05:14.224 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:05:14.224 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:05:14.224 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:05:14.224 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:05:14.224 element at address: 0x20000d87dac0 with size: 0.000183 MiB 
00:05:14.224 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:05:14.224 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:05:14.224 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:05:14.224 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:05:14.224 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:05:14.224 element at address: 0x20001d895380 with size: 0.000183 MiB 00:05:14.224 element at address: 0x20001d895440 with size: 0.000183 MiB 00:05:14.224 element at address: 0x20002ac68f80 with size: 0.000183 MiB 00:05:14.224 element at address: 0x20002ac69040 with size: 0.000183 MiB 00:05:14.224 element at address: 0x20002ac6fc40 with size: 0.000183 MiB 00:05:14.224 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:05:14.224 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:05:14.224 list of memzone associated elements. size: 646.796692 MiB 00:05:14.224 element at address: 0x20001d895500 with size: 211.416748 MiB 00:05:14.224 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:14.224 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:05:14.224 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:14.224 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:05:14.224 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_1065390_0 00:05:14.224 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:14.224 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1065390_0 00:05:14.224 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:14.224 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1065390_0 00:05:14.224 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:05:14.224 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_1065390_0 00:05:14.224 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:05:14.224 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:14.224 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:05:14.224 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:14.224 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:14.224 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1065390 00:05:14.224 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:14.224 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1065390 00:05:14.224 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:14.224 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1065390 00:05:14.224 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:05:14.224 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:14.224 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:05:14.224 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:14.224 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:05:14.224 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:14.224 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:05:14.224 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:14.224 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:14.224 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1065390 00:05:14.224 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:14.224 associated memzone info: 
size: 1.000366 MiB name: RG_ring_1_1065390 00:05:14.224 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:05:14.224 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1065390 00:05:14.224 element at address: 0x200034afe940 with size: 1.000488 MiB 00:05:14.224 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1065390 00:05:14.224 element at address: 0x200003a7f740 with size: 0.500488 MiB 00:05:14.224 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_1065390 00:05:14.224 element at address: 0x200003e7ee00 with size: 0.500488 MiB 00:05:14.224 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1065390 00:05:14.224 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:05:14.224 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:14.224 element at address: 0x20000707b780 with size: 0.500488 MiB 00:05:14.224 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:14.224 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:05:14.224 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:14.224 element at address: 0x200003a5f3c0 with size: 0.125488 MiB 00:05:14.224 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1065390 00:05:14.224 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:05:14.224 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:14.224 element at address: 0x20002ac69100 with size: 0.023743 MiB 00:05:14.224 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:14.224 element at address: 0x200003a5b100 with size: 0.016113 MiB 00:05:14.224 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1065390 00:05:14.224 element at address: 0x20002ac6f240 with size: 0.002441 MiB 00:05:14.224 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:14.224 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:14.224 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1065390 00:05:14.224 element at address: 0x200003affa00 with size: 0.000305 MiB 00:05:14.224 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_1065390 00:05:14.224 element at address: 0x200003a5af00 with size: 0.000305 MiB 00:05:14.224 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1065390 00:05:14.224 element at address: 0x20002ac6fd00 with size: 0.000305 MiB 00:05:14.224 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:14.224 18:15:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:14.224 18:15:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1065390 00:05:14.224 18:15:42 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 1065390 ']' 00:05:14.224 18:15:42 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 1065390 00:05:14.224 18:15:42 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:05:14.224 18:15:42 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:14.224 18:15:42 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1065390 00:05:14.224 18:15:42 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:14.224 18:15:42 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:14.224 18:15:42 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1065390' 
00:05:14.224 killing process with pid 1065390 00:05:14.224 18:15:42 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 1065390 00:05:14.224 18:15:42 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 1065390 00:05:14.794 00:05:14.794 real 0m2.514s 00:05:14.794 user 0m2.744s 00:05:14.794 sys 0m0.797s 00:05:14.794 18:15:43 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:14.794 18:15:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:14.794 ************************************ 00:05:14.794 END TEST dpdk_mem_utility 00:05:14.794 ************************************ 00:05:15.054 18:15:43 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:15.054 18:15:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:15.054 18:15:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:15.054 18:15:43 -- common/autotest_common.sh@10 -- # set +x 00:05:15.054 ************************************ 00:05:15.054 START TEST event 00:05:15.054 ************************************ 00:05:15.054 18:15:43 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:15.054 * Looking for test storage... 00:05:15.054 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:15.054 18:15:43 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:15.054 18:15:43 event -- common/autotest_common.sh@1681 -- # lcov --version 00:05:15.054 18:15:43 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:15.314 18:15:43 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:15.314 18:15:43 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:15.314 18:15:43 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:15.314 18:15:43 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:15.314 18:15:43 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:15.314 18:15:43 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:15.314 18:15:43 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:15.314 18:15:43 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:15.314 18:15:43 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:15.314 18:15:43 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:15.314 18:15:43 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:15.315 18:15:43 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:15.315 18:15:43 event -- scripts/common.sh@344 -- # case "$op" in 00:05:15.315 18:15:43 event -- scripts/common.sh@345 -- # : 1 00:05:15.315 18:15:43 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:15.315 18:15:43 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:15.315 18:15:43 event -- scripts/common.sh@365 -- # decimal 1 00:05:15.315 18:15:43 event -- scripts/common.sh@353 -- # local d=1 00:05:15.315 18:15:43 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:15.315 18:15:43 event -- scripts/common.sh@355 -- # echo 1 00:05:15.315 18:15:43 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:15.315 18:15:43 event -- scripts/common.sh@366 -- # decimal 2 00:05:15.315 18:15:43 event -- scripts/common.sh@353 -- # local d=2 00:05:15.315 18:15:43 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:15.315 18:15:43 event -- scripts/common.sh@355 -- # echo 2 00:05:15.315 18:15:43 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:15.315 18:15:43 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:15.315 18:15:43 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:15.315 18:15:43 event -- scripts/common.sh@368 -- # return 0 00:05:15.315 18:15:43 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:15.315 18:15:43 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:15.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.315 --rc genhtml_branch_coverage=1 00:05:15.315 --rc genhtml_function_coverage=1 00:05:15.315 --rc genhtml_legend=1 00:05:15.315 --rc geninfo_all_blocks=1 00:05:15.315 --rc geninfo_unexecuted_blocks=1 00:05:15.315 00:05:15.315 ' 00:05:15.315 18:15:43 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:15.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.315 --rc genhtml_branch_coverage=1 00:05:15.315 --rc genhtml_function_coverage=1 00:05:15.315 --rc genhtml_legend=1 00:05:15.315 --rc geninfo_all_blocks=1 00:05:15.315 --rc geninfo_unexecuted_blocks=1 00:05:15.315 00:05:15.315 ' 00:05:15.315 18:15:43 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:15.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.315 --rc genhtml_branch_coverage=1 00:05:15.315 --rc genhtml_function_coverage=1 00:05:15.315 --rc genhtml_legend=1 00:05:15.315 --rc geninfo_all_blocks=1 00:05:15.315 --rc geninfo_unexecuted_blocks=1 00:05:15.315 00:05:15.315 ' 00:05:15.315 18:15:43 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:15.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.315 --rc genhtml_branch_coverage=1 00:05:15.315 --rc genhtml_function_coverage=1 00:05:15.315 --rc genhtml_legend=1 00:05:15.315 --rc geninfo_all_blocks=1 00:05:15.315 --rc geninfo_unexecuted_blocks=1 00:05:15.315 00:05:15.315 ' 00:05:15.315 18:15:43 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:15.315 18:15:43 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:15.315 18:15:43 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:15.315 18:15:43 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:15.315 18:15:43 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:15.315 18:15:43 event -- common/autotest_common.sh@10 -- # set +x 00:05:15.315 ************************************ 00:05:15.315 START TEST event_perf 00:05:15.315 ************************************ 00:05:15.315 18:15:43 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:05:15.315 Running I/O for 1 seconds...[2024-10-08 18:15:43.723292] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:05:15.315 [2024-10-08 18:15:43.723439] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1065748 ] 00:05:15.315 [2024-10-08 18:15:43.828433] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:15.578 [2024-10-08 18:15:44.043439] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:15.578 [2024-10-08 18:15:44.043538] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:05:15.578 [2024-10-08 18:15:44.043631] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:05:15.578 [2024-10-08 18:15:44.043635] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.960 Running I/O for 1 seconds... 00:05:16.960 lcore 0: 205127 00:05:16.960 lcore 1: 205126 00:05:16.960 lcore 2: 205126 00:05:16.960 lcore 3: 205127 00:05:16.960 done. 00:05:16.960 00:05:16.960 real 0m1.540s 00:05:16.960 user 0m4.386s 00:05:16.960 sys 0m0.144s 00:05:16.960 18:15:45 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:16.960 18:15:45 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:16.960 ************************************ 00:05:16.960 END TEST event_perf 00:05:16.960 ************************************ 00:05:16.960 18:15:45 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:16.960 18:15:45 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:16.960 18:15:45 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:16.960 18:15:45 event -- common/autotest_common.sh@10 -- # set +x 00:05:16.960 ************************************ 00:05:16.960 START TEST event_reactor 00:05:16.960 ************************************ 00:05:16.960 18:15:45 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:16.960 [2024-10-08 18:15:45.314365] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
00:05:16.960 [2024-10-08 18:15:45.314432] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1065912 ] 00:05:16.960 [2024-10-08 18:15:45.421062] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.219 [2024-10-08 18:15:45.639171] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.601 test_start 00:05:18.601 oneshot 00:05:18.601 tick 100 00:05:18.601 tick 100 00:05:18.601 tick 250 00:05:18.601 tick 100 00:05:18.601 tick 100 00:05:18.601 tick 100 00:05:18.601 tick 250 00:05:18.601 tick 500 00:05:18.601 tick 100 00:05:18.601 tick 100 00:05:18.601 tick 250 00:05:18.601 tick 100 00:05:18.601 tick 100 00:05:18.601 test_end 00:05:18.601 00:05:18.601 real 0m1.549s 00:05:18.601 user 0m1.407s 00:05:18.601 sys 0m0.130s 00:05:18.601 18:15:46 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:18.601 18:15:46 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:18.602 ************************************ 00:05:18.602 END TEST event_reactor 00:05:18.602 ************************************ 00:05:18.602 18:15:46 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:18.602 18:15:46 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:18.602 18:15:46 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:18.602 18:15:46 event -- common/autotest_common.sh@10 -- # set +x 00:05:18.602 ************************************ 00:05:18.602 START TEST event_reactor_perf 00:05:18.602 ************************************ 00:05:18.602 18:15:46 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:18.602 [2024-10-08 18:15:46.918601] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
00:05:18.602 [2024-10-08 18:15:46.918762] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1066135 ] 00:05:18.602 [2024-10-08 18:15:47.019511] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.862 [2024-10-08 18:15:47.239608] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.243 test_start 00:05:20.244 test_end 00:05:20.244 Performance: 159538 events per second 00:05:20.244 00:05:20.244 real 0m1.534s 00:05:20.244 user 0m1.385s 00:05:20.244 sys 0m0.136s 00:05:20.244 18:15:48 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:20.244 18:15:48 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:20.244 ************************************ 00:05:20.244 END TEST event_reactor_perf 00:05:20.244 ************************************ 00:05:20.244 18:15:48 event -- event/event.sh@49 -- # uname -s 00:05:20.244 18:15:48 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:20.244 18:15:48 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:20.244 18:15:48 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:20.244 18:15:48 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:20.244 18:15:48 event -- common/autotest_common.sh@10 -- # set +x 00:05:20.244 ************************************ 00:05:20.244 START TEST event_scheduler 00:05:20.244 ************************************ 00:05:20.244 18:15:48 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:20.244 * Looking for test storage... 
00:05:20.244 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:20.244 18:15:48 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:20.244 18:15:48 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:05:20.244 18:15:48 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:20.244 18:15:48 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:20.244 18:15:48 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:20.244 18:15:48 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:20.244 18:15:48 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:20.244 18:15:48 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:20.244 18:15:48 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:20.244 18:15:48 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:20.244 18:15:48 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:20.244 18:15:48 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:20.244 18:15:48 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:20.244 18:15:48 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:20.244 18:15:48 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:20.244 18:15:48 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:20.244 18:15:48 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:20.244 18:15:48 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:20.244 18:15:48 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:20.244 18:15:48 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:20.244 18:15:48 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:20.244 18:15:48 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:20.244 18:15:48 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:20.244 18:15:48 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:20.244 18:15:48 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:20.244 18:15:48 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:20.244 18:15:48 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:20.244 18:15:48 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:20.244 18:15:48 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:20.244 18:15:48 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:20.244 18:15:48 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:20.244 18:15:48 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:20.244 18:15:48 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:20.244 18:15:48 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:20.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.244 --rc genhtml_branch_coverage=1 00:05:20.244 --rc genhtml_function_coverage=1 00:05:20.244 --rc genhtml_legend=1 00:05:20.244 --rc geninfo_all_blocks=1 00:05:20.244 --rc geninfo_unexecuted_blocks=1 00:05:20.244 00:05:20.244 ' 00:05:20.244 18:15:48 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:20.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.244 --rc genhtml_branch_coverage=1 00:05:20.244 --rc genhtml_function_coverage=1 00:05:20.244 --rc genhtml_legend=1 00:05:20.244 --rc geninfo_all_blocks=1 00:05:20.244 --rc geninfo_unexecuted_blocks=1 00:05:20.244 00:05:20.244 ' 00:05:20.244 18:15:48 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:20.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.244 --rc genhtml_branch_coverage=1 00:05:20.244 --rc genhtml_function_coverage=1 00:05:20.244 --rc genhtml_legend=1 00:05:20.244 --rc geninfo_all_blocks=1 00:05:20.244 --rc geninfo_unexecuted_blocks=1 00:05:20.244 00:05:20.244 ' 00:05:20.244 18:15:48 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:20.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.244 --rc genhtml_branch_coverage=1 00:05:20.244 --rc genhtml_function_coverage=1 00:05:20.244 --rc genhtml_legend=1 00:05:20.244 --rc geninfo_all_blocks=1 00:05:20.244 --rc geninfo_unexecuted_blocks=1 00:05:20.244 00:05:20.244 ' 00:05:20.244 18:15:48 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:20.244 18:15:48 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1066390 00:05:20.244 18:15:48 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:20.244 18:15:48 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:20.244 18:15:48 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 
1066390 00:05:20.244 18:15:48 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 1066390 ']' 00:05:20.244 18:15:48 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.244 18:15:48 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:20.244 18:15:48 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.244 18:15:48 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:20.244 18:15:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:20.503 [2024-10-08 18:15:48.798728] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:05:20.503 [2024-10-08 18:15:48.798825] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1066390 ] 00:05:20.503 [2024-10-08 18:15:48.876750] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:20.762 [2024-10-08 18:15:49.088590] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.762 [2024-10-08 18:15:49.088759] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:05:20.762 [2024-10-08 18:15:49.088711] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:20.762 [2024-10-08 18:15:49.088763] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:05:20.762 18:15:49 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:20.762 18:15:49 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:20.762 18:15:49 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:20.762 18:15:49 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.762 18:15:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:20.762 [2024-10-08 18:15:49.125589] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:20.762 [2024-10-08 18:15:49.125620] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:20.762 [2024-10-08 18:15:49.125640] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:20.762 [2024-10-08 18:15:49.125663] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:20.762 [2024-10-08 18:15:49.125676] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:20.762 18:15:49 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.762 18:15:49 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:20.762 18:15:49 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.762 18:15:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:21.021 [2024-10-08 18:15:49.301433] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
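The trace above is the usual --wait-for-rpc startup sequence: the scheduler test app was launched with --wait-for-rpc, so the framework sits idle until the dynamic scheduler is selected and framework_start_init is issued over the RPC socket (the /var/tmp/spdk.sock address that waitforlisten polls above). A minimal manual equivalent, assuming the same workspace path and that default socket, would be roughly:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC framework_set_scheduler dynamic   # select the dynamic scheduler; it continues without the dpdk governor, as the *ERROR*/*NOTICE* lines show
  $RPC framework_start_init              # let the app finish subsystem initialization
  $RPC framework_get_scheduler           # sanity check that "dynamic" is now active

All three methods appear in the rpc_get_methods listing earlier in this log; treat the exact paths as assumptions based on this particular workspace layout.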
00:05:21.021 18:15:49 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.021 18:15:49 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:21.021 18:15:49 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:21.021 18:15:49 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:21.021 18:15:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:21.021 ************************************ 00:05:21.021 START TEST scheduler_create_thread 00:05:21.021 ************************************ 00:05:21.021 18:15:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:21.021 18:15:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:21.021 18:15:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.021 18:15:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.021 2 00:05:21.021 18:15:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.021 18:15:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:21.022 18:15:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.022 18:15:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.022 3 00:05:21.022 18:15:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.022 18:15:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:21.022 18:15:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.022 18:15:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.022 4 00:05:21.022 18:15:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.022 18:15:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:21.022 18:15:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.022 18:15:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.022 5 00:05:21.022 18:15:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.022 18:15:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:21.022 18:15:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.022 18:15:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.022 6 00:05:21.022 18:15:49 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.022 18:15:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:21.022 18:15:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.022 18:15:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.022 7 00:05:21.022 18:15:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.022 18:15:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:21.022 18:15:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.022 18:15:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.022 8 00:05:21.022 18:15:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.022 18:15:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:21.022 18:15:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.022 18:15:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.022 9 00:05:21.022 18:15:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.022 18:15:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:21.022 18:15:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.022 18:15:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.022 10 00:05:21.022 18:15:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.022 18:15:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:21.022 18:15:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.022 18:15:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.022 18:15:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.022 18:15:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:21.022 18:15:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:21.022 18:15:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.022 18:15:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.022 18:15:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.022 18:15:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:21.022 18:15:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.022 18:15:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.398 18:15:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.398 18:15:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:22.398 18:15:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:22.398 18:15:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.398 18:15:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.776 18:15:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:23.776 00:05:23.776 real 0m2.622s 00:05:23.776 user 0m0.013s 00:05:23.776 sys 0m0.006s 00:05:23.776 18:15:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:23.776 18:15:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.776 ************************************ 00:05:23.776 END TEST scheduler_create_thread 00:05:23.776 ************************************ 00:05:23.776 18:15:51 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:23.776 18:15:51 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1066390 00:05:23.776 18:15:51 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 1066390 ']' 00:05:23.776 18:15:51 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 1066390 00:05:23.776 18:15:51 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:23.776 18:15:51 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:23.776 18:15:51 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1066390 00:05:23.776 18:15:52 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:23.776 18:15:52 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:23.776 18:15:52 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1066390' 00:05:23.776 killing process with pid 1066390 00:05:23.776 18:15:52 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 1066390 00:05:23.776 18:15:52 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 1066390 00:05:24.034 [2024-10-08 18:15:52.435255] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
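The scheduler_create_thread trace above drives everything through the scheduler RPC plugin that ships with this test; rpc_cmd is the autotest wrapper around scripts/rpc.py. A condensed sketch of the same sequence, with the plugin name and argument values taken from the trace and the thread ids treated as whatever scheduler_thread_create returns on a given run:

  RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py --plugin scheduler_plugin"
  $RPC scheduler_thread_create -n active_pinned -m 0x1 -a 100    # pinned thread at 100% active load
  $RPC scheduler_thread_create -n idle_pinned   -m 0x1 -a 0      # pinned thread, fully idle
  id=$($RPC scheduler_thread_create -n half_active -a 0)         # unpinned thread; the create call prints the new thread id
  $RPC scheduler_thread_set_active "$id" 50                      # raise it to 50% active load
  id=$($RPC scheduler_thread_create -n deleted -a 100)
  $RPC scheduler_thread_delete "$id"                             # exercise thread teardown under the scheduler

How scheduler_plugin is made importable for rpc.py's --plugin mechanism is hidden inside the rpc_cmd wrapper here, so treat that wiring as an assumption.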
00:05:24.602 00:05:24.602 real 0m4.346s 00:05:24.602 user 0m6.226s 00:05:24.602 sys 0m0.492s 00:05:24.602 18:15:52 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:24.602 18:15:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:24.602 ************************************ 00:05:24.602 END TEST event_scheduler 00:05:24.602 ************************************ 00:05:24.602 18:15:52 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:24.602 18:15:52 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:24.602 18:15:52 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:24.602 18:15:52 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:24.602 18:15:52 event -- common/autotest_common.sh@10 -- # set +x 00:05:24.602 ************************************ 00:05:24.602 START TEST app_repeat 00:05:24.602 ************************************ 00:05:24.602 18:15:52 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:24.602 18:15:52 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.602 18:15:52 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.602 18:15:52 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:24.602 18:15:52 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:24.602 18:15:52 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:24.602 18:15:52 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:24.602 18:15:52 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:24.602 18:15:52 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1066958 00:05:24.602 18:15:52 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:24.602 18:15:52 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:24.602 18:15:52 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1066958' 00:05:24.602 Process app_repeat pid: 1066958 00:05:24.602 18:15:52 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:24.602 18:15:52 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:24.602 spdk_app_start Round 0 00:05:24.602 18:15:52 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1066958 /var/tmp/spdk-nbd.sock 00:05:24.602 18:15:52 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1066958 ']' 00:05:24.602 18:15:52 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:24.602 18:15:52 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:24.602 18:15:52 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:24.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:24.602 18:15:52 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:24.602 18:15:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:24.602 [2024-10-08 18:15:52.928974] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
00:05:24.602 [2024-10-08 18:15:52.929043] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1066958 ] 00:05:24.602 [2024-10-08 18:15:53.037603] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:24.861 [2024-10-08 18:15:53.264308] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.861 [2024-10-08 18:15:53.264327] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.120 18:15:53 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:25.120 18:15:53 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:25.120 18:15:53 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:25.688 Malloc0 00:05:25.688 18:15:54 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:26.258 Malloc1 00:05:26.258 18:15:54 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:26.258 18:15:54 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.258 18:15:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:26.258 18:15:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:26.258 18:15:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.258 18:15:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:26.258 18:15:54 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:26.258 18:15:54 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.258 18:15:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:26.258 18:15:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:26.258 18:15:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.258 18:15:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:26.258 18:15:54 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:26.258 18:15:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:26.258 18:15:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:26.258 18:15:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:26.827 /dev/nbd0 00:05:26.827 18:15:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:26.827 18:15:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:26.827 18:15:55 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:26.827 18:15:55 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:26.827 18:15:55 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:26.827 18:15:55 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:26.827 18:15:55 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 
/proc/partitions 00:05:26.827 18:15:55 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:26.827 18:15:55 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:26.827 18:15:55 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:26.827 18:15:55 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:26.827 1+0 records in 00:05:26.827 1+0 records out 00:05:26.828 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0224078 s, 183 kB/s 00:05:26.828 18:15:55 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:26.828 18:15:55 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:26.828 18:15:55 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:26.828 18:15:55 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:26.828 18:15:55 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:26.828 18:15:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:26.828 18:15:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:26.828 18:15:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:27.396 /dev/nbd1 00:05:27.656 18:15:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:27.656 18:15:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:27.656 18:15:55 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:27.656 18:15:55 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:27.656 18:15:55 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:27.656 18:15:55 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:27.656 18:15:55 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:27.656 18:15:55 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:27.656 18:15:55 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:27.656 18:15:55 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:27.656 18:15:55 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:27.656 1+0 records in 00:05:27.656 1+0 records out 00:05:27.656 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000308517 s, 13.3 MB/s 00:05:27.656 18:15:55 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:27.656 18:15:55 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:27.656 18:15:55 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:27.656 18:15:55 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:27.656 18:15:55 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:27.656 18:15:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:27.656 18:15:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.656 18:15:55 
event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:27.656 18:15:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.656 18:15:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:28.226 18:15:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:28.226 { 00:05:28.226 "nbd_device": "/dev/nbd0", 00:05:28.226 "bdev_name": "Malloc0" 00:05:28.226 }, 00:05:28.226 { 00:05:28.226 "nbd_device": "/dev/nbd1", 00:05:28.226 "bdev_name": "Malloc1" 00:05:28.226 } 00:05:28.226 ]' 00:05:28.226 18:15:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:28.226 { 00:05:28.226 "nbd_device": "/dev/nbd0", 00:05:28.226 "bdev_name": "Malloc0" 00:05:28.226 }, 00:05:28.226 { 00:05:28.226 "nbd_device": "/dev/nbd1", 00:05:28.226 "bdev_name": "Malloc1" 00:05:28.226 } 00:05:28.226 ]' 00:05:28.226 18:15:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:28.226 18:15:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:28.226 /dev/nbd1' 00:05:28.226 18:15:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:28.226 /dev/nbd1' 00:05:28.226 18:15:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:28.226 18:15:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:28.226 18:15:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:28.226 18:15:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:28.226 18:15:56 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:28.226 18:15:56 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:28.226 18:15:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.226 18:15:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:28.226 18:15:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:28.226 18:15:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:28.226 18:15:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:28.226 18:15:56 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:28.226 256+0 records in 00:05:28.227 256+0 records out 00:05:28.227 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00806473 s, 130 MB/s 00:05:28.227 18:15:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:28.227 18:15:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:28.227 256+0 records in 00:05:28.227 256+0 records out 00:05:28.227 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.024645 s, 42.5 MB/s 00:05:28.227 18:15:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:28.227 18:15:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:28.227 256+0 records in 00:05:28.227 256+0 records out 00:05:28.227 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0394106 s, 26.6 MB/s 00:05:28.227 18:15:56 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:28.227 18:15:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.227 18:15:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:28.227 18:15:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:28.227 18:15:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:28.227 18:15:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:28.227 18:15:56 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:28.227 18:15:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:28.227 18:15:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:28.227 18:15:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:28.227 18:15:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:28.227 18:15:56 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:28.227 18:15:56 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:28.227 18:15:56 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.227 18:15:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.227 18:15:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:28.227 18:15:56 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:28.227 18:15:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:28.227 18:15:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:29.166 18:15:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:29.166 18:15:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:29.166 18:15:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:29.166 18:15:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:29.166 18:15:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:29.166 18:15:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:29.166 18:15:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:29.166 18:15:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:29.166 18:15:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:29.166 18:15:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:29.424 18:15:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:29.424 18:15:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:29.424 18:15:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:29.424 18:15:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:29.424 18:15:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:05:29.424 18:15:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:29.424 18:15:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:29.424 18:15:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:29.424 18:15:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:29.424 18:15:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.424 18:15:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:30.359 18:15:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:30.359 18:15:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:30.359 18:15:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:30.359 18:15:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:30.359 18:15:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:30.359 18:15:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:30.359 18:15:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:30.359 18:15:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:30.359 18:15:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:30.359 18:15:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:30.359 18:15:58 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:30.359 18:15:58 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:30.359 18:15:58 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:30.616 18:15:59 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:31.185 [2024-10-08 18:15:59.490274] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:31.185 [2024-10-08 18:15:59.707771] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:31.185 [2024-10-08 18:15:59.707772] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.467 [2024-10-08 18:15:59.791013] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:31.467 [2024-10-08 18:15:59.791137] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:34.013 18:16:02 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:34.013 18:16:02 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:34.013 spdk_app_start Round 1 00:05:34.013 18:16:02 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1066958 /var/tmp/spdk-nbd.sock 00:05:34.013 18:16:02 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1066958 ']' 00:05:34.013 18:16:02 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:34.013 18:16:02 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:34.013 18:16:02 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:34.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
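Each app_repeat round that follows drives the app entirely over its RPC socket: two 64 MiB malloc bdevs with a 4096-byte block size are created, exposed as /dev/nbd0 and /dev/nbd1, and waitfornbd polls /proc/partitions before any I/O is attempted. The manual equivalent of the calls visible in the trace (paths and sizes copied from the log, the retry loop is a simplification) is roughly:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk-nbd.sock

    $RPC -s $SOCK bdev_malloc_create 64 4096          # prints the bdev name (Malloc0)
    $RPC -s $SOCK bdev_malloc_create 64 4096          # second bdev (Malloc1)

    $RPC -s $SOCK nbd_start_disk Malloc0 /dev/nbd0    # attach each bdev to an NBD node
    $RPC -s $SOCK nbd_start_disk Malloc1 /dev/nbd1

    # waitfornbd: retry until the kernel lists the device (the real helper caps this at 20 tries)
    until grep -q -w nbd0 /proc/partitions; do sleep 0.1; done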
00:05:34.013 18:16:02 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:34.013 18:16:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:34.013 18:16:02 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:34.013 18:16:02 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:34.013 18:16:02 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:34.273 Malloc0 00:05:34.273 18:16:02 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:34.843 Malloc1 00:05:34.843 18:16:03 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:34.843 18:16:03 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.843 18:16:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:34.843 18:16:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:34.843 18:16:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.843 18:16:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:34.843 18:16:03 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:34.843 18:16:03 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.843 18:16:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:34.843 18:16:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:34.843 18:16:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.843 18:16:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:34.843 18:16:03 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:34.843 18:16:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:34.843 18:16:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:34.843 18:16:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:35.779 /dev/nbd0 00:05:35.779 18:16:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:35.779 18:16:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:35.779 18:16:04 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:35.779 18:16:04 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:35.779 18:16:04 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:35.779 18:16:04 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:35.779 18:16:04 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:35.779 18:16:04 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:35.779 18:16:04 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:35.779 18:16:04 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:35.779 18:16:04 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:35.779 1+0 records in 00:05:35.779 1+0 records out 00:05:35.779 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000239441 s, 17.1 MB/s 00:05:35.779 18:16:04 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:35.779 18:16:04 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:35.779 18:16:04 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:35.779 18:16:04 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:35.779 18:16:04 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:35.779 18:16:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:35.779 18:16:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:35.779 18:16:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:36.037 /dev/nbd1 00:05:36.037 18:16:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:36.037 18:16:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:36.037 18:16:04 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:36.037 18:16:04 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:36.037 18:16:04 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:36.037 18:16:04 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:36.037 18:16:04 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:36.037 18:16:04 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:36.037 18:16:04 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:36.037 18:16:04 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:36.037 18:16:04 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:36.037 1+0 records in 00:05:36.037 1+0 records out 00:05:36.037 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000234539 s, 17.5 MB/s 00:05:36.037 18:16:04 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:36.037 18:16:04 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:36.037 18:16:04 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:36.037 18:16:04 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:36.037 18:16:04 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:36.037 18:16:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:36.037 18:16:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.037 18:16:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:36.037 18:16:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.037 18:16:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:36.295 18:16:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:36.295 { 00:05:36.295 "nbd_device": "/dev/nbd0", 00:05:36.295 "bdev_name": "Malloc0" 00:05:36.295 }, 00:05:36.295 { 00:05:36.295 "nbd_device": "/dev/nbd1", 00:05:36.295 "bdev_name": "Malloc1" 00:05:36.295 } 00:05:36.295 ]' 00:05:36.295 18:16:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:36.295 { 00:05:36.295 "nbd_device": "/dev/nbd0", 00:05:36.295 "bdev_name": "Malloc0" 00:05:36.295 }, 00:05:36.295 { 00:05:36.295 "nbd_device": "/dev/nbd1", 00:05:36.295 "bdev_name": "Malloc1" 00:05:36.295 } 00:05:36.295 ]' 00:05:36.295 18:16:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:36.295 18:16:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:36.295 /dev/nbd1' 00:05:36.295 18:16:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:36.295 /dev/nbd1' 00:05:36.295 18:16:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:36.295 18:16:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:36.295 18:16:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:36.295 18:16:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:36.295 18:16:04 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:36.295 18:16:04 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:36.295 18:16:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.295 18:16:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:36.295 18:16:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:36.295 18:16:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:36.295 18:16:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:36.296 18:16:04 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:36.296 256+0 records in 00:05:36.296 256+0 records out 00:05:36.296 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00528983 s, 198 MB/s 00:05:36.296 18:16:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:36.296 18:16:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:36.554 256+0 records in 00:05:36.554 256+0 records out 00:05:36.554 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0254386 s, 41.2 MB/s 00:05:36.554 18:16:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:36.555 18:16:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:36.555 256+0 records in 00:05:36.555 256+0 records out 00:05:36.555 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0405672 s, 25.8 MB/s 00:05:36.555 18:16:04 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:36.555 18:16:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.555 18:16:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:36.555 18:16:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:36.555 18:16:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:36.555 18:16:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:36.555 18:16:04 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:36.555 18:16:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:36.555 18:16:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:36.555 18:16:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:36.555 18:16:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:36.555 18:16:04 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:36.555 18:16:04 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:36.555 18:16:04 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.555 18:16:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.555 18:16:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:36.555 18:16:04 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:36.555 18:16:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:36.555 18:16:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:36.814 18:16:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:36.814 18:16:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:36.814 18:16:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:36.814 18:16:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:36.814 18:16:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:36.815 18:16:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:36.815 18:16:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:36.815 18:16:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:36.815 18:16:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:36.815 18:16:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:37.754 18:16:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:37.754 18:16:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:37.754 18:16:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:37.754 18:16:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:37.754 18:16:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:37.754 18:16:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:37.754 18:16:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:37.754 18:16:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:37.754 18:16:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:37.754 18:16:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.754 18:16:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:38.012 18:16:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:38.012 18:16:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:38.012 18:16:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:38.012 18:16:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:38.013 18:16:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:38.013 18:16:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:38.013 18:16:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:38.013 18:16:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:38.013 18:16:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:38.013 18:16:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:38.013 18:16:06 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:38.013 18:16:06 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:38.013 18:16:06 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:38.582 18:16:06 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:38.841 [2024-10-08 18:16:07.242296] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:39.101 [2024-10-08 18:16:07.456808] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.101 [2024-10-08 18:16:07.456825] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.101 [2024-10-08 18:16:07.559024] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:39.101 [2024-10-08 18:16:07.559154] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:41.645 18:16:09 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:41.645 18:16:09 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:41.645 spdk_app_start Round 2 00:05:41.645 18:16:09 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1066958 /var/tmp/spdk-nbd.sock 00:05:41.645 18:16:09 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1066958 ']' 00:05:41.645 18:16:09 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:41.645 18:16:09 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:41.645 18:16:09 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:41.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
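The nbd_dd_data_verify pass repeated in every round has the same shape: seed a 1 MiB file from /dev/urandom, write it onto both NBD devices with direct I/O, then compare the first 1 MiB of each device back against the seed file and delete it. As a stand-alone sketch using the temp-file path from the trace:

    TMP=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest

    dd if=/dev/urandom of=$TMP bs=4096 count=256              # 256 x 4 KiB = 1 MiB seed data
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if=$TMP of=$dev bs=4096 count=256 oflag=direct      # write phase
    done
    for dev in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M $TMP $dev                                 # verify phase; non-zero exit on mismatch
    done
    rm $TMP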
00:05:41.645 18:16:09 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:41.645 18:16:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:41.904 18:16:10 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:41.904 18:16:10 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:41.904 18:16:10 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:42.241 Malloc0 00:05:42.241 18:16:10 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:42.500 Malloc1 00:05:42.500 18:16:10 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:42.500 18:16:10 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.500 18:16:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:42.500 18:16:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:42.500 18:16:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.500 18:16:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:42.500 18:16:10 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:42.500 18:16:10 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.500 18:16:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:42.500 18:16:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:42.500 18:16:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.500 18:16:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:42.500 18:16:10 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:42.500 18:16:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:42.500 18:16:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:42.500 18:16:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:43.069 /dev/nbd0 00:05:43.069 18:16:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:43.069 18:16:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:43.069 18:16:11 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:43.069 18:16:11 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:43.069 18:16:11 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:43.069 18:16:11 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:43.069 18:16:11 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:43.069 18:16:11 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:43.069 18:16:11 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:43.069 18:16:11 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:43.069 18:16:11 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:43.069 1+0 records in 00:05:43.069 1+0 records out 00:05:43.069 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00019696 s, 20.8 MB/s 00:05:43.069 18:16:11 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:43.069 18:16:11 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:43.069 18:16:11 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:43.069 18:16:11 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:43.069 18:16:11 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:43.069 18:16:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:43.069 18:16:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:43.069 18:16:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:44.007 /dev/nbd1 00:05:44.007 18:16:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:44.007 18:16:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:44.007 18:16:12 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:44.007 18:16:12 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:44.007 18:16:12 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:44.007 18:16:12 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:44.007 18:16:12 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:44.007 18:16:12 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:44.007 18:16:12 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:44.007 18:16:12 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:44.007 18:16:12 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:44.007 1+0 records in 00:05:44.007 1+0 records out 00:05:44.007 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000391575 s, 10.5 MB/s 00:05:44.007 18:16:12 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:44.007 18:16:12 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:44.007 18:16:12 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:44.007 18:16:12 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:44.007 18:16:12 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:44.007 18:16:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:44.007 18:16:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:44.007 18:16:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:44.007 18:16:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.007 18:16:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:44.574 18:16:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:44.574 { 00:05:44.574 "nbd_device": "/dev/nbd0", 00:05:44.574 "bdev_name": "Malloc0" 00:05:44.574 }, 00:05:44.574 { 00:05:44.574 "nbd_device": "/dev/nbd1", 00:05:44.574 "bdev_name": "Malloc1" 00:05:44.574 } 00:05:44.574 ]' 00:05:44.574 18:16:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:44.574 { 00:05:44.574 "nbd_device": "/dev/nbd0", 00:05:44.574 "bdev_name": "Malloc0" 00:05:44.574 }, 00:05:44.574 { 00:05:44.574 "nbd_device": "/dev/nbd1", 00:05:44.574 "bdev_name": "Malloc1" 00:05:44.574 } 00:05:44.574 ]' 00:05:44.574 18:16:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:44.574 18:16:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:44.574 /dev/nbd1' 00:05:44.574 18:16:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:44.574 /dev/nbd1' 00:05:44.574 18:16:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:44.574 18:16:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:44.574 18:16:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:44.574 18:16:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:44.574 18:16:12 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:44.574 18:16:12 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:44.574 18:16:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.574 18:16:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:44.574 18:16:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:44.574 18:16:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:44.574 18:16:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:44.574 18:16:12 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:44.574 256+0 records in 00:05:44.574 256+0 records out 00:05:44.574 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00526158 s, 199 MB/s 00:05:44.574 18:16:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:44.574 18:16:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:44.574 256+0 records in 00:05:44.574 256+0 records out 00:05:44.574 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0435863 s, 24.1 MB/s 00:05:44.574 18:16:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:44.574 18:16:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:44.574 256+0 records in 00:05:44.574 256+0 records out 00:05:44.574 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0269699 s, 38.9 MB/s 00:05:44.574 18:16:13 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:44.574 18:16:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.574 18:16:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:44.574 18:16:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:44.574 18:16:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:44.574 18:16:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:44.574 18:16:13 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:44.574 18:16:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:44.574 18:16:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:44.574 18:16:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:44.574 18:16:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:44.574 18:16:13 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:44.574 18:16:13 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:44.574 18:16:13 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.574 18:16:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.574 18:16:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:44.574 18:16:13 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:44.574 18:16:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:44.574 18:16:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:45.141 18:16:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:45.141 18:16:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:45.141 18:16:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:45.141 18:16:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:45.141 18:16:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:45.141 18:16:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:45.141 18:16:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:45.141 18:16:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:45.141 18:16:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:45.141 18:16:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:45.709 18:16:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:45.709 18:16:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:45.709 18:16:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:45.709 18:16:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:45.709 18:16:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:45.709 18:16:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:45.709 18:16:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:45.709 18:16:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:45.709 18:16:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:45.709 18:16:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.709 18:16:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:45.967 18:16:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:45.967 18:16:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:45.967 18:16:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:46.227 18:16:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:46.227 18:16:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:46.227 18:16:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:46.227 18:16:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:46.227 18:16:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:46.227 18:16:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:46.227 18:16:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:46.227 18:16:14 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:46.227 18:16:14 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:46.227 18:16:14 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:46.796 18:16:15 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:47.372 [2024-10-08 18:16:15.630085] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:47.372 [2024-10-08 18:16:15.840556] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:47.372 [2024-10-08 18:16:15.840570] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.631 [2024-10-08 18:16:15.935347] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:47.631 [2024-10-08 18:16:15.935421] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:50.171 18:16:18 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1066958 /var/tmp/spdk-nbd.sock 00:05:50.171 18:16:18 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1066958 ']' 00:05:50.171 18:16:18 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:50.171 18:16:18 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:50.171 18:16:18 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:50.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
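Round teardown is RPC-driven as well: stop both NBD disks, check that nbd_get_disks now returns an empty list, then send the app SIGTERM through the RPC socket and pause before the next round. A sketch matching the calls in the trace:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk-nbd.sock

    $RPC -s $SOCK nbd_stop_disk /dev/nbd0
    $RPC -s $SOCK nbd_stop_disk /dev/nbd1

    # grep -c exits non-zero when nothing matches, hence the || true
    count=$($RPC -s $SOCK nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [ "$count" -eq 0 ]

    $RPC -s $SOCK spdk_kill_instance SIGTERM                   # ends this app iteration
    sleep 3                                                    # matches the 'sleep 3' between rounds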
00:05:50.171 18:16:18 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:50.171 18:16:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:50.171 18:16:18 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:50.171 18:16:18 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:50.171 18:16:18 event.app_repeat -- event/event.sh@39 -- # killprocess 1066958 00:05:50.171 18:16:18 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 1066958 ']' 00:05:50.171 18:16:18 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 1066958 00:05:50.171 18:16:18 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:05:50.171 18:16:18 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:50.171 18:16:18 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1066958 00:05:50.171 18:16:18 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:50.171 18:16:18 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:50.171 18:16:18 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1066958' 00:05:50.171 killing process with pid 1066958 00:05:50.171 18:16:18 event.app_repeat -- common/autotest_common.sh@969 -- # kill 1066958 00:05:50.171 18:16:18 event.app_repeat -- common/autotest_common.sh@974 -- # wait 1066958 00:05:50.740 spdk_app_start is called in Round 0. 00:05:50.740 Shutdown signal received, stop current app iteration 00:05:50.741 Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 reinitialization... 00:05:50.741 spdk_app_start is called in Round 1. 00:05:50.741 Shutdown signal received, stop current app iteration 00:05:50.741 Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 reinitialization... 00:05:50.741 spdk_app_start is called in Round 2. 00:05:50.741 Shutdown signal received, stop current app iteration 00:05:50.741 Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 reinitialization... 00:05:50.741 spdk_app_start is called in Round 3. 
00:05:50.741 Shutdown signal received, stop current app iteration 00:05:50.741 18:16:18 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:50.741 18:16:18 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:50.741 00:05:50.741 real 0m26.089s 00:05:50.741 user 0m59.005s 00:05:50.741 sys 0m5.834s 00:05:50.741 18:16:18 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:50.741 18:16:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:50.741 ************************************ 00:05:50.741 END TEST app_repeat 00:05:50.741 ************************************ 00:05:50.741 18:16:19 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:50.741 18:16:19 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:50.741 18:16:19 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:50.741 18:16:19 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:50.741 18:16:19 event -- common/autotest_common.sh@10 -- # set +x 00:05:50.741 ************************************ 00:05:50.741 START TEST cpu_locks 00:05:50.741 ************************************ 00:05:50.741 18:16:19 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:50.741 * Looking for test storage... 00:05:50.741 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:50.741 18:16:19 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:50.741 18:16:19 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:05:50.741 18:16:19 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:51.000 18:16:19 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:51.000 18:16:19 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:51.000 18:16:19 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:51.000 18:16:19 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:51.000 18:16:19 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:51.000 18:16:19 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:51.000 18:16:19 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:51.000 18:16:19 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:51.000 18:16:19 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:51.000 18:16:19 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:51.000 18:16:19 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:51.000 18:16:19 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:51.000 18:16:19 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:51.000 18:16:19 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:51.000 18:16:19 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:51.000 18:16:19 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:51.000 18:16:19 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:51.000 18:16:19 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:51.000 18:16:19 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:51.000 18:16:19 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:51.000 18:16:19 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:51.000 18:16:19 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:51.000 18:16:19 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:51.000 18:16:19 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:51.000 18:16:19 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:51.000 18:16:19 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:51.000 18:16:19 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:51.000 18:16:19 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:51.000 18:16:19 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:51.000 18:16:19 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:51.000 18:16:19 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:51.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.000 --rc genhtml_branch_coverage=1 00:05:51.000 --rc genhtml_function_coverage=1 00:05:51.000 --rc genhtml_legend=1 00:05:51.000 --rc geninfo_all_blocks=1 00:05:51.000 --rc geninfo_unexecuted_blocks=1 00:05:51.000 00:05:51.000 ' 00:05:51.000 18:16:19 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:51.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.001 --rc genhtml_branch_coverage=1 00:05:51.001 --rc genhtml_function_coverage=1 00:05:51.001 --rc genhtml_legend=1 00:05:51.001 --rc geninfo_all_blocks=1 00:05:51.001 --rc geninfo_unexecuted_blocks=1 00:05:51.001 00:05:51.001 ' 00:05:51.001 18:16:19 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:51.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.001 --rc genhtml_branch_coverage=1 00:05:51.001 --rc genhtml_function_coverage=1 00:05:51.001 --rc genhtml_legend=1 00:05:51.001 --rc geninfo_all_blocks=1 00:05:51.001 --rc geninfo_unexecuted_blocks=1 00:05:51.001 00:05:51.001 ' 00:05:51.001 18:16:19 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:51.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.001 --rc genhtml_branch_coverage=1 00:05:51.001 --rc genhtml_function_coverage=1 00:05:51.001 --rc genhtml_legend=1 00:05:51.001 --rc geninfo_all_blocks=1 00:05:51.001 --rc geninfo_unexecuted_blocks=1 00:05:51.001 00:05:51.001 ' 00:05:51.001 18:16:19 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:51.001 18:16:19 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:51.001 18:16:19 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:51.001 18:16:19 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:51.001 18:16:19 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:51.001 18:16:19 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:51.001 18:16:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:51.001 ************************************ 
00:05:51.001 START TEST default_locks 00:05:51.001 ************************************ 00:05:51.001 18:16:19 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:05:51.001 18:16:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1070128 00:05:51.001 18:16:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:51.001 18:16:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1070128 00:05:51.001 18:16:19 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 1070128 ']' 00:05:51.001 18:16:19 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.001 18:16:19 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:51.001 18:16:19 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.001 18:16:19 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:51.001 18:16:19 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:51.001 [2024-10-08 18:16:19.419617] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:05:51.001 [2024-10-08 18:16:19.419752] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1070128 ] 00:05:51.001 [2024-10-08 18:16:19.514941] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.260 [2024-10-08 18:16:19.726303] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.199 18:16:20 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:52.199 18:16:20 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:05:52.199 18:16:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1070128 00:05:52.199 18:16:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1070128 00:05:52.199 18:16:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:52.457 lslocks: write error 00:05:52.457 18:16:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1070128 00:05:52.457 18:16:20 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 1070128 ']' 00:05:52.457 18:16:20 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 1070128 00:05:52.457 18:16:20 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:05:52.457 18:16:20 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:52.457 18:16:20 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1070128 00:05:52.457 18:16:20 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:52.457 18:16:20 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:52.457 18:16:20 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with 
pid 1070128' 00:05:52.457 killing process with pid 1070128 00:05:52.457 18:16:20 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 1070128 00:05:52.457 18:16:20 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 1070128 00:05:53.395 18:16:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1070128 00:05:53.395 18:16:21 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:53.395 18:16:21 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1070128 00:05:53.395 18:16:21 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:53.395 18:16:21 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:53.395 18:16:21 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:53.395 18:16:21 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:53.395 18:16:21 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 1070128 00:05:53.395 18:16:21 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 1070128 ']' 00:05:53.395 18:16:21 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.395 18:16:21 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:53.395 18:16:21 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
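
The kill sequence traced just above is autotest's killprocess() helper: confirm the pid is non-empty and still alive (kill -0), resolve its command name with ps --no-headers -o comm=, log which pid is being killed, then kill and wait. A simplified stand-alone sketch of that flow, reconstructed from the xtrace rather than copied from autotest_common.sh (the sudo branch is an assumption about what the reactor_0-vs-sudo comparison is for):

    # Sketch only; function name and the sudo branch are assumptions.
    killprocess_sketch() {
        local pid=$1
        [ -n "$pid" ] || return 1               # the '[' -z ... ']' guard in the trace
        kill -0 "$pid" 2>/dev/null || return 1  # process must still exist
        local process_name=unknown
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 for an SPDK app
        fi
        echo "killing process with pid $pid"
        if [ "$process_name" = sudo ]; then
            sudo kill "$pid"                    # assumed: sudo-wrapped targets need sudo to signal
        else
            kill "$pid"
            wait "$pid" 2>/dev/null             # wait only succeeds for children of this shell
        fi
    }
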
00:05:53.395 18:16:21 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:53.395 18:16:21 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:53.395 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1070128) - No such process 00:05:53.395 ERROR: process (pid: 1070128) is no longer running 00:05:53.395 18:16:21 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:53.395 18:16:21 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:05:53.395 18:16:21 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:53.395 18:16:21 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:53.395 18:16:21 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:53.395 18:16:21 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:53.395 18:16:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:53.395 18:16:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:53.395 18:16:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:53.395 18:16:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:53.395 00:05:53.395 real 0m2.285s 00:05:53.395 user 0m2.440s 00:05:53.395 sys 0m0.834s 00:05:53.395 18:16:21 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:53.395 18:16:21 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:53.395 ************************************ 00:05:53.395 END TEST default_locks 00:05:53.395 ************************************ 00:05:53.395 18:16:21 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:53.395 18:16:21 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:53.395 18:16:21 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:53.395 18:16:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:53.395 ************************************ 00:05:53.395 START TEST default_locks_via_rpc 00:05:53.395 ************************************ 00:05:53.395 18:16:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:05:53.395 18:16:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1070424 00:05:53.395 18:16:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:53.395 18:16:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1070424 00:05:53.395 18:16:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1070424 ']' 00:05:53.395 18:16:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.395 18:16:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:53.395 18:16:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
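
default_locks, which just finished, asserts two things: while spdk_tgt -m 0x1 is alive, lslocks -p <pid> lists an spdk_cpu_lock entry, and after killprocess the negative waitforlisten call fails with "No such process". The "lslocks: write error" lines are almost certainly harmless: grep -q exits on the first match and lslocks complains about the closed pipe. A by-hand version of the same check (sketch; the pid below is the one from this run and will differ elsewhere):

    pid=1070128                               # spdk_tgt pid from the trace above
    lslocks -p "$pid" | grep spdk_cpu_lock    # expect a path like /var/tmp/spdk_cpu_lock_000
    ls /var/tmp/spdk_cpu_lock_* 2>/dev/null   # one lock file per core claimed by the mask
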
00:05:53.395 18:16:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:53.395 18:16:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.395 [2024-10-08 18:16:21.757082] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:05:53.395 [2024-10-08 18:16:21.757188] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1070424 ] 00:05:53.395 [2024-10-08 18:16:21.866781] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.657 [2024-10-08 18:16:22.083858] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.228 18:16:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:54.228 18:16:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:54.228 18:16:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:54.228 18:16:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.228 18:16:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.228 18:16:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.228 18:16:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:54.228 18:16:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:54.228 18:16:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:54.228 18:16:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:54.228 18:16:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:54.228 18:16:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.228 18:16:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.228 18:16:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.228 18:16:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1070424 00:05:54.228 18:16:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1070424 00:05:54.228 18:16:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:55.166 18:16:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1070424 00:05:55.166 18:16:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 1070424 ']' 00:05:55.166 18:16:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 1070424 00:05:55.166 18:16:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:05:55.166 18:16:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:55.166 18:16:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1070424 00:05:55.166 18:16:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:55.166 
18:16:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:55.166 18:16:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1070424' 00:05:55.166 killing process with pid 1070424 00:05:55.166 18:16:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 1070424 00:05:55.166 18:16:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 1070424 00:05:55.738 00:05:55.738 real 0m2.423s 00:05:55.738 user 0m2.509s 00:05:55.738 sys 0m1.045s 00:05:55.738 18:16:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:55.738 18:16:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.738 ************************************ 00:05:55.738 END TEST default_locks_via_rpc 00:05:55.738 ************************************ 00:05:55.738 18:16:24 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:55.738 18:16:24 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:55.738 18:16:24 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:55.738 18:16:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:55.738 ************************************ 00:05:55.738 START TEST non_locking_app_on_locked_coremask 00:05:55.738 ************************************ 00:05:55.738 18:16:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:05:55.738 18:16:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1070779 00:05:55.738 18:16:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:55.738 18:16:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1070779 /var/tmp/spdk.sock 00:05:55.738 18:16:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1070779 ']' 00:05:55.738 18:16:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.738 18:16:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:55.738 18:16:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.738 18:16:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:55.738 18:16:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.738 [2024-10-08 18:16:24.231856] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
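
default_locks_via_rpc, summarized above, exercises the same lock files without restarting anything: framework_disable_cpumask_locks drops them at runtime and framework_enable_cpumask_locks re-claims them, which is what the rpc_cmd calls in the trace do. Roughly the same thing from a shell, assuming an SPDK checkout and the stock scripts/rpc.py client (a sketch, not the test itself):

    # Against the default application socket /var/tmp/spdk.sock:
    ./scripts/rpc.py framework_disable_cpumask_locks   # /var/tmp/spdk_cpu_lock_* are released
    ./scripts/rpc.py framework_enable_cpumask_locks    # locks are re-taken; fails if another app holds a core
    # add "-s /var/tmp/spdk2.sock" to address a second target started with -r /var/tmp/spdk2.sock
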
00:05:55.738 [2024-10-08 18:16:24.231964] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1070779 ] 00:05:55.999 [2024-10-08 18:16:24.337612] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.259 [2024-10-08 18:16:24.554107] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.200 18:16:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:57.200 18:16:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:57.200 18:16:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1070978 00:05:57.200 18:16:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:57.200 18:16:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1070978 /var/tmp/spdk2.sock 00:05:57.200 18:16:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1070978 ']' 00:05:57.200 18:16:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:57.200 18:16:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:57.200 18:16:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:57.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:57.200 18:16:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:57.200 18:16:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:57.200 [2024-10-08 18:16:25.572100] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:05:57.200 [2024-10-08 18:16:25.572220] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1070978 ] 00:05:57.460 [2024-10-08 18:16:25.760037] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:57.460 [2024-10-08 18:16:25.760125] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.722 [2024-10-08 18:16:26.206761] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.671 18:16:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:58.671 18:16:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:58.671 18:16:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1070779 00:05:58.671 18:16:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1070779 00:05:58.671 18:16:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:00.103 lslocks: write error 00:06:00.103 18:16:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1070779 00:06:00.103 18:16:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1070779 ']' 00:06:00.103 18:16:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1070779 00:06:00.103 18:16:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:00.103 18:16:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:00.103 18:16:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1070779 00:06:00.103 18:16:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:00.103 18:16:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:00.103 18:16:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1070779' 00:06:00.103 killing process with pid 1070779 00:06:00.103 18:16:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1070779 00:06:00.103 18:16:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1070779 00:06:01.483 18:16:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1070978 00:06:01.483 18:16:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1070978 ']' 00:06:01.483 18:16:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1070978 00:06:01.483 18:16:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:01.483 18:16:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:01.483 18:16:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1070978 00:06:01.483 18:16:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:01.483 18:16:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:01.483 18:16:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1070978' 00:06:01.483 
killing process with pid 1070978 00:06:01.483 18:16:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1070978 00:06:01.483 18:16:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1070978 00:06:02.422 00:06:02.422 real 0m6.468s 00:06:02.422 user 0m6.962s 00:06:02.422 sys 0m2.110s 00:06:02.422 18:16:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:02.422 18:16:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:02.422 ************************************ 00:06:02.422 END TEST non_locking_app_on_locked_coremask 00:06:02.422 ************************************ 00:06:02.422 18:16:30 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:02.422 18:16:30 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:02.422 18:16:30 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:02.422 18:16:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:02.422 ************************************ 00:06:02.422 START TEST locking_app_on_unlocked_coremask 00:06:02.422 ************************************ 00:06:02.422 18:16:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:02.422 18:16:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1071551 00:06:02.422 18:16:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:02.422 18:16:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1071551 /var/tmp/spdk.sock 00:06:02.422 18:16:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1071551 ']' 00:06:02.422 18:16:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.422 18:16:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:02.422 18:16:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.422 18:16:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:02.422 18:16:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:02.422 [2024-10-08 18:16:30.768149] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:06:02.422 [2024-10-08 18:16:30.768256] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1071551 ] 00:06:02.422 [2024-10-08 18:16:30.874857] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
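
non_locking_app_on_locked_coremask, which ended just above, boils down to two invocations that can coexist on the same core mask because the second one opts out of core locking (flags exactly as in the trace; paths assume an SPDK build tree):

    ./build/bin/spdk_tgt -m 0x1 &                                          # claims /var/tmp/spdk_cpu_lock_000
    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    # the second target logs "CPU core locks deactivated." and starts normally;
    # it only needs its own RPC socket so the two apps do not collide on /var/tmp/spdk.sock
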
00:06:02.422 [2024-10-08 18:16:30.874941] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.682 [2024-10-08 18:16:31.092796] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.255 18:16:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:03.255 18:16:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:03.255 18:16:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1071685 00:06:03.255 18:16:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1071685 /var/tmp/spdk2.sock 00:06:03.255 18:16:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:03.255 18:16:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1071685 ']' 00:06:03.255 18:16:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:03.255 18:16:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:03.255 18:16:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:03.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:03.255 18:16:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:03.255 18:16:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.255 [2024-10-08 18:16:31.686630] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
00:06:03.255 [2024-10-08 18:16:31.686854] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1071685 ] 00:06:03.516 [2024-10-08 18:16:31.913022] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.085 [2024-10-08 18:16:32.357278] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.025 18:16:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:05.025 18:16:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:05.025 18:16:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1071685 00:06:05.025 18:16:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1071685 00:06:05.025 18:16:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:06.408 lslocks: write error 00:06:06.408 18:16:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1071551 00:06:06.408 18:16:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1071551 ']' 00:06:06.408 18:16:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 1071551 00:06:06.408 18:16:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:06.408 18:16:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:06.408 18:16:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1071551 00:06:06.408 18:16:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:06.408 18:16:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:06.408 18:16:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1071551' 00:06:06.408 killing process with pid 1071551 00:06:06.408 18:16:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 1071551 00:06:06.408 18:16:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 1071551 00:06:07.789 18:16:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1071685 00:06:07.789 18:16:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1071685 ']' 00:06:07.789 18:16:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 1071685 00:06:07.789 18:16:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:07.789 18:16:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:07.789 18:16:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1071685 00:06:07.789 18:16:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:07.789 18:16:36 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:07.789 18:16:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1071685' 00:06:07.789 killing process with pid 1071685 00:06:07.789 18:16:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 1071685 00:06:07.789 18:16:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 1071685 00:06:08.727 00:06:08.727 real 0m6.248s 00:06:08.727 user 0m6.685s 00:06:08.727 sys 0m1.994s 00:06:08.727 18:16:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:08.727 18:16:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:08.727 ************************************ 00:06:08.727 END TEST locking_app_on_unlocked_coremask 00:06:08.727 ************************************ 00:06:08.727 18:16:36 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:08.727 18:16:36 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:08.727 18:16:36 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:08.727 18:16:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:08.727 ************************************ 00:06:08.727 START TEST locking_app_on_locked_coremask 00:06:08.727 ************************************ 00:06:08.727 18:16:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:08.727 18:16:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1072374 00:06:08.727 18:16:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:08.727 18:16:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1072374 /var/tmp/spdk.sock 00:06:08.727 18:16:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1072374 ']' 00:06:08.727 18:16:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.727 18:16:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:08.727 18:16:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.727 18:16:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:08.727 18:16:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:08.727 [2024-10-08 18:16:37.144307] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
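
locking_app_on_unlocked_coremask, just completed, is the mirror image: the first target gives up the locks, so a second, normally-locked target on the same mask can take them. A reduced sketch using the flags from the trace:

    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &        # logs "CPU core locks deactivated."
    ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &         # this one owns /var/tmp/spdk_cpu_lock_000
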
00:06:08.727 [2024-10-08 18:16:37.144483] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1072374 ] 00:06:08.986 [2024-10-08 18:16:37.283474] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.986 [2024-10-08 18:16:37.500248] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.380 18:16:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:10.380 18:16:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:10.380 18:16:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1072522 00:06:10.380 18:16:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:10.380 18:16:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1072522 /var/tmp/spdk2.sock 00:06:10.380 18:16:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:10.380 18:16:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1072522 /var/tmp/spdk2.sock 00:06:10.380 18:16:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:10.380 18:16:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:10.380 18:16:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:10.380 18:16:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:10.380 18:16:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1072522 /var/tmp/spdk2.sock 00:06:10.380 18:16:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1072522 ']' 00:06:10.380 18:16:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:10.380 18:16:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:10.380 18:16:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:10.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:10.380 18:16:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:10.380 18:16:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:10.380 [2024-10-08 18:16:38.654068] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
00:06:10.380 [2024-10-08 18:16:38.654177] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1072522 ] 00:06:10.380 [2024-10-08 18:16:38.835570] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1072374 has claimed it. 00:06:10.380 [2024-10-08 18:16:38.835721] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:11.317 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1072522) - No such process 00:06:11.317 ERROR: process (pid: 1072522) is no longer running 00:06:11.317 18:16:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:11.317 18:16:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:11.317 18:16:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:11.317 18:16:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:11.317 18:16:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:11.317 18:16:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:11.317 18:16:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1072374 00:06:11.317 18:16:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1072374 00:06:11.317 18:16:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:12.254 lslocks: write error 00:06:12.254 18:16:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1072374 00:06:12.254 18:16:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1072374 ']' 00:06:12.254 18:16:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1072374 00:06:12.254 18:16:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:12.254 18:16:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:12.254 18:16:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1072374 00:06:12.254 18:16:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:12.254 18:16:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:12.254 18:16:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1072374' 00:06:12.254 killing process with pid 1072374 00:06:12.254 18:16:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1072374 00:06:12.254 18:16:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1072374 00:06:12.823 00:06:12.823 real 0m4.154s 00:06:12.823 user 0m5.187s 00:06:12.823 sys 0m1.247s 00:06:12.823 18:16:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 
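
locking_app_on_locked_coremask shows the failure mode the locks exist for: same mask, no --disable-cpumask-locks, so the second spdk_tgt aborts with "Cannot create lock on core 0, probably process ... has claimed it". A quick manual reproduction (sketch only; the test waits on the RPC socket with waitforlisten where this uses a crude sleep, and it assumes the failed start exits non-zero as the log suggests):

    ./build/bin/spdk_tgt -m 0x1 &
    first=$!
    sleep 2                                            # crude; give the first target time to claim core 0
    if ! ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock; then
        echo "second instance refused core 0, as expected (held by pid $first)"
    fi
    kill "$first"
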
00:06:12.823 18:16:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:12.823 ************************************ 00:06:12.823 END TEST locking_app_on_locked_coremask 00:06:12.823 ************************************ 00:06:12.823 18:16:41 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:12.823 18:16:41 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:12.823 18:16:41 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:12.823 18:16:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:12.823 ************************************ 00:06:12.823 START TEST locking_overlapped_coremask 00:06:12.823 ************************************ 00:06:12.823 18:16:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:12.823 18:16:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1072817 00:06:12.823 18:16:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:12.823 18:16:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1072817 /var/tmp/spdk.sock 00:06:12.823 18:16:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 1072817 ']' 00:06:12.823 18:16:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.823 18:16:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:12.823 18:16:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.824 18:16:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:12.824 18:16:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:12.824 [2024-10-08 18:16:41.297512] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
00:06:12.824 [2024-10-08 18:16:41.297627] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1072817 ] 00:06:13.082 [2024-10-08 18:16:41.402343] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:13.342 [2024-10-08 18:16:41.619929] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.342 [2024-10-08 18:16:41.620050] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:06:13.342 [2024-10-08 18:16:41.620061] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.282 18:16:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:14.282 18:16:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:14.282 18:16:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1072961 00:06:14.282 18:16:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1072961 /var/tmp/spdk2.sock 00:06:14.282 18:16:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:14.282 18:16:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:14.282 18:16:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1072961 /var/tmp/spdk2.sock 00:06:14.282 18:16:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:14.282 18:16:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:14.282 18:16:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:14.282 18:16:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:14.282 18:16:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1072961 /var/tmp/spdk2.sock 00:06:14.282 18:16:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 1072961 ']' 00:06:14.282 18:16:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:14.282 18:16:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:14.282 18:16:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:14.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:14.282 18:16:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:14.282 18:16:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:14.282 [2024-10-08 18:16:42.676525] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
00:06:14.282 [2024-10-08 18:16:42.676675] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1072961 ] 00:06:14.542 [2024-10-08 18:16:42.854901] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1072817 has claimed it. 00:06:14.542 [2024-10-08 18:16:42.854972] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:15.481 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1072961) - No such process 00:06:15.481 ERROR: process (pid: 1072961) is no longer running 00:06:15.481 18:16:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:15.481 18:16:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:15.481 18:16:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:15.481 18:16:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:15.481 18:16:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:15.481 18:16:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:15.481 18:16:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:15.481 18:16:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:15.481 18:16:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:15.481 18:16:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:15.481 18:16:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1072817 00:06:15.481 18:16:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 1072817 ']' 00:06:15.481 18:16:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 1072817 00:06:15.481 18:16:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:15.481 18:16:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:15.481 18:16:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1072817 00:06:15.481 18:16:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:15.481 18:16:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:15.481 18:16:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1072817' 00:06:15.481 killing process with pid 1072817 00:06:15.481 18:16:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 1072817 00:06:15.481 18:16:43 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 1072817 00:06:16.050 00:06:16.050 real 0m3.165s 00:06:16.050 user 0m8.997s 00:06:16.050 sys 0m0.824s 00:06:16.050 18:16:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:16.050 18:16:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.050 ************************************ 00:06:16.050 END TEST locking_overlapped_coremask 00:06:16.050 ************************************ 00:06:16.050 18:16:44 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:16.050 18:16:44 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:16.050 18:16:44 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:16.050 18:16:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:16.050 ************************************ 00:06:16.050 START TEST locking_overlapped_coremask_via_rpc 00:06:16.050 ************************************ 00:06:16.050 18:16:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:16.050 18:16:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1073255 00:06:16.050 18:16:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:16.050 18:16:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1073255 /var/tmp/spdk.sock 00:06:16.050 18:16:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1073255 ']' 00:06:16.050 18:16:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.050 18:16:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:16.050 18:16:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.050 18:16:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:16.050 18:16:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.050 [2024-10-08 18:16:44.524671] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:06:16.050 [2024-10-08 18:16:44.524778] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1073255 ] 00:06:16.310 [2024-10-08 18:16:44.633071] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
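
The masks in locking_overlapped_coremask clash on exactly one core, which is why the error above names core 2 and why check_remaining_locks expects /var/tmp/spdk_cpu_lock_000 through _002 to be the surviving lock files:

    printf 'overlap: 0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. core 2 is the only shared core
    # 0x7  = cores 0,1,2  (first target, holds spdk_cpu_lock_000..002)
    # 0x1c = cores 2,3,4  (second target, dies trying to lock core 2)
    ls /var/tmp/spdk_cpu_lock_*                  # after the failed start only 000..002 should remain
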
00:06:16.310 [2024-10-08 18:16:44.633172] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:16.569 [2024-10-08 18:16:44.855743] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.569 [2024-10-08 18:16:44.855811] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:06:16.569 [2024-10-08 18:16:44.855824] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.135 18:16:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:17.135 18:16:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:17.135 18:16:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1073394 00:06:17.135 18:16:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:17.135 18:16:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1073394 /var/tmp/spdk2.sock 00:06:17.135 18:16:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1073394 ']' 00:06:17.136 18:16:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:17.136 18:16:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:17.136 18:16:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:17.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:17.136 18:16:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:17.136 18:16:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.395 [2024-10-08 18:16:45.706844] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:06:17.395 [2024-10-08 18:16:45.707018] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1073394 ] 00:06:17.395 [2024-10-08 18:16:45.889520] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:17.395 [2024-10-08 18:16:45.889619] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:17.962 [2024-10-08 18:16:46.289744] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:06:17.963 [2024-10-08 18:16:46.293714] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:06:17.963 [2024-10-08 18:16:46.293717] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:06:18.900 18:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:18.900 18:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:18.900 18:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:18.900 18:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.900 18:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.900 18:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:18.900 18:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:18.900 18:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:18.900 18:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:18.900 18:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:18.900 18:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:18.900 18:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:18.900 18:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:18.900 18:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:18.900 18:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.900 18:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.900 [2024-10-08 18:16:47.109752] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1073255 has claimed it. 
00:06:18.900 request: 00:06:18.900 { 00:06:18.900 "method": "framework_enable_cpumask_locks", 00:06:18.900 "req_id": 1 00:06:18.900 } 00:06:18.900 Got JSON-RPC error response 00:06:18.900 response: 00:06:18.900 { 00:06:18.900 "code": -32603, 00:06:18.900 "message": "Failed to claim CPU core: 2" 00:06:18.900 } 00:06:18.900 18:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:18.900 18:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:18.900 18:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:18.900 18:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:18.900 18:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:18.900 18:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1073255 /var/tmp/spdk.sock 00:06:18.900 18:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1073255 ']' 00:06:18.900 18:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.900 18:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:18.900 18:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.900 18:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:18.900 18:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.469 18:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:19.469 18:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:19.469 18:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1073394 /var/tmp/spdk2.sock 00:06:19.469 18:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1073394 ']' 00:06:19.469 18:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:19.469 18:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:19.469 18:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:19.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
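The -32603 exchange above is the expected outcome of the overlap the test sets up: the first target was launched with -m 0x7 (cores 0-2) and then had its core locks enabled, while the second target's -m 0x1c (cores 2-4) overlaps it on core 2. A minimal sketch of the failing call by hand, roughly what the NOT rpc_cmd wrapper above exercises, assuming both targets are still listening on the sockets shown in the trace:
  # expected to fail with -32603 "Failed to claim CPU core: 2" while pid 1073255 holds /var/tmp/spdk_cpu_lock_002
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      -s /var/tmp/spdk2.sock framework_enable_cpumask_locks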
00:06:19.469 18:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:19.469 18:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.727 18:16:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:19.727 18:16:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:19.727 18:16:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:19.727 18:16:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:19.727 18:16:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:19.727 18:16:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:19.727 00:06:19.727 real 0m3.753s 00:06:19.727 user 0m2.326s 00:06:19.727 sys 0m0.322s 00:06:19.727 18:16:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:19.727 18:16:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.727 ************************************ 00:06:19.727 END TEST locking_overlapped_coremask_via_rpc 00:06:19.727 ************************************ 00:06:19.727 18:16:48 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:19.727 18:16:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1073255 ]] 00:06:19.727 18:16:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1073255 00:06:19.727 18:16:48 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1073255 ']' 00:06:19.727 18:16:48 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1073255 00:06:19.727 18:16:48 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:19.727 18:16:48 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:19.727 18:16:48 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1073255 00:06:19.988 18:16:48 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:19.988 18:16:48 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:19.988 18:16:48 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1073255' 00:06:19.988 killing process with pid 1073255 00:06:19.988 18:16:48 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 1073255 00:06:19.988 18:16:48 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 1073255 00:06:20.557 18:16:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1073394 ]] 00:06:20.557 18:16:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1073394 00:06:20.557 18:16:48 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1073394 ']' 00:06:20.557 18:16:48 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1073394 00:06:20.557 18:16:48 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:20.557 18:16:48 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:06:20.557 18:16:48 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1073394 00:06:20.557 18:16:48 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:20.557 18:16:48 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:20.557 18:16:48 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1073394' 00:06:20.557 killing process with pid 1073394 00:06:20.557 18:16:48 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 1073394 00:06:20.557 18:16:48 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 1073394 00:06:21.126 18:16:49 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:21.126 18:16:49 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:21.126 18:16:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1073255 ]] 00:06:21.126 18:16:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1073255 00:06:21.126 18:16:49 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1073255 ']' 00:06:21.126 18:16:49 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1073255 00:06:21.126 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1073255) - No such process 00:06:21.126 18:16:49 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 1073255 is not found' 00:06:21.126 Process with pid 1073255 is not found 00:06:21.126 18:16:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1073394 ]] 00:06:21.126 18:16:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1073394 00:06:21.126 18:16:49 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1073394 ']' 00:06:21.126 18:16:49 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1073394 00:06:21.126 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1073394) - No such process 00:06:21.126 18:16:49 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 1073394 is not found' 00:06:21.126 Process with pid 1073394 is not found 00:06:21.126 18:16:49 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:21.126 00:06:21.126 real 0m30.358s 00:06:21.126 user 0m53.017s 00:06:21.126 sys 0m9.710s 00:06:21.126 18:16:49 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:21.126 18:16:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:21.126 ************************************ 00:06:21.126 END TEST cpu_locks 00:06:21.126 ************************************ 00:06:21.126 00:06:21.126 real 1m6.050s 00:06:21.126 user 2m5.773s 00:06:21.126 sys 0m16.767s 00:06:21.126 18:16:49 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:21.126 18:16:49 event -- common/autotest_common.sh@10 -- # set +x 00:06:21.126 ************************************ 00:06:21.126 END TEST event 00:06:21.126 ************************************ 00:06:21.126 18:16:49 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:21.126 18:16:49 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:21.126 18:16:49 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:21.126 18:16:49 -- common/autotest_common.sh@10 -- # set +x 00:06:21.126 ************************************ 00:06:21.126 START TEST thread 00:06:21.126 ************************************ 00:06:21.126 18:16:49 thread -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:21.126 * Looking for test storage... 00:06:21.126 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:21.126 18:16:49 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:21.126 18:16:49 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:06:21.126 18:16:49 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:21.386 18:16:49 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:21.386 18:16:49 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:21.386 18:16:49 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:21.386 18:16:49 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:21.386 18:16:49 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:21.386 18:16:49 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:21.386 18:16:49 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:21.386 18:16:49 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:21.386 18:16:49 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:21.386 18:16:49 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:21.386 18:16:49 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:21.386 18:16:49 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:21.386 18:16:49 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:21.386 18:16:49 thread -- scripts/common.sh@345 -- # : 1 00:06:21.386 18:16:49 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:21.386 18:16:49 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:21.386 18:16:49 thread -- scripts/common.sh@365 -- # decimal 1 00:06:21.386 18:16:49 thread -- scripts/common.sh@353 -- # local d=1 00:06:21.386 18:16:49 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:21.386 18:16:49 thread -- scripts/common.sh@355 -- # echo 1 00:06:21.387 18:16:49 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:21.387 18:16:49 thread -- scripts/common.sh@366 -- # decimal 2 00:06:21.387 18:16:49 thread -- scripts/common.sh@353 -- # local d=2 00:06:21.387 18:16:49 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:21.387 18:16:49 thread -- scripts/common.sh@355 -- # echo 2 00:06:21.387 18:16:49 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:21.387 18:16:49 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:21.387 18:16:49 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:21.387 18:16:49 thread -- scripts/common.sh@368 -- # return 0 00:06:21.387 18:16:49 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:21.387 18:16:49 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:21.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.387 --rc genhtml_branch_coverage=1 00:06:21.387 --rc genhtml_function_coverage=1 00:06:21.387 --rc genhtml_legend=1 00:06:21.387 --rc geninfo_all_blocks=1 00:06:21.387 --rc geninfo_unexecuted_blocks=1 00:06:21.387 00:06:21.387 ' 00:06:21.387 18:16:49 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:21.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.387 --rc genhtml_branch_coverage=1 00:06:21.387 --rc genhtml_function_coverage=1 00:06:21.387 --rc genhtml_legend=1 00:06:21.387 --rc geninfo_all_blocks=1 00:06:21.387 --rc geninfo_unexecuted_blocks=1 00:06:21.387 
00:06:21.387 ' 00:06:21.387 18:16:49 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:21.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.387 --rc genhtml_branch_coverage=1 00:06:21.387 --rc genhtml_function_coverage=1 00:06:21.387 --rc genhtml_legend=1 00:06:21.387 --rc geninfo_all_blocks=1 00:06:21.387 --rc geninfo_unexecuted_blocks=1 00:06:21.387 00:06:21.387 ' 00:06:21.387 18:16:49 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:21.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.387 --rc genhtml_branch_coverage=1 00:06:21.387 --rc genhtml_function_coverage=1 00:06:21.387 --rc genhtml_legend=1 00:06:21.387 --rc geninfo_all_blocks=1 00:06:21.387 --rc geninfo_unexecuted_blocks=1 00:06:21.387 00:06:21.387 ' 00:06:21.387 18:16:49 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:21.387 18:16:49 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:21.387 18:16:49 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:21.387 18:16:49 thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.387 ************************************ 00:06:21.387 START TEST thread_poller_perf 00:06:21.387 ************************************ 00:06:21.387 18:16:49 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:21.387 [2024-10-08 18:16:49.828477] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:06:21.387 [2024-10-08 18:16:49.828545] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1074020 ] 00:06:21.387 [2024-10-08 18:16:49.896679] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.645 [2024-10-08 18:16:50.068478] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.645 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:23.027 [2024-10-08T16:16:51.564Z] ====================================== 00:06:23.027 [2024-10-08T16:16:51.564Z] busy:2731960872 (cyc) 00:06:23.027 [2024-10-08T16:16:51.564Z] total_run_count: 149000 00:06:23.027 [2024-10-08T16:16:51.564Z] tsc_hz: 2700000000 (cyc) 00:06:23.027 [2024-10-08T16:16:51.564Z] ====================================== 00:06:23.027 [2024-10-08T16:16:51.564Z] poller_cost: 18335 (cyc), 6790 (nsec) 00:06:23.027 00:06:23.027 real 0m1.468s 00:06:23.027 user 0m1.353s 00:06:23.027 sys 0m0.104s 00:06:23.027 18:16:51 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:23.027 18:16:51 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:23.027 ************************************ 00:06:23.027 END TEST thread_poller_perf 00:06:23.027 ************************************ 00:06:23.027 18:16:51 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:23.027 18:16:51 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:23.027 18:16:51 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:23.027 18:16:51 thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.027 ************************************ 00:06:23.027 START TEST thread_poller_perf 00:06:23.027 ************************************ 00:06:23.027 18:16:51 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:23.027 [2024-10-08 18:16:51.368263] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:06:23.027 [2024-10-08 18:16:51.368328] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1074177 ] 00:06:23.027 [2024-10-08 18:16:51.468719] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.287 [2024-10-08 18:16:51.700236] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.287 Running 1000 pollers for 1 seconds with 0 microseconds period. 
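The poller_cost reported in the 1-microsecond-period run above is simply busy cycles divided by total_run_count, converted to nanoseconds with the reported tsc_hz; a quick cross-check of those figures (an illustrative one-liner, not part of the harness):
  awk 'BEGIN { cyc = 2731960872 / 149000; printf "%d cyc, %d nsec\n", int(cyc), int(cyc / 2700000000 * 1e9) }'
  # prints: 18335 cyc, 6790 nsec  (matching the report above)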
00:06:24.663 [2024-10-08T16:16:53.200Z] ====================================== 00:06:24.663 [2024-10-08T16:16:53.200Z] busy:2706313005 (cyc) 00:06:24.663 [2024-10-08T16:16:53.200Z] total_run_count: 1732000 00:06:24.663 [2024-10-08T16:16:53.200Z] tsc_hz: 2700000000 (cyc) 00:06:24.663 [2024-10-08T16:16:53.200Z] ====================================== 00:06:24.663 [2024-10-08T16:16:53.200Z] poller_cost: 1562 (cyc), 578 (nsec) 00:06:24.663 00:06:24.663 real 0m1.551s 00:06:24.663 user 0m1.396s 00:06:24.663 sys 0m0.140s 00:06:24.663 18:16:52 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:24.663 18:16:52 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:24.663 ************************************ 00:06:24.663 END TEST thread_poller_perf 00:06:24.663 ************************************ 00:06:24.663 18:16:52 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:24.663 00:06:24.663 real 0m3.424s 00:06:24.663 user 0m2.991s 00:06:24.663 sys 0m0.425s 00:06:24.663 18:16:52 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:24.663 18:16:52 thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.663 ************************************ 00:06:24.663 END TEST thread 00:06:24.663 ************************************ 00:06:24.663 18:16:52 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:24.663 18:16:52 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:24.663 18:16:52 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:24.663 18:16:52 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:24.664 18:16:52 -- common/autotest_common.sh@10 -- # set +x 00:06:24.664 ************************************ 00:06:24.664 START TEST app_cmdline 00:06:24.664 ************************************ 00:06:24.664 18:16:52 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:24.664 * Looking for test storage... 
00:06:24.664 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:24.664 18:16:53 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:24.664 18:16:53 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:06:24.664 18:16:53 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:24.664 18:16:53 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:24.664 18:16:53 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:24.664 18:16:53 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:24.664 18:16:53 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:24.664 18:16:53 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:24.664 18:16:53 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:24.664 18:16:53 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:24.664 18:16:53 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:24.664 18:16:53 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:24.664 18:16:53 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:24.664 18:16:53 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:24.664 18:16:53 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:24.664 18:16:53 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:24.664 18:16:53 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:24.664 18:16:53 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:24.664 18:16:53 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:24.664 18:16:53 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:24.664 18:16:53 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:24.664 18:16:53 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:24.664 18:16:53 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:24.664 18:16:53 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:24.664 18:16:53 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:24.664 18:16:53 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:24.664 18:16:53 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:24.664 18:16:53 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:24.664 18:16:53 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:24.664 18:16:53 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:24.664 18:16:53 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:24.922 18:16:53 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:24.922 18:16:53 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:24.922 18:16:53 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:24.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.922 --rc genhtml_branch_coverage=1 00:06:24.922 --rc genhtml_function_coverage=1 00:06:24.922 --rc genhtml_legend=1 00:06:24.922 --rc geninfo_all_blocks=1 00:06:24.922 --rc geninfo_unexecuted_blocks=1 00:06:24.922 00:06:24.922 ' 00:06:24.922 18:16:53 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:24.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.922 --rc genhtml_branch_coverage=1 00:06:24.922 --rc genhtml_function_coverage=1 00:06:24.922 --rc genhtml_legend=1 00:06:24.922 --rc geninfo_all_blocks=1 00:06:24.922 --rc geninfo_unexecuted_blocks=1 
00:06:24.922 00:06:24.922 ' 00:06:24.922 18:16:53 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:24.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.922 --rc genhtml_branch_coverage=1 00:06:24.922 --rc genhtml_function_coverage=1 00:06:24.922 --rc genhtml_legend=1 00:06:24.922 --rc geninfo_all_blocks=1 00:06:24.922 --rc geninfo_unexecuted_blocks=1 00:06:24.922 00:06:24.922 ' 00:06:24.922 18:16:53 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:24.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.922 --rc genhtml_branch_coverage=1 00:06:24.922 --rc genhtml_function_coverage=1 00:06:24.922 --rc genhtml_legend=1 00:06:24.922 --rc geninfo_all_blocks=1 00:06:24.922 --rc geninfo_unexecuted_blocks=1 00:06:24.922 00:06:24.922 ' 00:06:24.922 18:16:53 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:24.922 18:16:53 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1074455 00:06:24.922 18:16:53 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:24.922 18:16:53 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1074455 00:06:24.922 18:16:53 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 1074455 ']' 00:06:24.922 18:16:53 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.922 18:16:53 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:24.922 18:16:53 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.922 18:16:53 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:24.922 18:16:53 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:24.922 [2024-10-08 18:16:53.316746] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
00:06:24.922 [2024-10-08 18:16:53.316857] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1074455 ] 00:06:24.922 [2024-10-08 18:16:53.457835] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.181 [2024-10-08 18:16:53.673498] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.616 18:16:54 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:26.616 18:16:54 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:26.617 18:16:54 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:26.617 { 00:06:26.617 "version": "SPDK v25.01-pre git sha1 865972bb6", 00:06:26.617 "fields": { 00:06:26.617 "major": 25, 00:06:26.617 "minor": 1, 00:06:26.617 "patch": 0, 00:06:26.617 "suffix": "-pre", 00:06:26.617 "commit": "865972bb6" 00:06:26.617 } 00:06:26.617 } 00:06:26.617 18:16:55 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:26.617 18:16:55 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:26.617 18:16:55 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:26.617 18:16:55 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:26.617 18:16:55 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:26.617 18:16:55 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.617 18:16:55 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:26.617 18:16:55 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:26.617 18:16:55 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:26.617 18:16:55 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.617 18:16:55 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:26.617 18:16:55 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:26.617 18:16:55 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:26.617 18:16:55 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:26.617 18:16:55 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:26.617 18:16:55 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:26.617 18:16:55 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:26.617 18:16:55 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:26.617 18:16:55 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:26.617 18:16:55 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:26.617 18:16:55 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:26.617 18:16:55 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:26.617 18:16:55 app_cmdline -- common/autotest_common.sh@644 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:26.617 18:16:55 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:27.577 request: 00:06:27.577 { 00:06:27.577 "method": "env_dpdk_get_mem_stats", 00:06:27.577 "req_id": 1 00:06:27.577 } 00:06:27.577 Got JSON-RPC error response 00:06:27.577 response: 00:06:27.577 { 00:06:27.577 "code": -32601, 00:06:27.577 "message": "Method not found" 00:06:27.577 } 00:06:27.577 18:16:55 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:27.577 18:16:55 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:27.577 18:16:55 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:27.577 18:16:55 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:27.577 18:16:55 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1074455 00:06:27.577 18:16:55 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 1074455 ']' 00:06:27.577 18:16:55 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 1074455 00:06:27.577 18:16:55 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:27.577 18:16:55 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:27.577 18:16:55 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1074455 00:06:27.577 18:16:55 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:27.577 18:16:55 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:27.577 18:16:55 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1074455' 00:06:27.577 killing process with pid 1074455 00:06:27.577 18:16:55 app_cmdline -- common/autotest_common.sh@969 -- # kill 1074455 00:06:27.577 18:16:55 app_cmdline -- common/autotest_common.sh@974 -- # wait 1074455 00:06:28.144 00:06:28.144 real 0m3.555s 00:06:28.144 user 0m4.695s 00:06:28.144 sys 0m0.935s 00:06:28.144 18:16:56 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:28.144 18:16:56 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:28.144 ************************************ 00:06:28.144 END TEST app_cmdline 00:06:28.144 ************************************ 00:06:28.144 18:16:56 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:28.144 18:16:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:28.144 18:16:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:28.144 18:16:56 -- common/autotest_common.sh@10 -- # set +x 00:06:28.144 ************************************ 00:06:28.144 START TEST version 00:06:28.144 ************************************ 00:06:28.144 18:16:56 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:28.144 * Looking for test storage... 
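The -32601 "Method not found" above is the point of this test: the target from cmdline.sh@16 was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so any method outside that allow-list is rejected as if it did not exist. A sketch of the two calls by hand, assuming a target started with the same allow-list is listening on the default /var/tmp/spdk.sock:
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version          # allowed, returns the version JSON shown earlier
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats    # rejected with -32601 "Method not found"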
00:06:28.144 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:28.144 18:16:56 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:28.144 18:16:56 version -- common/autotest_common.sh@1681 -- # lcov --version 00:06:28.144 18:16:56 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:28.403 18:16:56 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:28.403 18:16:56 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:28.403 18:16:56 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:28.403 18:16:56 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:28.403 18:16:56 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:28.403 18:16:56 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:28.403 18:16:56 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:28.403 18:16:56 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:28.403 18:16:56 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:28.403 18:16:56 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:28.403 18:16:56 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:28.403 18:16:56 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:28.403 18:16:56 version -- scripts/common.sh@344 -- # case "$op" in 00:06:28.403 18:16:56 version -- scripts/common.sh@345 -- # : 1 00:06:28.403 18:16:56 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:28.403 18:16:56 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:28.403 18:16:56 version -- scripts/common.sh@365 -- # decimal 1 00:06:28.403 18:16:56 version -- scripts/common.sh@353 -- # local d=1 00:06:28.403 18:16:56 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:28.403 18:16:56 version -- scripts/common.sh@355 -- # echo 1 00:06:28.403 18:16:56 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:28.403 18:16:56 version -- scripts/common.sh@366 -- # decimal 2 00:06:28.403 18:16:56 version -- scripts/common.sh@353 -- # local d=2 00:06:28.403 18:16:56 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:28.403 18:16:56 version -- scripts/common.sh@355 -- # echo 2 00:06:28.403 18:16:56 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:28.403 18:16:56 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:28.403 18:16:56 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:28.403 18:16:56 version -- scripts/common.sh@368 -- # return 0 00:06:28.403 18:16:56 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:28.403 18:16:56 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:28.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.403 --rc genhtml_branch_coverage=1 00:06:28.403 --rc genhtml_function_coverage=1 00:06:28.403 --rc genhtml_legend=1 00:06:28.403 --rc geninfo_all_blocks=1 00:06:28.403 --rc geninfo_unexecuted_blocks=1 00:06:28.403 00:06:28.403 ' 00:06:28.403 18:16:56 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:28.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.403 --rc genhtml_branch_coverage=1 00:06:28.403 --rc genhtml_function_coverage=1 00:06:28.403 --rc genhtml_legend=1 00:06:28.403 --rc geninfo_all_blocks=1 00:06:28.403 --rc geninfo_unexecuted_blocks=1 00:06:28.403 00:06:28.403 ' 00:06:28.403 18:16:56 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:28.403 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.403 --rc genhtml_branch_coverage=1 00:06:28.403 --rc genhtml_function_coverage=1 00:06:28.403 --rc genhtml_legend=1 00:06:28.403 --rc geninfo_all_blocks=1 00:06:28.403 --rc geninfo_unexecuted_blocks=1 00:06:28.403 00:06:28.403 ' 00:06:28.403 18:16:56 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:28.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.403 --rc genhtml_branch_coverage=1 00:06:28.403 --rc genhtml_function_coverage=1 00:06:28.403 --rc genhtml_legend=1 00:06:28.403 --rc geninfo_all_blocks=1 00:06:28.403 --rc geninfo_unexecuted_blocks=1 00:06:28.403 00:06:28.403 ' 00:06:28.403 18:16:56 version -- app/version.sh@17 -- # get_header_version major 00:06:28.403 18:16:56 version -- app/version.sh@14 -- # cut -f2 00:06:28.403 18:16:56 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:28.403 18:16:56 version -- app/version.sh@14 -- # tr -d '"' 00:06:28.403 18:16:56 version -- app/version.sh@17 -- # major=25 00:06:28.403 18:16:56 version -- app/version.sh@18 -- # get_header_version minor 00:06:28.403 18:16:56 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:28.403 18:16:56 version -- app/version.sh@14 -- # cut -f2 00:06:28.403 18:16:56 version -- app/version.sh@14 -- # tr -d '"' 00:06:28.403 18:16:56 version -- app/version.sh@18 -- # minor=1 00:06:28.403 18:16:56 version -- app/version.sh@19 -- # get_header_version patch 00:06:28.403 18:16:56 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:28.403 18:16:56 version -- app/version.sh@14 -- # tr -d '"' 00:06:28.403 18:16:56 version -- app/version.sh@14 -- # cut -f2 00:06:28.403 18:16:56 version -- app/version.sh@19 -- # patch=0 00:06:28.403 18:16:56 version -- app/version.sh@20 -- # get_header_version suffix 00:06:28.403 18:16:56 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:28.403 18:16:56 version -- app/version.sh@14 -- # cut -f2 00:06:28.403 18:16:56 version -- app/version.sh@14 -- # tr -d '"' 00:06:28.403 18:16:56 version -- app/version.sh@20 -- # suffix=-pre 00:06:28.403 18:16:56 version -- app/version.sh@22 -- # version=25.1 00:06:28.403 18:16:56 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:28.403 18:16:56 version -- app/version.sh@28 -- # version=25.1rc0 00:06:28.403 18:16:56 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:28.403 18:16:56 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:28.403 18:16:56 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:28.403 18:16:56 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:28.403 00:06:28.403 real 0m0.272s 00:06:28.403 user 0m0.176s 00:06:28.403 sys 0m0.131s 00:06:28.403 18:16:56 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:28.403 
18:16:56 version -- common/autotest_common.sh@10 -- # set +x 00:06:28.403 ************************************ 00:06:28.403 END TEST version 00:06:28.403 ************************************ 00:06:28.403 18:16:56 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:28.403 18:16:56 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:28.403 18:16:56 -- spdk/autotest.sh@194 -- # uname -s 00:06:28.403 18:16:56 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:28.403 18:16:56 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:28.403 18:16:56 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:28.403 18:16:56 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:28.403 18:16:56 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:06:28.403 18:16:56 -- spdk/autotest.sh@256 -- # timing_exit lib 00:06:28.403 18:16:56 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:28.403 18:16:56 -- common/autotest_common.sh@10 -- # set +x 00:06:28.662 18:16:56 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:06:28.662 18:16:56 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:06:28.662 18:16:56 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:06:28.662 18:16:56 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:06:28.662 18:16:56 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:06:28.662 18:16:56 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:06:28.662 18:16:56 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:28.662 18:16:56 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:28.662 18:16:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:28.662 18:16:56 -- common/autotest_common.sh@10 -- # set +x 00:06:28.662 ************************************ 00:06:28.662 START TEST nvmf_tcp 00:06:28.662 ************************************ 00:06:28.662 18:16:56 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:28.662 * Looking for test storage... 
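The get_header_version helper exercised above reduces to the grep/cut/tr pipeline visible in the trace, run against include/spdk/version.h; a standalone sketch, assuming the stock layout of that header where each #define is tab-separated:
  hdr=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h
  grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"'    # -> 25
  grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$hdr" | cut -f2 | tr -d '"'   # -> -pre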
00:06:28.662 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:28.662 18:16:57 nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:28.662 18:16:57 nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:06:28.662 18:16:57 nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:28.662 18:16:57 nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:28.662 18:16:57 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:28.662 18:16:57 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:28.662 18:16:57 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:28.662 18:16:57 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:28.662 18:16:57 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:28.662 18:16:57 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:28.662 18:16:57 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:28.662 18:16:57 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:28.662 18:16:57 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:28.662 18:16:57 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:28.662 18:16:57 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:28.662 18:16:57 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:28.662 18:16:57 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:28.662 18:16:57 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:28.662 18:16:57 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:28.662 18:16:57 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:28.662 18:16:57 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:28.662 18:16:57 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:28.662 18:16:57 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:28.662 18:16:57 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:28.662 18:16:57 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:28.662 18:16:57 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:28.662 18:16:57 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:28.662 18:16:57 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:28.662 18:16:57 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:28.662 18:16:57 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:28.662 18:16:57 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:28.662 18:16:57 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:28.662 18:16:57 nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:28.662 18:16:57 nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:28.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.662 --rc genhtml_branch_coverage=1 00:06:28.662 --rc genhtml_function_coverage=1 00:06:28.662 --rc genhtml_legend=1 00:06:28.662 --rc geninfo_all_blocks=1 00:06:28.662 --rc geninfo_unexecuted_blocks=1 00:06:28.662 00:06:28.662 ' 00:06:28.662 18:16:57 nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:28.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.662 --rc genhtml_branch_coverage=1 00:06:28.662 --rc genhtml_function_coverage=1 00:06:28.662 --rc genhtml_legend=1 00:06:28.662 --rc geninfo_all_blocks=1 00:06:28.662 --rc geninfo_unexecuted_blocks=1 00:06:28.662 00:06:28.662 ' 00:06:28.662 18:16:57 nvmf_tcp -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:06:28.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.662 --rc genhtml_branch_coverage=1 00:06:28.662 --rc genhtml_function_coverage=1 00:06:28.662 --rc genhtml_legend=1 00:06:28.662 --rc geninfo_all_blocks=1 00:06:28.662 --rc geninfo_unexecuted_blocks=1 00:06:28.662 00:06:28.662 ' 00:06:28.662 18:16:57 nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:28.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.662 --rc genhtml_branch_coverage=1 00:06:28.662 --rc genhtml_function_coverage=1 00:06:28.662 --rc genhtml_legend=1 00:06:28.662 --rc geninfo_all_blocks=1 00:06:28.662 --rc geninfo_unexecuted_blocks=1 00:06:28.662 00:06:28.662 ' 00:06:28.662 18:16:57 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:28.662 18:16:57 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:28.662 18:16:57 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:28.663 18:16:57 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:28.663 18:16:57 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:28.663 18:16:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:28.921 ************************************ 00:06:28.921 START TEST nvmf_target_core 00:06:28.922 ************************************ 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:28.922 * Looking for test storage... 00:06:28.922 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lcov --version 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:28.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.922 --rc genhtml_branch_coverage=1 00:06:28.922 --rc genhtml_function_coverage=1 00:06:28.922 --rc genhtml_legend=1 00:06:28.922 --rc geninfo_all_blocks=1 00:06:28.922 --rc geninfo_unexecuted_blocks=1 00:06:28.922 00:06:28.922 ' 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:28.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.922 --rc genhtml_branch_coverage=1 00:06:28.922 --rc genhtml_function_coverage=1 00:06:28.922 --rc genhtml_legend=1 00:06:28.922 --rc geninfo_all_blocks=1 00:06:28.922 --rc geninfo_unexecuted_blocks=1 00:06:28.922 00:06:28.922 ' 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:28.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.922 --rc genhtml_branch_coverage=1 00:06:28.922 --rc genhtml_function_coverage=1 00:06:28.922 --rc genhtml_legend=1 00:06:28.922 --rc geninfo_all_blocks=1 00:06:28.922 --rc geninfo_unexecuted_blocks=1 00:06:28.922 00:06:28.922 ' 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:28.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.922 --rc genhtml_branch_coverage=1 00:06:28.922 --rc genhtml_function_coverage=1 00:06:28.922 --rc genhtml_legend=1 00:06:28.922 --rc geninfo_all_blocks=1 00:06:28.922 --rc geninfo_unexecuted_blocks=1 00:06:28.922 00:06:28.922 ' 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:28.922 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:28.922 18:16:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:28.923 18:16:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:28.923 18:16:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:28.923 18:16:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:28.923 18:16:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:28.923 18:16:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:28.923 18:16:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:28.923 
************************************ 00:06:28.923 START TEST nvmf_abort 00:06:28.923 ************************************ 00:06:28.923 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:29.182 * Looking for test storage... 00:06:29.182 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:29.182 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:29.182 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # lcov --version 00:06:29.182 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:29.182 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:29.182 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:29.182 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:29.182 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:29.182 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:06:29.182 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:06:29.182 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:06:29.182 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:06:29.182 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:06:29.182 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:06:29.182 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:06:29.182 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:29.182 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:06:29.182 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:06:29.182 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:29.182 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:29.182 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:06:29.182 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:06:29.182 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:29.182 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:06:29.182 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:06:29.182 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:06:29.182 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:06:29.182 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:29.182 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:06:29.182 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:06:29.182 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:29.182 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:29.182 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:06:29.182 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:29.182 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:29.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.182 --rc genhtml_branch_coverage=1 00:06:29.182 --rc genhtml_function_coverage=1 00:06:29.182 --rc genhtml_legend=1 00:06:29.182 --rc geninfo_all_blocks=1 00:06:29.182 --rc geninfo_unexecuted_blocks=1 00:06:29.182 00:06:29.183 ' 00:06:29.183 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:29.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.183 --rc genhtml_branch_coverage=1 00:06:29.183 --rc genhtml_function_coverage=1 00:06:29.183 --rc genhtml_legend=1 00:06:29.183 --rc geninfo_all_blocks=1 00:06:29.183 --rc geninfo_unexecuted_blocks=1 00:06:29.183 00:06:29.183 ' 00:06:29.183 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:29.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.183 --rc genhtml_branch_coverage=1 00:06:29.183 --rc genhtml_function_coverage=1 00:06:29.183 --rc genhtml_legend=1 00:06:29.183 --rc geninfo_all_blocks=1 00:06:29.183 --rc geninfo_unexecuted_blocks=1 00:06:29.183 00:06:29.183 ' 00:06:29.183 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:29.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.183 --rc genhtml_branch_coverage=1 00:06:29.183 --rc genhtml_function_coverage=1 00:06:29.183 --rc genhtml_legend=1 00:06:29.183 --rc geninfo_all_blocks=1 00:06:29.183 --rc geninfo_unexecuted_blocks=1 00:06:29.183 00:06:29.183 ' 00:06:29.183 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:29.183 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:29.183 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:06:29.183 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:29.183 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:29.183 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:29.183 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:29.183 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:29.183 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:29.183 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:29.183 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:29.183 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:29.183 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:06:29.183 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:06:29.183 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:29.183 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:29.183 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:29.183 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:29.183 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:29.183 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:06:29.183 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:29.183 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:29.183 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:29.183 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.183 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.183 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.183 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:29.183 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.183 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:06:29.183 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:29.183 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:29.183 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:29.183 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:29.183 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:29.183 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:29.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:29.183 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:29.183 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:29.183 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:29.183 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:29.183 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:29.183 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
00:06:29.183 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:06:29.183 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:29.183 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:06:29.183 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:06:29.183 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:06:29.183 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:29.183 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:29.183 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:29.183 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:06:29.183 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:06:29.183 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:06:29.183 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:32.470 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:32.470 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:06:32.470 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:32.470 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:32.470 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:32.470 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:32.470 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:32.470 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:06:32.470 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:32.470 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:06:32.470 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:06:32.470 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:06:32.470 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:06:32.470 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:06:32.470 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:06:32.470 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:32.470 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:32.470 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:32.470 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:32.470 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:32.470 18:17:00 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:32.470 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:32.470 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:32.470 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:32.470 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:32.470 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:32.470 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:32.470 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:32.470 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:32.470 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:32.470 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:32.470 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:32.470 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:32.470 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:32.470 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:06:32.470 Found 0000:84:00.0 (0x8086 - 0x159b) 00:06:32.470 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:32.470 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:32.470 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:32.470 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:32.470 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:32.470 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:32.470 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:06:32.470 Found 0000:84:00.1 (0x8086 - 0x159b) 00:06:32.470 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:32.470 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:32.470 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:32.470 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:32.470 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:32.470 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:32.470 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:32.471 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:32.471 18:17:00 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:32.471 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:32.471 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:32.471 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:32.471 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:32.471 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:32.471 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:32.471 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:06:32.471 Found net devices under 0000:84:00.0: cvl_0_0 00:06:32.471 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:32.471 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:32.471 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:32.471 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:32.471 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:32.471 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:32.471 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:32.471 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:32.471 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:06:32.471 Found net devices under 0000:84:00.1: cvl_0_1 00:06:32.471 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:32.471 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:06:32.471 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes 00:06:32.471 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:06:32.471 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:06:32.471 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:06:32.471 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:32.471 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:32.471 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:32.471 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:32.471 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:32.471 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:32.471 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:32.471 18:17:00 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:32.471 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:32.471 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:32.471 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:32.471 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:32.471 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:32.471 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:32.471 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:32.471 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:32.471 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:32.471 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:32.471 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:32.471 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:32.471 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:32.471 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:32.471 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:32.471 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:32.471 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.341 ms 00:06:32.471 00:06:32.471 --- 10.0.0.2 ping statistics --- 00:06:32.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:32.471 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:06:32.471 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:32.471 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:32.471 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:06:32.471 00:06:32.471 --- 10.0.0.1 ping statistics --- 00:06:32.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:32.471 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:06:32.471 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:32.471 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # return 0 00:06:32.471 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:06:32.471 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:32.471 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:06:32.471 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:06:32.471 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:32.471 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:06:32.471 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:06:32.471 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:32.471 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:06:32.471 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:32.471 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:32.471 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # nvmfpid=1076883 00:06:32.471 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:32.471 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 1076883 00:06:32.471 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 1076883 ']' 00:06:32.471 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.471 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:32.471 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.471 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:32.471 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:32.471 [2024-10-08 18:17:00.659377] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
00:06:32.471 [2024-10-08 18:17:00.659478] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:32.471 [2024-10-08 18:17:00.762879] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:32.471 [2024-10-08 18:17:00.984241] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:32.471 [2024-10-08 18:17:00.984353] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:32.471 [2024-10-08 18:17:00.984390] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:32.471 [2024-10-08 18:17:00.984421] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:32.471 [2024-10-08 18:17:00.984450] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:32.471 [2024-10-08 18:17:00.986558] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:06:32.471 [2024-10-08 18:17:00.986674] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:06:32.471 [2024-10-08 18:17:00.986680] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.729 18:17:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:32.729 18:17:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:06:32.729 18:17:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:06:32.729 18:17:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:32.729 18:17:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:32.729 18:17:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:32.729 18:17:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:32.729 18:17:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.729 18:17:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:32.729 [2024-10-08 18:17:01.212901] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:32.729 18:17:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.729 18:17:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:32.729 18:17:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.729 18:17:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:32.729 Malloc0 00:06:32.729 18:17:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.729 18:17:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:32.729 18:17:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.729 18:17:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:32.987 Delay0 
00:06:32.987 18:17:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.987 18:17:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:32.987 18:17:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.987 18:17:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:32.987 18:17:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.987 18:17:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:32.987 18:17:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.988 18:17:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:32.988 18:17:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.988 18:17:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:32.988 18:17:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.988 18:17:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:32.988 [2024-10-08 18:17:01.290207] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:32.988 18:17:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.988 18:17:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:32.988 18:17:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.988 18:17:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:32.988 18:17:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.988 18:17:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:32.988 [2024-10-08 18:17:01.395537] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:35.518 Initializing NVMe Controllers 00:06:35.518 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:35.518 controller IO queue size 128 less than required 00:06:35.518 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:35.518 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:35.518 Initialization complete. Launching workers. 
00:06:35.518 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 27463 00:06:35.518 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 27524, failed to submit 62 00:06:35.518 success 27467, unsuccessful 57, failed 0 00:06:35.518 18:17:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:35.518 18:17:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.518 18:17:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:35.518 18:17:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.518 18:17:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:35.518 18:17:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:35.518 18:17:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:06:35.518 18:17:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:06:35.518 18:17:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:35.518 18:17:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:06:35.518 18:17:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:35.518 18:17:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:35.518 rmmod nvme_tcp 00:06:35.518 rmmod nvme_fabrics 00:06:35.518 rmmod nvme_keyring 00:06:35.518 18:17:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:35.518 18:17:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:06:35.518 18:17:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:06:35.518 18:17:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 1076883 ']' 00:06:35.518 18:17:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 1076883 00:06:35.518 18:17:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 1076883 ']' 00:06:35.518 18:17:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 1076883 00:06:35.518 18:17:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:06:35.518 18:17:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:35.518 18:17:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1076883 00:06:35.518 18:17:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:35.518 18:17:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:35.518 18:17:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1076883' 00:06:35.518 killing process with pid 1076883 00:06:35.518 18:17:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 1076883 00:06:35.518 18:17:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 1076883 00:06:35.518 18:17:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:06:35.518 18:17:03 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:06:35.518 18:17:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:06:35.518 18:17:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:06:35.518 18:17:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # iptables-save 00:06:35.518 18:17:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:06:35.518 18:17:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # iptables-restore 00:06:35.518 18:17:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:35.518 18:17:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:35.518 18:17:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:35.518 18:17:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:35.518 18:17:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:38.134 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:38.134 00:06:38.134 real 0m8.642s 00:06:38.134 user 0m11.575s 00:06:38.134 sys 0m3.371s 00:06:38.134 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:38.134 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:38.134 ************************************ 00:06:38.134 END TEST nvmf_abort 00:06:38.134 ************************************ 00:06:38.134 18:17:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:38.134 18:17:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:38.134 18:17:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:38.134 18:17:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:38.134 ************************************ 00:06:38.134 START TEST nvmf_ns_hotplug_stress 00:06:38.134 ************************************ 00:06:38.134 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:38.134 * Looking for test storage... 
00:06:38.134 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:38.134 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:38.134 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:06:38.134 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:38.134 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:38.134 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:38.134 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:38.134 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:38.134 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:38.134 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:38.134 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:38.134 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:38.134 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:38.134 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:38.134 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:38.134 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:38.134 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:06:38.134 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:38.134 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:38.134 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:38.134 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:38.134 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:38.134 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:38.134 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:38.134 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:38.134 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:38.134 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:38.134 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:38.134 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:38.134 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:38.134 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:38.134 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:38.134 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:38.134 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:38.134 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:38.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.134 --rc genhtml_branch_coverage=1 00:06:38.134 --rc genhtml_function_coverage=1 00:06:38.134 --rc genhtml_legend=1 00:06:38.134 --rc geninfo_all_blocks=1 00:06:38.134 --rc geninfo_unexecuted_blocks=1 00:06:38.134 00:06:38.134 ' 00:06:38.134 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:38.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.134 --rc genhtml_branch_coverage=1 00:06:38.134 --rc genhtml_function_coverage=1 00:06:38.134 --rc genhtml_legend=1 00:06:38.134 --rc geninfo_all_blocks=1 00:06:38.134 --rc geninfo_unexecuted_blocks=1 00:06:38.134 00:06:38.134 ' 00:06:38.134 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:38.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.134 --rc genhtml_branch_coverage=1 00:06:38.134 --rc genhtml_function_coverage=1 00:06:38.134 --rc genhtml_legend=1 00:06:38.134 --rc geninfo_all_blocks=1 00:06:38.135 --rc geninfo_unexecuted_blocks=1 00:06:38.135 00:06:38.135 ' 00:06:38.135 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:38.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.135 --rc genhtml_branch_coverage=1 00:06:38.135 --rc genhtml_function_coverage=1 00:06:38.135 --rc genhtml_legend=1 00:06:38.135 --rc geninfo_all_blocks=1 00:06:38.135 --rc geninfo_unexecuted_blocks=1 00:06:38.135 00:06:38.135 ' 00:06:38.135 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:38.135 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:38.135 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:38.135 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:38.135 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:38.135 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:38.135 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:38.135 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:38.135 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:38.135 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:38.135 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:38.135 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:38.135 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:06:38.135 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:06:38.135 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:38.135 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:38.135 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:38.135 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:38.135 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:38.135 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:38.135 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:38.135 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:38.135 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:38.135 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.135 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.135 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.135 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:38.135 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.135 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:38.135 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:38.135 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:38.135 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:38.135 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:38.135 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:38.135 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:38.135 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:38.135 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:38.135 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:38.135 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:38.135 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:38.135 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:38.135 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:06:38.135 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:38.135 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:06:38.135 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:06:38.135 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:06:38.135 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:38.135 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:38.135 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:38.135 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:06:38.135 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:06:38.135 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:06:38.135 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:41.431 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:41.431 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:41.431 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:41.431 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:41.431 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:41.431 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:41.431 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:41.431 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:41.431 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:41.431 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:41.431 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:06:41.431 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:41.431 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:41.431 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:41.431 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:41.431 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:41.431 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:41.431 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:41.431 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:41.431 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:41.431 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:41.431 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:06:41.432 Found 0000:84:00.0 (0x8086 - 0x159b) 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:41.432 
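For orientation: the NIC discovery traced above and below just matches known Intel E810 (0x1592, 0x159b), X722 (0x37d2) and Mellanox device IDs against the PCI bus, then globs sysfs to find the net device bound to each matching function. A minimal stand-alone sketch of that sysfs lookup, assuming the 0000:84:00.0 address reported on this host (any other BDF works the same way):

  pci=0000:84:00.0                                  # first E810 function found above
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # same glob nvmf/common.sh uses
  pci_net_devs=("${pci_net_devs[@]##*/}")           # strip the sysfs path, keep e.g. cvl_0_0
  echo "Found net devices under $pci: ${pci_net_devs[*]}"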
18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:06:41.432 Found 0000:84:00.1 (0x8086 - 0x159b) 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:06:41.432 Found net devices under 0000:84:00.0: cvl_0_0 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:06:41.432 Found net devices under 0000:84:00.1: cvl_0_1 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:41.432 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:41.432 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:06:41.432 00:06:41.432 --- 10.0.0.2 ping statistics --- 00:06:41.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:41.432 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:41.432 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:41.432 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:06:41.432 00:06:41.432 --- 10.0.0.1 ping statistics --- 00:06:41.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:41.432 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=1079289 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 1079289 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 
1079289 ']' 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:41.432 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:41.432 [2024-10-08 18:17:09.515048] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:06:41.433 [2024-10-08 18:17:09.515148] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:41.433 [2024-10-08 18:17:09.629881] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:41.433 [2024-10-08 18:17:09.846493] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:41.433 [2024-10-08 18:17:09.846617] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:41.433 [2024-10-08 18:17:09.846672] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:41.433 [2024-10-08 18:17:09.846707] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:41.433 [2024-10-08 18:17:09.846733] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
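Condensed, the test-bed bring-up recorded above is: move one port of the E810 pair into a private network namespace, address both ends, open TCP port 4420 in iptables, verify connectivity in both directions, then launch nvmf_tgt inside the namespace and wait for its RPC socket. A rough sketch under those assumptions (paths shortened to repo-relative form; the polling loop is a simplified stand-in for the waitforlisten helper, not its exact code):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # poll the RPC socket until the target answers (simplified waitforlisten)
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done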
00:06:41.433 [2024-10-08 18:17:09.848885] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:06:41.433 [2024-10-08 18:17:09.848983] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:06:41.433 [2024-10-08 18:17:09.848986] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.366 18:17:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:42.366 18:17:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:06:42.366 18:17:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:06:42.366 18:17:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:42.366 18:17:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:42.366 18:17:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:42.366 18:17:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:42.366 18:17:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:42.623 [2024-10-08 18:17:11.005168] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:42.623 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:42.881 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:43.139 [2024-10-08 18:17:11.654832] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:43.139 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:43.704 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:43.962 Malloc0 00:06:43.962 18:17:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:44.220 Delay0 00:06:44.220 18:17:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:44.478 18:17:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:45.042 NULL1 00:06:45.042 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:45.607 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1079831 00:06:45.607 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:45.607 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1079831 00:06:45.607 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.540 Read completed with error (sct=0, sc=11) 00:06:46.797 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:46.797 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.797 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.797 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.797 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.797 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.797 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.797 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.055 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.055 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:47.055 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:47.620 true 00:06:47.620 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1079831 00:06:47.621 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.186 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:48.186 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:48.186 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:48.186 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:48.186 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:48.186 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:48.186 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:48.186 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:48.186 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:48.444 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:48.444 
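Collected in one place, the provisioning issued over rpc.py in the trace above (rpc.py shortened to its repo-relative path; every argument is as recorded):

  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_malloc_create 32 512 -b Malloc0
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  $rpc bdev_null_create NULL1 1000 512
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
  # the I/O load whose PID (1079831) the stress iterations keep checking with kill -0
  ./build/bin/spdk_nvme_perf -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 30 -q 128 -w randread -o 512 -Q 1000 &
  PERF_PID=$!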
18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:48.444 18:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:49.009 true 00:06:49.009 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1079831 00:06:49.009 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:49.267 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:49.267 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.525 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.525 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.525 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.525 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.525 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:50.089 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:50.089 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:50.347 true 00:06:50.347 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1079831 00:06:50.347 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.720 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:51.720 18:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:51.720 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:51.720 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:51.977 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:51.977 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:51.977 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:51.977 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:51.977 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:51.977 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:51.977 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:52.235 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:52.235 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:52.235 18:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:52.235 18:17:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:52.800 true 00:06:52.800 18:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1079831 00:06:52.800 18:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:53.365 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.365 18:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:53.365 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.365 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.365 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.365 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.365 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.365 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.623 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.623 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.623 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.623 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.623 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.623 18:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:53.623 18:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:54.188 true 00:06:54.188 18:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1079831 00:06:54.188 18:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.753 18:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:54.753 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:54.753 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:54.753 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:54.753 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:55.011 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:55.011 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:55.011 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:55.011 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:55.011 18:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:55.011 18:17:23 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:55.577 true 00:06:55.577 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1079831 00:06:55.577 18:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:56.948 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:56.948 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:57.206 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:57.206 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:57.206 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:57.206 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:57.206 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:57.206 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:57.464 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:57.464 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:57.464 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:57.464 18:17:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:57.723 true 00:06:57.723 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1079831 00:06:57.723 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.657 18:17:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:58.657 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:58.657 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:58.657 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:58.657 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:58.657 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:58.657 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:58.657 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:58.657 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:58.657 18:17:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:58.657 18:17:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:59.223 true 00:06:59.223 18:17:27 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1079831 00:06:59.223 18:17:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.789 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.789 18:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.789 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.789 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.789 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:00.045 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:00.045 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:00.045 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:00.045 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:00.045 18:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:00.046 18:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:00.302 true 00:07:00.560 18:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1079831 00:07:00.560 18:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.125 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:01.125 18:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.125 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:01.125 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:01.125 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:01.383 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:01.641 18:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:01.641 18:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:02.206 true 00:07:02.206 18:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1079831 00:07:02.206 18:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.576 18:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.576 Message suppressed 
999 times: Read completed with error (sct=0, sc=11) 00:07:03.576 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:03.576 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:03.576 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:03.576 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:03.576 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:03.576 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:03.576 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:03.834 18:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:03.834 18:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:04.091 true 00:07:04.091 18:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1079831 00:07:04.091 18:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:04.657 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:04.657 18:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.225 18:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:05.225 18:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:05.790 true 00:07:05.790 18:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1079831 00:07:05.790 18:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.164 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:07.164 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:07.164 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:07.164 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:07.422 true 00:07:07.680 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1079831 00:07:07.680 18:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.938 18:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:08.195 18:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:08.195 18:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:08.453 true 00:07:08.453 18:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1079831 00:07:08.453 18:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.019 18:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:09.276 18:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:09.276 18:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:09.534 true 00:07:09.534 18:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1079831 00:07:09.534 18:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:10.099 18:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:10.357 18:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:10.357 18:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:10.614 true 00:07:10.614 18:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1079831 00:07:10.614 18:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.988 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:11.988 18:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:12.246 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:12.246 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:12.246 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:12.246 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:12.246 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:12.246 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:12.246 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:12.503 18:17:40 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:12.504 18:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:12.761 true 00:07:12.761 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1079831 00:07:12.761 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.331 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:13.331 18:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:13.331 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:13.589 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:13.589 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:13.589 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:13.589 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:13.589 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:13.589 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:13.857 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:13.857 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:13.857 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:13.857 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:13.857 [2024-10-08 18:17:42.288214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.857 [2024-10-08 18:17:42.288327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.857 [2024-10-08 18:17:42.288379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.857 [2024-10-08 18:17:42.288441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.857 [2024-10-08 18:17:42.288495] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.857 [2024-10-08 18:17:42.288555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.857 [2024-10-08 18:17:42.288611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.857 [2024-10-08 18:17:42.288693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.857 [2024-10-08 18:17:42.288751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.857 [2024-10-08 18:17:42.288810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.857 [2024-10-08 18:17:42.288878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.857 [2024-10-08 18:17:42.288936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 
512 > SGL length 1 00:07:13.858 [the identical ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd *ERROR* 'Read NLB 1 * block size 512 > SGL length 1' repeats once per remaining outstanding read, timestamps 18:17:42.289009 through 18:17:42.294454]
00:07:13.858 [2024-10-08 18:17:42.294520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.858 [2024-10-08 18:17:42.294580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.858 [2024-10-08 18:17:42.294641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.858 [2024-10-08 18:17:42.294728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.858 [2024-10-08 18:17:42.294783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.858 [2024-10-08 18:17:42.294846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.858 [2024-10-08 18:17:42.294903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.858 [2024-10-08 18:17:42.294976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.858 [2024-10-08 18:17:42.295040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.858 [2024-10-08 18:17:42.295100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.858 [2024-10-08 18:17:42.295163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.858 [2024-10-08 18:17:42.295232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.858 [2024-10-08 18:17:42.295294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.858 [2024-10-08 18:17:42.295346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.858 [2024-10-08 18:17:42.295402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.858 [2024-10-08 18:17:42.295458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.858 [2024-10-08 18:17:42.295522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.858 [2024-10-08 18:17:42.295589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.858 [2024-10-08 18:17:42.295675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.858 [2024-10-08 18:17:42.295748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.858 [2024-10-08 18:17:42.295813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.858 [2024-10-08 18:17:42.295871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.858 [2024-10-08 18:17:42.295945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.858 [2024-10-08 18:17:42.296032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.858 [2024-10-08 18:17:42.296092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.858 [2024-10-08 18:17:42.296152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 
1 * block size 512 > SGL length 1 00:07:13.858 [2024-10-08 18:17:42.296211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.858 [2024-10-08 18:17:42.296269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.858 [2024-10-08 18:17:42.296334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.858 [2024-10-08 18:17:42.296392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.858 [2024-10-08 18:17:42.296455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.858 [2024-10-08 18:17:42.296517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.858 [2024-10-08 18:17:42.296588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.858 [2024-10-08 18:17:42.296661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.858 [2024-10-08 18:17:42.296742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.858 [2024-10-08 18:17:42.296803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.858 [2024-10-08 18:17:42.296865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.858 [2024-10-08 18:17:42.296941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.858 [2024-10-08 18:17:42.297025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.858 [2024-10-08 18:17:42.297086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.858 [2024-10-08 18:17:42.297147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.858 [2024-10-08 18:17:42.297206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.858 [2024-10-08 18:17:42.297268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.858 [2024-10-08 18:17:42.297338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.858 [2024-10-08 18:17:42.297403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.858 [2024-10-08 18:17:42.297465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.858 [2024-10-08 18:17:42.297527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.858 [2024-10-08 18:17:42.297587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.858 [2024-10-08 18:17:42.297676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.858 [2024-10-08 18:17:42.297741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.297803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.297864] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.297927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.298003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.298070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.298289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.298354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.298415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.298482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.298541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.298600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.298685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.298754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.298812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.298885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.298956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.299035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.299102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.299165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.299232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.299301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.299363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.299424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.299487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.299544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.299611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.299696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 
[2024-10-08 18:17:42.299758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.299816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.299878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.299939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.300015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.300066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.300121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.300183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.300241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.300302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.300364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.300426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.300497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.300562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.300638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.300726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.300791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.300860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.300932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.301010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.301073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.301132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.301192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.301252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.301318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.302264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.302339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.302399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.302456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.302516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.302575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.302661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.302739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.302801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.302864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.302929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.303010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.303072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.303131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.303207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.303271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.303334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.303396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.303456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.303514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.303576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.303660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.303728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.303792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.303855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.303924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.304011] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.304073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.304133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.304192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.304261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.304328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.304387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.304451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.304520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.859 [2024-10-08 18:17:42.304582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.304671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.304730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.304788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.304844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.304903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.304967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.305046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.305113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.305180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.305239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.305291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.305350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.305409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.305478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.305545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.305599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 
[2024-10-08 18:17:42.305686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.305749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.305809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.305875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.305960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.306021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.306078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.306138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.306198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.306259] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.306319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.306384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.306595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.306683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.306750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.306821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.306884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.306962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.307032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.307099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.307160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.307218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.307278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.307340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.307404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.307475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.307536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.307601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.308078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.308144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.308205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.308266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.308333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.308395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.308462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.308522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.308587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.308669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.308735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.308803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.308867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.308932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.309014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.309076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.309134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.309192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.309252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.309317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.309382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.309444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.309503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.309563] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.309621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.309720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.309778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.309841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.309907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.309990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.310061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.310123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.310185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.310247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.310308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.310371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.310436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.310500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.310574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.310660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.860 [2024-10-08 18:17:42.310727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.861 [2024-10-08 18:17:42.310791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.861 [2024-10-08 18:17:42.310855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.861 [2024-10-08 18:17:42.310917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.861 [2024-10-08 18:17:42.310994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.861 [2024-10-08 18:17:42.311053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.861 [2024-10-08 18:17:42.311111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.861 [2024-10-08 18:17:42.311182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.861 [2024-10-08 18:17:42.311239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.861 18:17:42 
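The error record condensed above is the NVMe-oF target's read-command length check: a read for NLB blocks of the namespace's 512-byte block size needs NLB * 512 bytes of payload buffer, but the SGL carried by the command describes only 1 byte, so each read is rejected while the namespace is being hot-resized. A minimal sketch of the rule being enforced (illustration only, written in Python rather than the actual C in ctrlr_bdev.c; the function name is made up):

# Sketch of the length rule behind the repeated "Read NLB 1 * block size 512 > SGL length 1" record.
def read_payload_fits_sgl(nlb: int, block_size: int, sgl_length: int) -> bool:
    """True when the NLB * block_size payload fits in the buffer the SGL describes."""
    return nlb * block_size <= sgl_length

# Values taken from the log: NLB 1, block size 512, SGL length 1.
# 1 * 512 = 512 bytes requested against a 1-byte buffer, so the check fails
# and the target logs the error and completes the read with an error status.
print(read_payload_fits_sgl(nlb=1, block_size=512, sgl_length=1))  # -> False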
00:07:13.861 18:17:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018
00:07:13.861 [2024-10-08 18:17:42.311301 - 18:17:42.311618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (identical record repeated back to back throughout this interval)
00:07:13.861 18:17:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018
00:07:13.861 - 00:07:13.863 [2024-10-08 18:17:42.311703 - 18:17:42.325931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (identical record repeated back to back throughout this interval)
[2024-10-08 18:17:42.326012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.863 [2024-10-08 18:17:42.326081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.863 [2024-10-08 18:17:42.326143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.863 [2024-10-08 18:17:42.326344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.863 [2024-10-08 18:17:42.326414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.863 [2024-10-08 18:17:42.326473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.863 [2024-10-08 18:17:42.326534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.863 [2024-10-08 18:17:42.326610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.863 [2024-10-08 18:17:42.326696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.863 [2024-10-08 18:17:42.326761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.863 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:07:13.863 [2024-10-08 18:17:42.326828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.863 [2024-10-08 18:17:42.326889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.863 [2024-10-08 18:17:42.326951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.863 [2024-10-08 18:17:42.327028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.863 [2024-10-08 18:17:42.327096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.863 [2024-10-08 18:17:42.327157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.863 [2024-10-08 18:17:42.327214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.863 [2024-10-08 18:17:42.327281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.863 [2024-10-08 18:17:42.327335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.863 [2024-10-08 18:17:42.327392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.863 [2024-10-08 18:17:42.327451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.863 [2024-10-08 18:17:42.327521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.863 [2024-10-08 18:17:42.327578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.863 [2024-10-08 18:17:42.327672] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.863 [2024-10-08 18:17:42.327758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.863 [2024-10-08 
18:17:42.327823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.863 [2024-10-08 18:17:42.327889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.863 [2024-10-08 18:17:42.327970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.863 [2024-10-08 18:17:42.328046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.863 [2024-10-08 18:17:42.328114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.863 [2024-10-08 18:17:42.328171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.863 [2024-10-08 18:17:42.328229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.863 [2024-10-08 18:17:42.328292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.863 [2024-10-08 18:17:42.328360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.863 [2024-10-08 18:17:42.328415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.863 [2024-10-08 18:17:42.328467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.863 [2024-10-08 18:17:42.328524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.328593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.328648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.328736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.328798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.328857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.328915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.328988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.329052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.329106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.329158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.329216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.329274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.329332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.330137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:07:13.864 [2024-10-08 18:17:42.330212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.330288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.330349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.330413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.330481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.330565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.330626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.330706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.330771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.330830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.330894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.330967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.331030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.331092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.331151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.331211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.331270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.331328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.331385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.331443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.331503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.331571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.331645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.331716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.331784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.331847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.331914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.331989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.332048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.332108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.332168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.332234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.332295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.332355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.332416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.332475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.332540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.332607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.332690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.332754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.332815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.332887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.332947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.333026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.333086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.333143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.333202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.333269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.333329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.333388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.333447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.333516] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.333578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.333637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.333719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.333780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.333846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.333909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.333983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.334045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.334103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.334163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.334223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.864 [2024-10-08 18:17:42.334429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.334488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.334547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.334607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.334695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.334763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.334823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.334884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.334938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.335007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.335065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.335135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.335199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.335268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 
[2024-10-08 18:17:42.335328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.335390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.335441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.335511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.335569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.335647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.335734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.335796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.335857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.335917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.335989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.336070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.336126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.336176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.336233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.336291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.336349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.336414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.336484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.337003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.337074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.337133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.337190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.337263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.337321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.337379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.337439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.337497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.337559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.337625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.337709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.337769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.337830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.337893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.337952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.338027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.338103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.338161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.338217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.338279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.338336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.338390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.338458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.338520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.338584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.338668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.338736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.338797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.338854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.338920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.338997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.339055] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.339121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.339177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.339233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.339292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.339348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.339404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.339461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.339519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.339576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.339659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.339740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.339807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.339869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.339934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.340019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.340078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.340136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.340195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.340255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.340325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.340385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.340444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.340506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.340574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.340646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 
[2024-10-08 18:17:42.340714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.340773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.340833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.340895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.340971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.341034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.341248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.341309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.341370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.341439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.865 [2024-10-08 18:17:42.341495] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.341554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.341605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.341695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.341756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.341818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.341886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.341944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.342034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.342092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.342167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.342220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.342276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.342343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.342399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.342466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.342523] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.342582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.342646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.342727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.342787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.342846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.342909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.342982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.343051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.343108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.343896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.343994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.344072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.344136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.344195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.344253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.344310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.344370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.344429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.344499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.344567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.344626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.344711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.344773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.344839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.344906] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.344981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.345040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.345105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.345165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.345217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.345278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.345333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.345391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.345453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.345517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.345588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.345645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.345723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.345781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.345853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.345928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.346005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.346065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.346125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.346175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.346230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.346286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.346343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.346411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.346470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 
[2024-10-08 18:17:42.346529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.346583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.346677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.346738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.346797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.346859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.346930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.347010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.347070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.347130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.347186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.347248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.347314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.347377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.347437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.347496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.347556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.347614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.347698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.347768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.347833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.347896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.347971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.348197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.348260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.348321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.348390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.348452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.348513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.348573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.348663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.348735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.866 [2024-10-08 18:17:42.348801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.867 [2024-10-08 18:17:42.348864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.867 [2024-10-08 18:17:42.348938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.867 [2024-10-08 18:17:42.348998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.867 [2024-10-08 18:17:42.349058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.867 [2024-10-08 18:17:42.349122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.867 [2024-10-08 18:17:42.349179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.867 [2024-10-08 18:17:42.349241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.867 [2024-10-08 18:17:42.349301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.867 [2024-10-08 18:17:42.349360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.867 [2024-10-08 18:17:42.349426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.867 [2024-10-08 18:17:42.349487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.867 [2024-10-08 18:17:42.349543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.867 [2024-10-08 18:17:42.349600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.867 [2024-10-08 18:17:42.349677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.867 [2024-10-08 18:17:42.349738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.867 [2024-10-08 18:17:42.349802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.867 [2024-10-08 18:17:42.349860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.867 [2024-10-08 18:17:42.349921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:13.867 [2024-10-08 18:17:42.350006] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:13.867 [2024-10-08 18:17:42.350064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical "ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" lines repeated; console timestamps 00:07:13.867 through 00:07:14.154, log timestamps 2024-10-08 18:17:42.350130 through 18:17:42.389510; duplicate output elided ...]
00:07:14.154 [2024-10-08 18:17:42.389568] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.154 [2024-10-08 18:17:42.389628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.154 [2024-10-08 18:17:42.389713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.154 [2024-10-08 18:17:42.389766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.154 [2024-10-08 18:17:42.389828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.154 [2024-10-08 18:17:42.389885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.154 [2024-10-08 18:17:42.389972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.154 [2024-10-08 18:17:42.390030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.154 [2024-10-08 18:17:42.390093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.154 [2024-10-08 18:17:42.390149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.154 [2024-10-08 18:17:42.390207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.154 [2024-10-08 18:17:42.390259] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.154 [2024-10-08 18:17:42.390315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.154 [2024-10-08 18:17:42.390371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.154 [2024-10-08 18:17:42.390430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.154 [2024-10-08 18:17:42.390489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.154 [2024-10-08 18:17:42.390546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.154 [2024-10-08 18:17:42.390603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.154 [2024-10-08 18:17:42.390682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.154 [2024-10-08 18:17:42.390746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.154 [2024-10-08 18:17:42.390814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.154 [2024-10-08 18:17:42.390878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.154 [2024-10-08 18:17:42.391104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.154 [2024-10-08 18:17:42.391167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.154 [2024-10-08 18:17:42.391228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.154 [2024-10-08 18:17:42.391288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.154 
[2024-10-08 18:17:42.391356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.154 [2024-10-08 18:17:42.391424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.154 [2024-10-08 18:17:42.391482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.154 [2024-10-08 18:17:42.391541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.154 [2024-10-08 18:17:42.391598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.154 [2024-10-08 18:17:42.391685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.154 [2024-10-08 18:17:42.391769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.154 [2024-10-08 18:17:42.391832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.154 [2024-10-08 18:17:42.391896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.154 [2024-10-08 18:17:42.391979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.154 [2024-10-08 18:17:42.392060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.154 [2024-10-08 18:17:42.392121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.154 [2024-10-08 18:17:42.392180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.154 [2024-10-08 18:17:42.392252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.392321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.392380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.392440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.392500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.392569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.392629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.392713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.392783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.392846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.392904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.392979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.393036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.393100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.393159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.393226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.393284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.393344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.393403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.393454] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.393509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.393565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.393624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.393712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.393767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.393824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.393881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.393941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.394014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.394504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.394566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.394623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.394710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.394781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.394842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.394902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.394979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.395048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.395107] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.395166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.395221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.395276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.395335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.395394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.395451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.395511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.395571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.395657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.395717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.395781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.395845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.395906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.395981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.396059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.396119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.396177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.396234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.396295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.396350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.396407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.396465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.396525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.396588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.396658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 
[2024-10-08 18:17:42.396740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.396810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.396876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.396937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.397020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.397082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.397141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.397201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.397260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.397327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.397394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.397455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.397514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.397577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.397665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.397727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.397787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.397847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.397911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.397988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.398055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.398116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.398177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.398238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.398303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.398362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.398423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.398474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.398535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.398760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.398821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.155 [2024-10-08 18:17:42.398887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.398969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.399028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.399092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:07:14.156 [2024-10-08 18:17:42.399158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.399222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.399279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.399344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.399411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.399462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.399518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.399574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.399630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.399733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.399794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.400476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.400539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.400599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.400685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.400752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:07:14.156 [2024-10-08 18:17:42.400812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.400875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.400936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.401008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.401060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.401115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.401171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.401229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.401293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.401348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.401405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.401462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.401523] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.401575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.401633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.401717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.401781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.401841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.401898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.401955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.402027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.402091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.402155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.402224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.402283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.402343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.402401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.402461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.402530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.402590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.402659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.402741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.402813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.402882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.402945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.403020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.403080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.403146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.403215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.403275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.403337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.403396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.403457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.403522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.403583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.403665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.403745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.403818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.403884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.403962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.404040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.404106] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.404167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.404227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.404290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.404351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.404411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.404474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.404533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.404757] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.404824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.404891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.404967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.405028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.405090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.405142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.405201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.405262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.405319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.405377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.405436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.405498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.405564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.405622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.405698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.405761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.405820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 
[2024-10-08 18:17:42.405882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.405941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.406019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.406075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.406139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.406197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.406256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.156 [2024-10-08 18:17:42.406314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.406376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.406434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.406486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.406547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.406606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.406688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.406749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.406805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.406862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.406920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.406981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.407054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.407113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.407174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.407234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.407295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.407354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.407417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.407478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.407536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.408348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.408408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.408464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.408521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.408579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.408643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.408725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.408784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.408842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.408909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.408983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.409043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.409100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.409162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.409218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.409277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.409328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.409382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.409442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.409500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.409558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.409626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.409721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.409783] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.409845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.409908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.409992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.410054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.410114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.410172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.410230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.410292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.410359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.410429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.410488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.410547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.410609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.410695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.410766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.410829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.410891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.410964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.411032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.411092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.411152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.411211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.411272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.411334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.411392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 
[2024-10-08 18:17:42.411455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.411513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.411580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.411660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.411731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.411788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.411847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.411906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.411981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.412058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.412116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.412176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.412234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.412293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.412350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.412555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.412613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.412698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.412760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.412821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.157 [2024-10-08 18:17:42.412884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.158 [2024-10-08 18:17:42.412950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.158 [2024-10-08 18:17:42.413036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.158 [2024-10-08 18:17:42.413099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.158 [2024-10-08 18:17:42.413158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.158 [2024-10-08 18:17:42.413221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:14.158 [2024-10-08 18:17:42.413284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[2024-10-08 18:17:42.449542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.163 [2024-10-08 18:17:42.449599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.163 [2024-10-08 18:17:42.449685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.163 [2024-10-08 18:17:42.449746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.163 [2024-10-08 18:17:42.449815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.163 [2024-10-08 18:17:42.449879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.163 [2024-10-08 18:17:42.449943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.163 [2024-10-08 18:17:42.450036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.163 [2024-10-08 18:17:42.450094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.163 [2024-10-08 18:17:42.450162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.163 [2024-10-08 18:17:42.450212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.163 [2024-10-08 18:17:42.450272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.163 [2024-10-08 18:17:42.450329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.163 [2024-10-08 18:17:42.450382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.163 [2024-10-08 18:17:42.450442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.163 [2024-10-08 18:17:42.450500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.163 [2024-10-08 18:17:42.450557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.163 [2024-10-08 18:17:42.450612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.163 [2024-10-08 18:17:42.450690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.163 [2024-10-08 18:17:42.451223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.163 [2024-10-08 18:17:42.451287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.163 [2024-10-08 18:17:42.451355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.163 [2024-10-08 18:17:42.451420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.163 [2024-10-08 18:17:42.451479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.163 [2024-10-08 18:17:42.451536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.163 [2024-10-08 18:17:42.451596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:14.163 [2024-10-08 18:17:42.451681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.163 [2024-10-08 18:17:42.451747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.163 [2024-10-08 18:17:42.451807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.163 [2024-10-08 18:17:42.451868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.163 [2024-10-08 18:17:42.451930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.163 [2024-10-08 18:17:42.452008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.163 [2024-10-08 18:17:42.452074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.163 [2024-10-08 18:17:42.452136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.163 [2024-10-08 18:17:42.452200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.163 [2024-10-08 18:17:42.452260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.163 [2024-10-08 18:17:42.452314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.163 [2024-10-08 18:17:42.452372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.163 [2024-10-08 18:17:42.452429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.163 [2024-10-08 18:17:42.452493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.163 [2024-10-08 18:17:42.452549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.163 [2024-10-08 18:17:42.452614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.163 [2024-10-08 18:17:42.452694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.163 [2024-10-08 18:17:42.452753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.163 [2024-10-08 18:17:42.452814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.163 [2024-10-08 18:17:42.452874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.163 [2024-10-08 18:17:42.452942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.163 [2024-10-08 18:17:42.453017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.163 [2024-10-08 18:17:42.453076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.163 [2024-10-08 18:17:42.453137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.163 [2024-10-08 18:17:42.453188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.163 [2024-10-08 18:17:42.453244] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.163 [2024-10-08 18:17:42.453305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.163 [2024-10-08 18:17:42.453363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.163 [2024-10-08 18:17:42.453423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.163 [2024-10-08 18:17:42.453480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.163 [2024-10-08 18:17:42.453535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.163 [2024-10-08 18:17:42.453592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.163 [2024-10-08 18:17:42.453670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.163 [2024-10-08 18:17:42.453750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.163 [2024-10-08 18:17:42.453814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.163 [2024-10-08 18:17:42.453882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.163 [2024-10-08 18:17:42.453965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.454042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.454100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.454163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.454223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.454285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.454344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.454404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.454470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.454530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.454589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.454678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.454767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.454837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.454903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 
[2024-10-08 18:17:42.454970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.455068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.455130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.455189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.455248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.455311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.456131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.456193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.456254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.456314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.456372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.456437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.456497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.456558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.456613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.456696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.456757] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.456816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.456881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.456955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.457014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.457078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.457130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.457189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.457246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.457306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.457364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.457423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.457483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.457539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.457595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.457674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.457740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.457797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.457858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.457920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.457999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.458062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.458124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.458183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.458243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.458309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.458370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.458431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.458488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.458555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.458614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.458700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.458765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.458830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.458902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.458989] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.459066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.459127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.459186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.459248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.459307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.459367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.459426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.459477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.459535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.459592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.459677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.459745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.459808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.459877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.459938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.460006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.460063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.460119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.460315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.460377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.460430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.460485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.460541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.460600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.460683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 
[2024-10-08 18:17:42.460746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.460806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.460862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.460923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.461000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.461060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.461119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.461180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.461243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.164 [2024-10-08 18:17:42.461678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.461743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.461815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.461878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.461941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.462015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.462079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.462135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.462191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.462249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.462307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.462374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.462444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.462501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.462559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.462617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.462716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.462779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.462840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.462900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.462977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.463055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.463119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.463176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.463238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.463299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.463359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.463421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.463479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.463538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.463598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.463689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.463779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.463846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.463911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.463972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.464074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.464144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.464206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.464265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.464322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.464385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.464454] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.464514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.464578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.464669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.464733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.464795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.464854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.464916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.464988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.465042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.465098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.465157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.465217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.465277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.465339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.465398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.465460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.465516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.465573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.465645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.465718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.465778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.466002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.466059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.466116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.466170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 
[2024-10-08 18:17:42.466229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.466287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.466345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.466402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.466459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.466515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.466570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.466624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.466708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.466773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.466840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.466909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.466984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.467044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.467105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.467164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.467228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.467287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.467344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.467403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.467464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.467524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.467591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.467676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.467741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.467805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.467873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.467954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.468033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.468093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.468153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.468212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.468280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.468339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.468397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.468456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.165 [2024-10-08 18:17:42.468515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.468572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.468644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.468737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.468797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.468864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.468951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:07:14.166 [2024-10-08 18:17:42.469721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.469784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.469844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.469907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.469966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.470039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.470099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.470155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:07:14.166 [2024-10-08 18:17:42.470213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.470263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.470321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.470378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.470437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.470501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.470569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.470629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.470716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.470779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.470841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.470905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.470983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.471041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.471101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.471168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.471235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.471298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.471356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.471417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.471486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.471552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.471612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.471702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.471773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.471843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.471903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.471961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.472034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.472098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.472155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.472223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.472280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.472338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.472393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.472452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.472512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.472576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.472659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.472725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.472786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.472850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.472915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.472989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.473062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.473120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.473180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.473251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.473318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.473378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.473437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.473496] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.473556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.473643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.473715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.473777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.474014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.474076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.474140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.474204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.474274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.474337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.474398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.474459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.474524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.474592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.474676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.474740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.474810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.474876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.474953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.475014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.475501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.475564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.475624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.475704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 [2024-10-08 18:17:42.475766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 
[2024-10-08 18:17:42.475822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.166 
[... the same ctrlr_bdev.c:361 nvmf_bdev_ctrlr_read_cmd error ("Read NLB 1 * block size 512 > SGL length 1") repeats several hundred more times through 18:17:42.514263 ...] 00:07:14.172 
[2024-10-08 18:17:42.514331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.172 [2024-10-08 18:17:42.514391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.172 [2024-10-08 18:17:42.514448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.172 [2024-10-08 18:17:42.514513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.172 [2024-10-08 18:17:42.514572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.172 [2024-10-08 18:17:42.514628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.172 [2024-10-08 18:17:42.514711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.172 [2024-10-08 18:17:42.514773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.172 [2024-10-08 18:17:42.514834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.172 [2024-10-08 18:17:42.514891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.172 [2024-10-08 18:17:42.514951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.172 [2024-10-08 18:17:42.515023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.172 [2024-10-08 18:17:42.515079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.172 [2024-10-08 18:17:42.515135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.172 [2024-10-08 18:17:42.515191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.172 [2024-10-08 18:17:42.515247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.172 [2024-10-08 18:17:42.515308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.172 [2024-10-08 18:17:42.515369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.172 [2024-10-08 18:17:42.515428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.172 [2024-10-08 18:17:42.515489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.172 [2024-10-08 18:17:42.515551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.172 [2024-10-08 18:17:42.515616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.172 [2024-10-08 18:17:42.515700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.172 [2024-10-08 18:17:42.515763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.172 [2024-10-08 18:17:42.515826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.172 [2024-10-08 18:17:42.515889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:14.172 [2024-10-08 18:17:42.515969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.172 [2024-10-08 18:17:42.516032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.172 [2024-10-08 18:17:42.516093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.172 [2024-10-08 18:17:42.516158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.172 [2024-10-08 18:17:42.516222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.172 [2024-10-08 18:17:42.516283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.172 [2024-10-08 18:17:42.516344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.172 [2024-10-08 18:17:42.516409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.172 [2024-10-08 18:17:42.516467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.172 [2024-10-08 18:17:42.516528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.172 [2024-10-08 18:17:42.516586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.172 [2024-10-08 18:17:42.516674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.172 [2024-10-08 18:17:42.516746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.172 [2024-10-08 18:17:42.516809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.172 [2024-10-08 18:17:42.516873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.172 [2024-10-08 18:17:42.516954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.172 [2024-10-08 18:17:42.517019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.172 [2024-10-08 18:17:42.517080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.172 [2024-10-08 18:17:42.517138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.172 [2024-10-08 18:17:42.517193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.172 [2024-10-08 18:17:42.517404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.172 [2024-10-08 18:17:42.517464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.172 [2024-10-08 18:17:42.517525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.172 [2024-10-08 18:17:42.517581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.172 [2024-10-08 18:17:42.517663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.172 [2024-10-08 18:17:42.517735] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.172 [2024-10-08 18:17:42.517800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.172 [2024-10-08 18:17:42.517863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.172 [2024-10-08 18:17:42.517925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.172 [2024-10-08 18:17:42.517994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.172 [2024-10-08 18:17:42.518084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.172 [2024-10-08 18:17:42.518146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.172 [2024-10-08 18:17:42.518209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.172 [2024-10-08 18:17:42.518260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.172 [2024-10-08 18:17:42.518318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.172 [2024-10-08 18:17:42.518376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.172 [2024-10-08 18:17:42.518440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.172 [2024-10-08 18:17:42.518499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.172 [2024-10-08 18:17:42.518559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.172 [2024-10-08 18:17:42.518618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.172 [2024-10-08 18:17:42.518700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.172 [2024-10-08 18:17:42.518755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.172 [2024-10-08 18:17:42.518813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.518875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.518951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.519014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.519072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.519132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.519191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.519251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.519312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 
[2024-10-08 18:17:42.519372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.519432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.519491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.519543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.519597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.519662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.519720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.519779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.519835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.519894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.519960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.520020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.520080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.520141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.520203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.520270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.520331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.520390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.520459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.520525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.520587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.520648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.520746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.520809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.520873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.520935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.521012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.521075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.521138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.521200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.521260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.521323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.521889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.521973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.522058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.522120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.522181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.522241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.522301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.522359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.522411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.522467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.522528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.522584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.522644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.522734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.522793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.522852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.522916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.522987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.523044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.523112] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.523174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.523232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.523288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.523348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.523411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.523463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.523519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.523575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.523631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.523714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.523776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.523837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.523893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.523952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.524026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.524083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.524142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.524204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.524274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.524333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.524392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.524450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.524512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.524581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.524642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 
[2024-10-08 18:17:42.524727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.524789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.524857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.524919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.524995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.525057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.525115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.525182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.525241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.525303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.525372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.525442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.525500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.525559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.525613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.525716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.525781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.525845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.173 [2024-10-08 18:17:42.525907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.526740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.526806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.526866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.526925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.527001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.527058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.527109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.527157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.527213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.527272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.527329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.527387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.527445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.527507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.527565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.527659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.527723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.527786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.527847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.527912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.528000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.528063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.528122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.528187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.528256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.528314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.528372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.528432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.528501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.528565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.528626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.528711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.528777] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.528847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.528909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.528982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.529051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.529107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.529165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.529223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.529278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.529338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.529402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.529459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.529523] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.529581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.529674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.529730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.529788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.529849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.529912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.529992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.530050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.530111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.530177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.530246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.530305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.530364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 
[2024-10-08 18:17:42.530424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.530488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.530565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.530625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.530711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.530990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.531063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.531125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.531186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.531245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.531313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.531373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.531432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.531493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.531555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.531622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.531707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.531768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.531829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.531901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.531977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.532039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.532098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.532160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.532222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.532283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.532343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.532401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.532461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.532518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.532577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.532635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.532717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.532777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.532840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.532898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.532981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.533036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.533095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.533160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.533215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.533274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.533331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.533390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.174 [2024-10-08 18:17:42.533452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.533511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.533564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.533621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.533716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.533780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.533840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.533901] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.533975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.534049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.534105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.534167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.534224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.534283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.534355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.534417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.534476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.534539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.534609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.534700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.534761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.534826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.534889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.534953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.535027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.535841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.535906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.535979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.536045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.536102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.536168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.536232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.536293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 
[2024-10-08 18:17:42.536352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.536408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.536465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.536525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.536585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.536666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.536731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.536786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.536844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.536904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.536976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.537035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.537091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.537147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.537203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.537259] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.537317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.537380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.537439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.537503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.537565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.537657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.537724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.537786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.537851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.537915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.537999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.538078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.538135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.538196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.538257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.538325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.538385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.538445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.538501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.538571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.538659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.538741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.538804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.538873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.538963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.539028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.539090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.539164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.539226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.539304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.539370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.539442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.539511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.539575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.539639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.539718] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.539780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.539845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.539909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.539970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.175 [2024-10-08 18:17:42.540186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.540248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.540316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.540378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.540439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.540502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.540566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.540629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.540694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.540756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.540815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.540880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.540939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.541001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.541060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.541119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.541566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.541636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.541709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:07:14.176 [2024-10-08 18:17:42.541771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.541834] ctrlr_bdev.c: 
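The repeated *ERROR* line above comes from SPDK's NVMe-oF target code (ctrlr_bdev.c, nvmf_bdev_ctrlr_read_cmd): a read command is rejected when its requested data length, NLB times the logical block size (here 1 * 512), exceeds the length described by the SGL (here 1 byte), and the request then completes with sct=0, sc=15, consistent with the NVMe generic status Data SGL Length Invalid (0x0F). The fragment below is a minimal standalone sketch of that length check, using hypothetical names (read_req, check_read_length); it is an illustration only, not the SPDK implementation.

    #include <inttypes.h>
    #include <stdio.h>

    #define SC_SUCCESS                 0x00
    #define SC_DATA_SGL_LENGTH_INVALID 0x0f   /* reported as sc=15 in the log above */

    struct read_req {
        uint64_t num_blocks;   /* NLB: number of logical blocks to read        */
        uint32_t block_size;   /* logical block size of the namespace          */
        uint32_t sgl_length;   /* total payload length described by the SGL    */
    };

    /* Returns an NVMe generic (sct=0) status code for the length check. */
    static int check_read_length(const struct read_req *req)
    {
        if (req->num_blocks * req->block_size > req->sgl_length) {
            fprintf(stderr, "Read NLB %" PRIu64 " * block size %" PRIu32
                    " > SGL length %" PRIu32 "\n",
                    req->num_blocks, req->block_size, req->sgl_length);
            return SC_DATA_SGL_LENGTH_INVALID;
        }
        return SC_SUCCESS;
    }

    int main(void)
    {
        /* The case seen in the log: 1 block of 512 bytes, SGL describing 1 byte. */
        struct read_req req = { .num_blocks = 1, .block_size = 512, .sgl_length = 1 };

        return check_read_length(&req) == SC_DATA_SGL_LENGTH_INVALID ? 0 : 1;
    }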
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.541907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.541993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.542056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.542118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.542178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.542252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.542315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.542376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.542439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.542504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.542570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.542648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.542724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.542788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.542854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.542925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.543008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.543069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.543131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.543194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.543256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.543319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.543381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.543442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.543501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 
[2024-10-08 18:17:42.543561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.543622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.543710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.543771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.543837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.543899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.543982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.544044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.544102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.544163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.544231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.544291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.544358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.544417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.544469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.544530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.544593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.544678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.544742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.544803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.544865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.544929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.545010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.545079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.545145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.545206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.545266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.545341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.545409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.545469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.545529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.545591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.545677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.545749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.545979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.546047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.546119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.546178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.546243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.546305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.546376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.546440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.546500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.546561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.546623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.546725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.546791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.546857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.546921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.546997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.547052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.547115] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.547173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.547233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.547297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.547356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.547423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.176 [2024-10-08 18:17:42.547483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.547549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.547602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.547690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.547760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.547819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.547884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.547950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.548032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.548093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.548149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.548207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.548265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.548325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.548385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.548446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.548503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.548562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.548623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.548707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 
[2024-10-08 18:17:42.548771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.548836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.548902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.548980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.549797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.549863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.549928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.550014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.550076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.550139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.550200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.550265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.550332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.550392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.550453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.550514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.550574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.550628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.550715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.550774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.550838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.550904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.550981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.551049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.551111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.551164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.551223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.551283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.551341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.551407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.551465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.551531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.551587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.551666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.551730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.551793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.551856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.551918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.551993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.552050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.552108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.552167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.552225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.552293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.552363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.552425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.552487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.552546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.552612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.552705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.552771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.552834] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.552896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.552972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.553033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.553094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.553160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.553222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.553287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.553347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.553407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.553468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.553534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.553622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.553698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.553759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.553826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.553891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.554107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.554173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.554238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.554304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.554368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.554439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.554496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.554557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.554617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 
[2024-10-08 18:17:42.554688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.554756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.554827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.554887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.554957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.177 [2024-10-08 18:17:42.555018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.555078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.555146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.555214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.555275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.555340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.555403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.555470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.555531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.555586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.555645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.555712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.555775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.555836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.555897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.555956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.556013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.556077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.556139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.556201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.556263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.556328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.556390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.556453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.556524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.556591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.556662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.556729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.556795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.556861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.556927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.556990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.557054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.557120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.557186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.557254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.557316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.557381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.557446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.557511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.557578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.557641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.557716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.557780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.557840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.557910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.557977] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.558035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.558096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.558621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.558706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.558770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.558839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.558909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.558981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.559035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.559095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.559154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.559216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.559277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.559337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.559400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.559461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.559524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.559583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.559646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.559720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.559786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.559848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.559913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.559981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.560053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 
[2024-10-08 18:17:42.560118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.560183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.560236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.560305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.560367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.560437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.560500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.560559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.560622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.560694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.560755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.560823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.560885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.560949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.561022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.561085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.561156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.561229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.561291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.561354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.561423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.561499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.561572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.561636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.561712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.561786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.561856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.561922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.562000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.562065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.178 [2024-10-08 18:17:42.562133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.562195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.562256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.562318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.562384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.562445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.562506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.562571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.562671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.562738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.562803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.563647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.563724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.563793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.563862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.563926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.564006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.564069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.564124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.564185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.564250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.564308] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.564375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.564434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.564490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.564550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.564612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.564716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.564780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.564847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.564908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.564984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.565037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.565095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.565155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.565226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.565286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.565350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.565410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.565474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.565526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.565583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.565672] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.565746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.565810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.565865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.565932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 
[2024-10-08 18:17:42.566008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.566076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.566132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.566189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.566246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.566301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.566359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.566421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.566483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.566544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.566605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.566705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.566775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.566841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.566906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.566986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.567054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.567107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.567168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.567228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.567288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.567355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.567414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.567477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.567533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.567588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.567666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.567729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.567979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.568042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.568102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.568165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.568220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.568282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.568342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.568403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.568465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.568527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.568592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.568681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.568748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.568812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.568880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.568967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.569471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.569546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.569612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.569702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.569770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.569836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.569904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.569983] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.179 [2024-10-08 18:17:42.570059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical "ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" entries repeat continuously from 18:17:42.570128 through 18:17:42.609322 (console time 00:07:14.179-00:07:14.185) ...]
00:07:14.185 [2024-10-08 18:17:42.609384] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.185 [2024-10-08 18:17:42.609446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.185 [2024-10-08 18:17:42.609519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.185 [2024-10-08 18:17:42.609583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.185 [2024-10-08 18:17:42.609667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.185 [2024-10-08 18:17:42.609733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.185 [2024-10-08 18:17:42.609802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.185 [2024-10-08 18:17:42.609867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.185 [2024-10-08 18:17:42.609937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.185 [2024-10-08 18:17:42.610016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.185 [2024-10-08 18:17:42.610082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.185 [2024-10-08 18:17:42.610297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.185 [2024-10-08 18:17:42.610364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.185 [2024-10-08 18:17:42.610433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.185 [2024-10-08 18:17:42.610496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.185 [2024-10-08 18:17:42.610553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.185 [2024-10-08 18:17:42.610606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.185 [2024-10-08 18:17:42.610688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.185 [2024-10-08 18:17:42.610771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.185 [2024-10-08 18:17:42.610833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.185 [2024-10-08 18:17:42.610904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.185 [2024-10-08 18:17:42.610980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.185 [2024-10-08 18:17:42.611048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.185 [2024-10-08 18:17:42.611107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.185 [2024-10-08 18:17:42.611175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.185 [2024-10-08 18:17:42.611234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.185 
[2024-10-08 18:17:42.611297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.185 [2024-10-08 18:17:42.611846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.185 [2024-10-08 18:17:42.611914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.185 [2024-10-08 18:17:42.611978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.185 [2024-10-08 18:17:42.612052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.185 [2024-10-08 18:17:42.612112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.185 [2024-10-08 18:17:42.612176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.185 [2024-10-08 18:17:42.612235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.185 [2024-10-08 18:17:42.612292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.185 [2024-10-08 18:17:42.612348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.185 [2024-10-08 18:17:42.612407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.185 [2024-10-08 18:17:42.612465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.185 [2024-10-08 18:17:42.612522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.185 [2024-10-08 18:17:42.612581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.185 [2024-10-08 18:17:42.612661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.185 [2024-10-08 18:17:42.612723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.185 [2024-10-08 18:17:42.612786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.185 [2024-10-08 18:17:42.612859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.185 [2024-10-08 18:17:42.612937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.185 [2024-10-08 18:17:42.613004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.185 [2024-10-08 18:17:42.613065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.185 [2024-10-08 18:17:42.613131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.185 [2024-10-08 18:17:42.613198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.185 [2024-10-08 18:17:42.613260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.185 [2024-10-08 18:17:42.613323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.185 [2024-10-08 18:17:42.613390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:14.185 [2024-10-08 18:17:42.613455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.185 [2024-10-08 18:17:42.613515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.185 [2024-10-08 18:17:42.613575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.185 [2024-10-08 18:17:42.613659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.185 [2024-10-08 18:17:42.613726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.185 [2024-10-08 18:17:42.613799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.185 [2024-10-08 18:17:42.613863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.185 [2024-10-08 18:17:42.613925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.185 [2024-10-08 18:17:42.614003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.185 [2024-10-08 18:17:42.614077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.185 [2024-10-08 18:17:42.614139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.185 [2024-10-08 18:17:42.614202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.614263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.614333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.614394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.614460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.614519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.614575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.614658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.614724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.614787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.614852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.614913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.614992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.615050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.615111] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.615180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.615248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.615308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.615368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.615424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.615482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.615543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.615598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.615684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.615752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.615816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.615884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.615961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.616167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.616239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.616304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.616367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.616428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:07:14.186 [2024-10-08 18:17:42.616495] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.616558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.616619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.616707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.616776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.616839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.616903] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.616979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.617055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.617119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.617188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.617251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.617332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.617402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.617464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.617523] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.617587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.617677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.617752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.617815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.617879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.617959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.618019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.618075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.618136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.618195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.618256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.618318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.618382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.618446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.618507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.618570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 
[2024-10-08 18:17:42.618645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.618719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.618789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.618851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.618916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.618998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.619059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.619117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.619176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.619233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.620064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.620134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.620198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.620261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.620329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.620393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.620455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.620531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.620591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.620678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.620745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.620814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.620888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.620967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.621029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.621091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.621162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.621231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.621294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.621357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.621419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.621488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.621545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.621602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.621700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.621772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.621832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.621902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.186 [2024-10-08 18:17:42.621981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.622042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.622103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.622162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.622222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.622301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.622362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.622431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.622490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.622547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.622609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.622696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.622758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.622821] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.622880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.622940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.623025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.623084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.623147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.623211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.623279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.623340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.623405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.623467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.623528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.623596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.623681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.623749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.623813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.623887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.623966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.624030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.624092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.624163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.624226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.624289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.624502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.624567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.624644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 
[2024-10-08 18:17:42.624737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.624801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.624868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.624932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.625016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.625082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.625143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.625205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.625267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.625323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.625383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.625441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.625504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.625978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.626047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.626126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.626189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.626249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.626319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.626380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.626434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.626496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.626553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.626617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.626708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.626770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.626830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.626895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.626972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.627030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.627088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.627144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.627208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.627268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.627331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.627392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.627462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.627528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.627590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.627675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.627738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.627804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.627880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.627959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.628022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.628084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.628150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.628212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.628274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.628338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.628399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.628462] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.628523] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.628587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.628648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.628741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.628812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.628881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.628941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.629002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.629063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.629131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.629206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.187 [2024-10-08 18:17:42.629287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.188 [2024-10-08 18:17:42.629358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.188 [2024-10-08 18:17:42.629431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.188 [2024-10-08 18:17:42.629495] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.188 [2024-10-08 18:17:42.629560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.188 [2024-10-08 18:17:42.629626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.188 [2024-10-08 18:17:42.629699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.188 [2024-10-08 18:17:42.629773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.188 [2024-10-08 18:17:42.629851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.188 [2024-10-08 18:17:42.629913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.188 [2024-10-08 18:17:42.629982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.188 [2024-10-08 18:17:42.630044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.188 [2024-10-08 18:17:42.630104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.188 [2024-10-08 18:17:42.630164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.188 
[2024-10-08 18:17:42.630383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.188 [2024-10-08 18:17:42.630444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.188 [2024-10-08 18:17:42.630500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.188 [2024-10-08 18:17:42.630558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.188 [2024-10-08 18:17:42.630618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.188 [2024-10-08 18:17:42.630703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.188 [2024-10-08 18:17:42.630780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.188 [2024-10-08 18:17:42.630848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.188 [2024-10-08 18:17:42.630912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.188 [2024-10-08 18:17:42.630990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.188 [2024-10-08 18:17:42.631054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.188 [2024-10-08 18:17:42.631123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.188 [2024-10-08 18:17:42.631186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.188 [2024-10-08 18:17:42.631254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.188 [2024-10-08 18:17:42.631324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.188 [2024-10-08 18:17:42.631388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.188 [2024-10-08 18:17:42.631448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.188 [2024-10-08 18:17:42.631512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.188 [2024-10-08 18:17:42.631577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.188 [2024-10-08 18:17:42.631662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.188 [2024-10-08 18:17:42.631729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.188 [2024-10-08 18:17:42.631795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.188 [2024-10-08 18:17:42.631869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.188 [2024-10-08 18:17:42.631954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.188 [2024-10-08 18:17:42.632014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.188 [2024-10-08 18:17:42.632079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:14.188 [2024-10-08 18:17:42.632140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.188 [2024-10-08 18:17:42.632201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.188 [2024-10-08 18:17:42.632266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.188 [2024-10-08 18:17:42.632329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.188 [2024-10-08 18:17:42.632392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.188 [2024-10-08 18:17:42.632456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.188 [2024-10-08 18:17:42.632515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.188 [2024-10-08 18:17:42.632584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.188 [2024-10-08 18:17:42.632679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.188 [2024-10-08 18:17:42.632745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.188 [2024-10-08 18:17:42.632804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.188 [2024-10-08 18:17:42.632862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.188 [2024-10-08 18:17:42.632925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.188 [2024-10-08 18:17:42.633000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.188 [2024-10-08 18:17:42.633064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.188 [2024-10-08 18:17:42.633124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.188 [2024-10-08 18:17:42.633193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.188 [2024-10-08 18:17:42.633252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.188 [2024-10-08 18:17:42.633326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.188 [2024-10-08 18:17:42.633383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.188 [2024-10-08 18:17:42.633446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.188 [2024-10-08 18:17:42.634287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.188 [2024-10-08 18:17:42.634357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.188 [2024-10-08 18:17:42.634423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.188 [2024-10-08 18:17:42.634484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.188 [2024-10-08 18:17:42.634542] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:14.188 [... ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 -- the same read-command error repeats continuously while the hotplug stress I/O runs; duplicate lines omitted ...]
00:07:14.191 true
00:07:14.191 [... the same ctrlr_bdev.c:361 read-command SGL length error continues to repeat; duplicate lines omitted ...]
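The message repeated above is the nvmf target rejecting reads whose requested transfer size is larger than the buffer described by the command's SGL: at 1 block with a 512-byte block size the read needs 512 bytes, but the SGL length is reported as 1. The shell sketch below only restates that arithmetic; the variable names, values, and the standalone form are illustrative assumptions, not SPDK code.

#!/usr/bin/env bash
# Illustrative re-creation of the length comparison reported in the log above.
# All names and values here are assumptions for demonstration only.
nlb_blocks=1     # blocks requested by the read command (as printed in the message)
block_size=512   # namespace block size in bytes (as printed in the message)
sgl_len=1        # buffer length described by the command's SGL (as printed)

required=$((nlb_blocks * block_size))
if (( required > sgl_len )); then
    # 512 > 1, so the target rejects the read instead of submitting it to the bdev.
    echo "Read NLB ${nlb_blocks} * block size ${block_size} > SGL length ${sgl_len}" >&2
fi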
00:07:14.468 18:17:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1079831
00:07:14.468 18:17:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
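The xtrace lines above show what drives these errors: ns_hotplug_stress checks that a test process is still alive (kill -0 1079831) and then hot-removes namespace 1 from nqn.2016-06.io.spdk:cnode1 over RPC while reads are still being issued. On the initiator side the rejected reads appear to surface as the "Read completed with error (sct=0, sc=15)" summary a few lines below; status code type 0 is the NVMe generic command status set, and status code 15 (0x0f) there is defined by the NVMe base specification as "Data SGL Length Invalid". The decode below is a self-contained sketch using a simplified stand-in struct, not the real SPDK/NVMe headers.

/* Sketch of decoding the completion status reported as "(sct=0, sc=15)".
 * The struct is a simplified stand-in; the real NVMe status field also
 * carries CRD, M and DNR bits. */
#include <stdio.h>

struct cpl_status {
        unsigned int sc  : 8;   /* status code */
        unsigned int sct : 3;   /* status code type (0 = generic command status) */
};

int
main(void)
{
        struct cpl_status st = { .sc = 0x0f, .sct = 0 };        /* logged as sct=0, sc=15 */

        if (st.sct == 0 && st.sc == 0x0f) {
                printf("Generic command status 0x0f: Data SGL Length Invalid\n");
        }
        return 0;
}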
00:07:14.470 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:07:14.473
[2024-10-08 18:17:42.706012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.473 [2024-10-08 18:17:42.706072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.473 [2024-10-08 18:17:42.706128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.473 [2024-10-08 18:17:42.706192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.473 [2024-10-08 18:17:42.706250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.473 [2024-10-08 18:17:42.706309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.473 [2024-10-08 18:17:42.706367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.473 [2024-10-08 18:17:42.706423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.473 [2024-10-08 18:17:42.706479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.473 [2024-10-08 18:17:42.706548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.473 [2024-10-08 18:17:42.706613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.473 [2024-10-08 18:17:42.706700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.473 [2024-10-08 18:17:42.706775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.473 [2024-10-08 18:17:42.706836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.473 [2024-10-08 18:17:42.706901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.473 [2024-10-08 18:17:42.706975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.473 [2024-10-08 18:17:42.707039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.473 [2024-10-08 18:17:42.707102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.473 [2024-10-08 18:17:42.707162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.473 [2024-10-08 18:17:42.707219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.473 [2024-10-08 18:17:42.707277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.473 [2024-10-08 18:17:42.707337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.473 [2024-10-08 18:17:42.707407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.473 [2024-10-08 18:17:42.707474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.473 [2024-10-08 18:17:42.707532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.473 [2024-10-08 18:17:42.707594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:14.473 [2024-10-08 18:17:42.707672] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.473 [2024-10-08 18:17:42.707750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.473 [2024-10-08 18:17:42.707979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.473 [2024-10-08 18:17:42.708062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.473 [2024-10-08 18:17:42.708119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.473 [2024-10-08 18:17:42.708173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.473 [2024-10-08 18:17:42.708229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.473 [2024-10-08 18:17:42.708292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.473 [2024-10-08 18:17:42.708353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.473 [2024-10-08 18:17:42.708415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.473 [2024-10-08 18:17:42.708487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.473 [2024-10-08 18:17:42.708546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.473 [2024-10-08 18:17:42.708606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.473 [2024-10-08 18:17:42.708689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.473 [2024-10-08 18:17:42.708754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.473 [2024-10-08 18:17:42.708819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.473 [2024-10-08 18:17:42.708879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.473 [2024-10-08 18:17:42.708941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.473 [2024-10-08 18:17:42.709017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.473 [2024-10-08 18:17:42.709079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.473 [2024-10-08 18:17:42.709144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.473 [2024-10-08 18:17:42.709203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.473 [2024-10-08 18:17:42.709263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.473 [2024-10-08 18:17:42.709327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.473 [2024-10-08 18:17:42.709392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.473 [2024-10-08 18:17:42.709451] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.473 [2024-10-08 18:17:42.709512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.473 [2024-10-08 18:17:42.709575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.473 [2024-10-08 18:17:42.709659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.473 [2024-10-08 18:17:42.709725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.709792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.709854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.709917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.710001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.710062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.710123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.710182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.710242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.710300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.710362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.710427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.710487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.710546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.710617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.710709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.710773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.710834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.710897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.710976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.711038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.711096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 
[2024-10-08 18:17:42.711158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.711215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.711280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.711337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.711398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.711450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.711509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.711575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.711668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.711738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.711798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.711870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.711937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.712007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.712515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.712577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.712659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.712737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.712800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.712862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.712921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.713000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.713073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.713124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.713180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.713235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.713292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.713347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.713405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.713463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.713520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.713581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.713665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.713727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.713788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.713848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.713907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.713957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.714022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.714070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.714119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.714168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.714217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.714266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.714315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.714363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.714412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.714461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.714509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.714558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.714640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.714713] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.714776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.714845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.714912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.714989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.715050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.715120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.715181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.715246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.715313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.715384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.715444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.715502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.715561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.715644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.715719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.715781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.715846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.715903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.715980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.716038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.716101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.716160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.716228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.716284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.716338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 
[2024-10-08 18:17:42.716404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.717264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.717330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.717393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.474 [2024-10-08 18:17:42.717452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.717512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.717573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.717648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.717725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.717788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.717854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.717929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.718005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.718070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.718130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.718191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.718254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.718312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.718374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.718432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.718491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.718550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.718617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.718701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.718763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.718827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.718888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.718969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.719028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.719087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.719144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.719203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.719264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.719321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.719381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.719441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.719501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.719557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.719615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.719699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.719766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.719830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.719889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.719957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.720031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.720083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.720142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.720207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.720264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.720327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.720385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.720448] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.720503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.720559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.720619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.720704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.720767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.720830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.720889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.720963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.721035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.721090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.721144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.721201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.721802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.721859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.721913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.721979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.722044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.722093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.722143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.722192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.722241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.722290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.722339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.722399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.722456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 
[2024-10-08 18:17:42.722513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.722568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.722624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.722709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.722772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.722833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.722899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.722983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.723046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.723104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.723162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.723225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.723291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.723352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.723409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.723470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.723531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.723595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.723681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.723743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.723803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.723863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.723927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.724005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.724066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.724120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.724173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.724240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.724300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.724368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.724429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.475 [2024-10-08 18:17:42.724493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.476 [2024-10-08 18:17:42.724559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.476 [2024-10-08 18:17:42.724627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.476 [2024-10-08 18:17:42.724714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.476 [2024-10-08 18:17:42.724776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.476 [2024-10-08 18:17:42.724836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.476 [2024-10-08 18:17:42.724900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.476 [2024-10-08 18:17:42.724981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.476 [2024-10-08 18:17:42.725041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.476 [2024-10-08 18:17:42.725102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.476 [2024-10-08 18:17:42.725161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.476 [2024-10-08 18:17:42.725219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.476 [2024-10-08 18:17:42.725278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.476 [2024-10-08 18:17:42.725344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.476 [2024-10-08 18:17:42.725404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.476 [2024-10-08 18:17:42.725461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.476 [2024-10-08 18:17:42.725520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.476 [2024-10-08 18:17:42.725579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.476 [2024-10-08 18:17:42.725669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.476 [2024-10-08 18:17:42.725758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.476 [2024-10-08 18:17:42.726042] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.476 [2024-10-08 18:17:42.726105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.476 [2024-10-08 18:17:42.726166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.476 [2024-10-08 18:17:42.726225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.476 [2024-10-08 18:17:42.726289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.476 [2024-10-08 18:17:42.726356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.476 [2024-10-08 18:17:42.726416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.476 [2024-10-08 18:17:42.726475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.476 [2024-10-08 18:17:42.726533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.476 [2024-10-08 18:17:42.726597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.476 [2024-10-08 18:17:42.726691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.476 [2024-10-08 18:17:42.726771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.476 [2024-10-08 18:17:42.726833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.476 [2024-10-08 18:17:42.726910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.476 [2024-10-08 18:17:42.726987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.476 [2024-10-08 18:17:42.727063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.476 [2024-10-08 18:17:42.727119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.476 [2024-10-08 18:17:42.727178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.476 [2024-10-08 18:17:42.727239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.476 [2024-10-08 18:17:42.727300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.476 [2024-10-08 18:17:42.727362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.476 [2024-10-08 18:17:42.727418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.476 [2024-10-08 18:17:42.727481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.476 [2024-10-08 18:17:42.727539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.476 [2024-10-08 18:17:42.727601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.476 [2024-10-08 18:17:42.727688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.476 
[2024-10-08 18:17:42.727754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.476 [2024-10-08 18:17:42.727817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.476 [2024-10-08 18:17:42.727877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.476 [2024-10-08 18:17:42.727940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.476 [2024-10-08 18:17:42.728021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.476 [2024-10-08 18:17:42.728072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.476 [2024-10-08 18:17:42.728133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.476 [2024-10-08 18:17:42.728201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.476 [2024-10-08 18:17:42.728258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.476 [2024-10-08 18:17:42.728321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.476 [2024-10-08 18:17:42.728377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.476 [2024-10-08 18:17:42.728443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.476 [2024-10-08 18:17:42.728503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.476 [2024-10-08 18:17:42.728556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.476 [2024-10-08 18:17:42.728611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.476 [2024-10-08 18:17:42.728692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.476 [2024-10-08 18:17:42.728755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.476 [2024-10-08 18:17:42.728812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.476 [2024-10-08 18:17:42.728872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.476 [2024-10-08 18:17:42.728930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.476 [2024-10-08 18:17:42.729008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.476 [2024-10-08 18:17:42.729069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.476 [2024-10-08 18:17:42.729125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.476 [2024-10-08 18:17:42.729174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.476 [2024-10-08 18:17:42.729233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.476 [2024-10-08 18:17:42.729281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1
00:07:14.476 [2024-10-08 18:17:42.729339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the preceding *ERROR* line repeats verbatim, timestamps 18:17:42.729395 through 18:17:42.762288 ...]
00:07:14.481 Message suppressed 999 times: [2024-10-08 18:17:42.762350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:14.481 Read completed with error (sct=0, sc=15)
[... the same *ERROR* line repeats verbatim, timestamps 18:17:42.762413 through 18:17:42.766382 ...]
00:07:14.481 [2024-10-08 18:17:42.766447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*:
Read NLB 1 * block size 512 > SGL length 1 00:07:14.481 [2024-10-08 18:17:42.766506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.481 [2024-10-08 18:17:42.766566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.481 [2024-10-08 18:17:42.766625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.481 [2024-10-08 18:17:42.766712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.481 [2024-10-08 18:17:42.766790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.481 [2024-10-08 18:17:42.766853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.481 [2024-10-08 18:17:42.766914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.481 [2024-10-08 18:17:42.766990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.481 [2024-10-08 18:17:42.767053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.481 [2024-10-08 18:17:42.767114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.481 [2024-10-08 18:17:42.767168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.767228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.767287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.767344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.767403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.767460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.767513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.767572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.767626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.767715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.767776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.767842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.767897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.767972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.768181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.768241] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.768298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.768356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.768411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.768465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.768525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.768586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.768667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.768736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.768805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.768864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.768926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.769010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.769080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.769138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.769198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.769258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.769325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.769394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.769451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.769510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.769570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.769628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.769705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.769764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.769824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 
[2024-10-08 18:17:42.769891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.769966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.770022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.770080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.770136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.770196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.770257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.770316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.770382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.770441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.770502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.770565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.770641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.770741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.770808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.770870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.770933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.771025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.771087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.771170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.771238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.771298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.771357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.771419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.771481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.771541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.771602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.771690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.771759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.771820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.771886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.771962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.772022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.772083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.772147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.772207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.773108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.773170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.773227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.773283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.773344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.773404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.773467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.773527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.773588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.773671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.773734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.773799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.773860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.773923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.773997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.774057] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.774121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.774181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.774243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.774302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.774364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.774422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.774477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.774533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.774592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.774673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.774736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.482 [2024-10-08 18:17:42.774794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.774860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.774914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.774971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.775044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.775106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.775163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.775225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.775279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.775337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.775395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.775455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.775516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.775574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 
[2024-10-08 18:17:42.775646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.775716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.775774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.775833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.775896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.775973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.776034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.776100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.776158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.776221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.776285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.776353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.776412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.776474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.776533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.776597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.776690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.776759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.776823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.776886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.776968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.777049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.777111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.777348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.777410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.777474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.777541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.777602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.777688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.777775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.777849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.777916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.778009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.778076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.778142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.778204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.778261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.778321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.778382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.778444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.778507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.778569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.778627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.778707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.778771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.778833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.778894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.778962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.779037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.779102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.779159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.779214] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.779270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.779328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.779384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.779443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.779502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.779560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.779619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.779707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.779762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.779821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.779883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.779942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.780013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.780072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.780132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.780191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.780248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.780304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.780362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.780422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.780477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.780539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.780605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.780690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.780752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 
[2024-10-08 18:17:42.780817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.780909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.780996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.781057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.781118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.781184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.781264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.781325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.781387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.782218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.782272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.782331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.483 [2024-10-08 18:17:42.782390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.782447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.782512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.782575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.782626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.782730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.782794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.782858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.782919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.783011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.783073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.783131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.783188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.783244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.783304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.783352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.783404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.783467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.783525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.783587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.783680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.783742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.783804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.783866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.783932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.784008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.784068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.784129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.784191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.784261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.784321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.784380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.784439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.784502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.784564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.784622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.784709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.784772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.784837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.784907] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.784981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.785042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.785102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.785164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.785232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.785292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.785351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.785408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.785472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.785532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.785592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.785681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.785741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.785806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.785868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.785930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.786008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.786067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.786126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.786192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.786248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.786447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.786512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.786569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.786625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 
[2024-10-08 18:17:42.786724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.786790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.786855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.786921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.787003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.787084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.787144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.787205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.787269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.787334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.787397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.787455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.787518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.788060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.788125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.788194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.788253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.788318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.788383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.788442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.788500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.788558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.788634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.788719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.788800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.788855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.788915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.788986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.789061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.789130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.789193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.789253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.789321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.789380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.789437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.789498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.789558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.789615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.789699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.484 [2024-10-08 18:17:42.789761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.485 [2024-10-08 18:17:42.789821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.485 [2024-10-08 18:17:42.789884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.485 [2024-10-08 18:17:42.789946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.485 [2024-10-08 18:17:42.790024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.485 [2024-10-08 18:17:42.790083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.485 [2024-10-08 18:17:42.790142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.485 [2024-10-08 18:17:42.790201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.485 [2024-10-08 18:17:42.790261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.485 [2024-10-08 18:17:42.790322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.485 [2024-10-08 18:17:42.790383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.485 [2024-10-08 18:17:42.790458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.485 [2024-10-08 18:17:42.790520] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.485 [2024-10-08 18:17:42.790583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.485 [2024-10-08 18:17:42.790669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.485 [2024-10-08 18:17:42.790733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.485 [2024-10-08 18:17:42.790798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.485 [2024-10-08 18:17:42.790867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.485 [2024-10-08 18:17:42.790934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.485 [2024-10-08 18:17:42.791012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.485 [2024-10-08 18:17:42.791076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.485 [2024-10-08 18:17:42.791141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.485 [2024-10-08 18:17:42.791207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.485 [2024-10-08 18:17:42.791260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.485 [2024-10-08 18:17:42.791317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.485 [2024-10-08 18:17:42.791381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.485 [2024-10-08 18:17:42.791447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.485 [2024-10-08 18:17:42.791509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.485 [2024-10-08 18:17:42.791573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.485 [2024-10-08 18:17:42.791632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.485 [2024-10-08 18:17:42.791714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.485 [2024-10-08 18:17:42.791771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.485 [2024-10-08 18:17:42.791839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.485 [2024-10-08 18:17:42.791909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.485 [2024-10-08 18:17:42.791982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.485 [2024-10-08 18:17:42.792042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.485 [2024-10-08 18:17:42.792103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.485 [2024-10-08 18:17:42.792162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.485 
00:07:14.485 [2024-10-08 18:17:42.792375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:07:14.490 (last message repeated continuously through [2024-10-08 18:17:42.831489]) 
[2024-10-08 18:17:42.831551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.490 [2024-10-08 18:17:42.831610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.490 [2024-10-08 18:17:42.831693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.490 [2024-10-08 18:17:42.831918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.490 [2024-10-08 18:17:42.831995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.490 [2024-10-08 18:17:42.832058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.490 [2024-10-08 18:17:42.832125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.490 [2024-10-08 18:17:42.832183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.490 [2024-10-08 18:17:42.832247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.490 [2024-10-08 18:17:42.832299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.490 [2024-10-08 18:17:42.832357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.490 [2024-10-08 18:17:42.832422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.490 [2024-10-08 18:17:42.832481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.490 [2024-10-08 18:17:42.832542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.490 [2024-10-08 18:17:42.832604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.490 [2024-10-08 18:17:42.832690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.490 [2024-10-08 18:17:42.832752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.490 [2024-10-08 18:17:42.832811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.490 [2024-10-08 18:17:42.832869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.490 [2024-10-08 18:17:42.832931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.490 [2024-10-08 18:17:42.833450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.490 [2024-10-08 18:17:42.833512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.490 [2024-10-08 18:17:42.833573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.490 [2024-10-08 18:17:42.833647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.490 [2024-10-08 18:17:42.833718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.491 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:07:14.491 [2024-10-08 
18:17:42.833780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.491 [2024-10-08 18:17:42.833841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.491 [2024-10-08 18:17:42.833911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.491 [2024-10-08 18:17:42.833987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.491 [2024-10-08 18:17:42.834050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.491 [2024-10-08 18:17:42.834111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.491 [2024-10-08 18:17:42.834179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.491 [2024-10-08 18:17:42.834241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.491 [2024-10-08 18:17:42.834305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.491 [2024-10-08 18:17:42.834365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.491 [2024-10-08 18:17:42.834426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.491 [2024-10-08 18:17:42.834495] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.491 [2024-10-08 18:17:42.834564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.491 [2024-10-08 18:17:42.834625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.491 [2024-10-08 18:17:42.834711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.491 [2024-10-08 18:17:42.834773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.491 [2024-10-08 18:17:42.834836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.491 [2024-10-08 18:17:42.834904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.491 [2024-10-08 18:17:42.834980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.491 [2024-10-08 18:17:42.835047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.491 [2024-10-08 18:17:42.835106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.491 [2024-10-08 18:17:42.835167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.491 [2024-10-08 18:17:42.835231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.491 [2024-10-08 18:17:42.835287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.491 [2024-10-08 18:17:42.835351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.491 [2024-10-08 18:17:42.835410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:07:14.491 [2024-10-08 18:17:42.835467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.491 [2024-10-08 18:17:42.835527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.491 [2024-10-08 18:17:42.835587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.491 [2024-10-08 18:17:42.835669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.491 [2024-10-08 18:17:42.835734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.491 [2024-10-08 18:17:42.835805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.491 [2024-10-08 18:17:42.835866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.491 [2024-10-08 18:17:42.835928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.491 [2024-10-08 18:17:42.836002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.491 [2024-10-08 18:17:42.836066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.491 [2024-10-08 18:17:42.836126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.491 [2024-10-08 18:17:42.836190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.491 [2024-10-08 18:17:42.836257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.491 [2024-10-08 18:17:42.836317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.491 [2024-10-08 18:17:42.836381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:14.491 [2024-10-08 18:17:42.836441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:15.427 18:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:15.427 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:15.427 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:15.687 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:15.687 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:15.687 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:15.687 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:15.687 Initializing NVMe Controllers 00:07:15.687 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:15.687 Controller IO queue size 128, less than required. 00:07:15.687 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:15.687 Controller IO queue size 128, less than required. 00:07:15.687 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
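Note on the error burst above: nvmf_bdev_ctrlr_read_cmd rejects a read whenever the number of logical blocks times the block size is larger than the data buffer described by the command's SGL, which is exactly the comparison the message prints (1 block * 512 bytes > 1 byte of SGL). A minimal shell sketch of that comparison, using illustrative variable names rather than SPDK's own identifiers:

# Illustrative sketch only; nlb, block_size and sgl_length are made-up names.
nlb=1
block_size=512
sgl_length=1
if (( nlb * block_size > sgl_length )); then
    echo "Read NLB ${nlb} * block size ${block_size} > SGL length ${sgl_length}" >&2
fi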
00:07:15.688 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:15.688 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:07:15.688 Initialization complete. Launching workers. 00:07:15.688 ======================================================== 00:07:15.688 Latency(us) 00:07:15.688 Device Information : IOPS MiB/s Average min max 00:07:15.688 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4395.48 2.15 21227.51 2603.89 1015405.16 00:07:15.688 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 14619.07 7.14 8755.93 2746.27 446868.88 00:07:15.688 ======================================================== 00:07:15.688 Total : 19014.55 9.28 11638.91 2603.89 1015405.16 00:07:15.688 00:07:15.688 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:15.688 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:16.627 true 00:07:16.627 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1079831 00:07:16.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1079831) - No such process 00:07:16.627 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1079831 00:07:16.627 18:17:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:16.887 18:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:17.457 18:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:07:17.457 18:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:07:17.457 18:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:07:17.457 18:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:17.457 18:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:07:17.716 null0 00:07:17.976 18:17:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:17.976 18:17:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:17.976 18:17:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:07:18.546 null1 00:07:18.546 18:17:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:18.546 18:17:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:18.546 18:17:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:07:18.805 null2 00:07:18.805 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:18.805 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:18.805 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:19.374 null3 00:07:19.374 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:19.374 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:19.374 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:19.943 null4 00:07:19.943 18:17:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:19.943 18:17:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:19.943 18:17:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:20.202 null5 00:07:20.202 18:17:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:20.202 18:17:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:20.202 18:17:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:20.773 null6 00:07:20.773 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:20.773 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:20.773 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:21.371 null7 00:07:21.371 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:21.371 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:21.371 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:21.371 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:21.371 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
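As a quick cross-check of the latency summary printed above: the Total row sums the per-namespace IOPS, and its average latency is the IOPS-weighted mean of the two per-namespace averages. A small awk sketch, with the numbers copied from the table (commentary only, not test output):

awk 'BEGIN {
    iops1 = 4395.48;  avg1 = 21227.51;   # NSID 1 row from the table above
    iops2 = 14619.07; avg2 = 8755.93;    # NSID 2 row from the table above
    total = iops1 + iops2
    printf "Total IOPS %.2f, IOPS-weighted average %.2f us\n", total, (iops1*avg1 + iops2*avg2)/total
}'
# Reproduces the Total row above (19014.55 IOPS, 11638.91 us average) to within rounding.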
00:07:21.371 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:21.371 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:21.371 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:21.371 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:21.371 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:21.371 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.371 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:21.371 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:21.371 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:21.371 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:21.371 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:21.371 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:21.371 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:21.371 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.371 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:21.371 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:21.371 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:21.371 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:21.371 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:21.371 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:21.371 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:21.371 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.371 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:21.371 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:21.371 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:21.372 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:21.372 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:21.372 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:21.372 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:21.372 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.372 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:21.372 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:21.372 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:21.372 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:21.372 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:21.372 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:21.372 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:21.372 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.372 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:21.372 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:21.372 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:21.372 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:21.372 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:21.372 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:21.372 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:21.372 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.372 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:21.372 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
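The xtrace above and below is the namespace hot-plug stress loop from ns_hotplug_stress.sh: eight null bdevs are created, then eight add_remove jobs run in parallel, each repeatedly attaching its bdev as a namespace of nqn.2016-06.io.spdk:cnode1 and detaching it again, and the parent shell waits on all of the job pids. A condensed sketch of that pattern as it appears in the trace (the real script may differ in detail; the rpc.py path and arguments are taken from the trace itself):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

add_remove() {                                   # mirrors sh@14-18 in the trace
    local nsid=$1 bdev=$2
    for ((i = 0; i < 10; i++)); do
        "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"
        "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid"
    done
}

nthreads=8
pids=()
for ((i = 0; i < nthreads; i++)); do             # sh@59-60: one null bdev per worker
    "$rpc" bdev_null_create "null$i" 100 4096    # 100 MB null bdev, 4096-byte block size
done
for ((i = 0; i < nthreads; i++)); do             # sh@62-64: launch the workers in the background
    add_remove $((i + 1)) "null$i" &
    pids+=($!)
done
wait "${pids[@]}"                                # sh@66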
00:07:21.372 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:21.372 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:21.372 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:21.372 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:21.372 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:21.372 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.372 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:21.372 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:21.372 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:21.372 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:21.372 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:21.372 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:21.372 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:21.372 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1084130 1084131 1084133 1084135 1084137 1084139 1084141 1084143 00:07:21.372 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.372 18:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:21.690 18:17:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:21.690 18:17:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:21.690 18:17:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:21.690 18:17:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.690 18:17:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:21.690 18:17:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:21.690 18:17:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:21.690 18:17:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:21.973 18:17:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.973 18:17:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.973 18:17:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:21.973 18:17:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.973 18:17:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.973 18:17:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:21.973 18:17:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.973 18:17:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.973 18:17:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:21.973 18:17:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.973 18:17:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.973 18:17:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:21.973 18:17:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.973 18:17:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.973 18:17:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:21.973 18:17:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.973 18:17:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.973 18:17:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:21.973 18:17:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:07:21.973 18:17:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.973 18:17:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:21.973 18:17:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.973 18:17:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.973 18:17:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:22.231 18:17:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:22.231 18:17:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:22.231 18:17:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:22.232 18:17:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:22.232 18:17:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:22.232 18:17:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:22.232 18:17:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:22.232 18:17:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.490 18:17:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.490 18:17:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.490 18:17:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:22.490 18:17:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.490 18:17:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.490 18:17:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:22.490 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.490 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.491 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:22.491 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.491 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.491 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:22.491 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.491 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.749 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:22.749 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.749 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.749 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:22.749 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.749 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.749 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:22.749 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.749 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.749 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:22.749 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:22.749 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.008 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:23.008 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:23.008 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:23.008 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:23.008 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:23.008 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:23.008 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.008 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.008 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:23.266 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.266 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.266 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:23.266 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.266 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.266 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:23.266 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.266 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.266 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:23.266 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.266 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.266 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:23.266 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.266 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.266 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.266 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:23.266 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.266 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:23.266 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.266 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.266 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:23.266 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:23.524 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.525 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:23.525 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:23.525 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:23.525 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:23.525 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:23.525 18:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:23.783 18:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:07:23.783 18:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.783 18:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:23.783 18:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.783 18:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.783 18:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:23.783 18:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.783 18:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.783 18:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:23.783 18:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.783 18:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.783 18:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:23.783 18:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.783 18:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.783 18:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:23.783 18:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.783 18:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.783 18:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:23.783 18:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.783 18:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.783 18:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:23.783 18:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.783 18:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.783 18:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:24.041 18:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:24.041 18:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:24.041 18:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:24.041 18:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:24.299 18:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:24.299 18:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:24.299 18:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:24.299 18:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:24.299 18:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.299 18:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.299 18:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:24.299 18:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.299 18:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.299 18:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:24.558 18:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.558 18:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.558 18:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:24.558 18:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:07:24.558 18:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.558 18:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:24.558 18:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.558 18:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.558 18:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:24.558 18:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.558 18:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.558 18:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:24.558 18:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.558 18:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.558 18:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:24.558 18:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.558 18:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.558 18:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:24.558 18:17:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:24.816 18:17:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:24.816 18:17:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:24.816 18:17:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:24.816 18:17:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:24.816 18:17:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:24.816 18:17:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:24.816 18:17:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:25.074 18:17:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.074 18:17:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.074 18:17:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:25.074 18:17:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.074 18:17:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.074 18:17:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:25.074 18:17:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.074 18:17:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.074 18:17:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:25.074 18:17:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.074 18:17:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.074 18:17:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:25.074 18:17:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.074 18:17:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.074 18:17:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:25.332 18:17:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.332 18:17:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.332 18:17:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:25.332 18:17:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:07:25.332 18:17:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.332 18:17:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:25.332 18:17:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.332 18:17:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.332 18:17:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:25.332 18:17:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:25.332 18:17:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:25.332 18:17:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:25.332 18:17:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:25.591 18:17:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:25.591 18:17:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:25.591 18:17:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:25.591 18:17:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:25.591 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.591 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.591 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:25.591 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.591 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.591 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:25.591 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.591 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.591 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:25.591 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.591 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.591 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:25.849 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.849 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.849 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:25.850 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.850 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.850 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:25.850 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.850 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.850 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:25.850 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.850 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.850 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:26.107 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:26.107 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:26.107 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:26.107 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:26.107 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:26.107 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:26.365 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:26.365 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:26.365 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.365 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.365 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:26.365 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.365 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.365 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:26.365 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.365 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.365 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:26.365 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.365 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.366 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:26.366 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.366 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.366 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:26.366 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.366 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.366 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:26.624 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.624 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.624 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:26.624 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.624 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.624 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:26.624 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:26.624 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:26.624 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:26.624 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:26.624 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:26.624 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:26.883 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:26.883 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:26.883 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
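Annotation: the null0..null7 bdevs attached above are not created in this excerpt; a setup step earlier in the test has to have registered them with the target before the loop can map them to namespaces. A hedged sketch of that prerequisite using bdev_null_create, with a placeholder 100 MiB / 512-byte geometry (the test's own sizes may differ):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Illustrative prerequisite only: register the eight null bdevs that the
# hotplug loop attaches and detaches. Size and block size are placeholders.
for n in {0..7}; do
    "$rpc" bdev_null_create "null$n" 100 512
done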
00:07:26.883 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.883 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:26.883 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.883 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.883 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:27.142 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.142 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.142 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:27.142 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.142 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.142 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:27.142 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.142 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.142 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:27.142 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.142 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.142 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:27.142 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.142 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.142 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.142 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:27.142 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.142 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:27.142 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:27.401 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:27.401 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:27.401 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:27.401 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:27.401 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:27.401 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:27.401 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:27.401 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.401 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.660 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.660 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.660 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.660 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.660 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.660 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.660 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.660 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.660 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.660 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.660 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.660 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.660 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.660 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.660 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:27.660 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:27.660 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:27.660 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:07:27.660 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:27.660 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:07:27.660 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:27.660 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:27.918 rmmod nvme_tcp 00:07:27.918 rmmod nvme_fabrics 00:07:27.918 rmmod nvme_keyring 00:07:27.918 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:27.918 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:07:27.918 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:07:27.918 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 1079289 ']' 00:07:27.918 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 1079289 00:07:27.918 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 1079289 ']' 00:07:27.918 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 1079289 00:07:27.918 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:07:27.918 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:27.918 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1079289 00:07:27.918 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:27.918 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:27.918 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1079289' 00:07:27.918 killing process with pid 1079289 00:07:27.918 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 1079289 00:07:27.918 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 1079289 00:07:28.487 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:28.487 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:07:28.487 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:07:28.487 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:07:28.487 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-save 00:07:28.487 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:07:28.487 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-restore 00:07:28.487 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:28.487 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:28.487 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:28.487 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:28.487 18:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:30.397 18:17:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:30.397 00:07:30.397 real 0m52.678s 00:07:30.397 user 3m58.246s 00:07:30.397 sys 0m18.587s 00:07:30.397 18:17:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:30.397 18:17:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:30.397 ************************************ 00:07:30.397 END TEST nvmf_ns_hotplug_stress 00:07:30.397 ************************************ 00:07:30.397 18:17:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:30.397 18:17:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:30.397 18:17:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:30.397 18:17:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:30.397 ************************************ 00:07:30.397 START TEST nvmf_delete_subsystem 00:07:30.397 ************************************ 00:07:30.397 18:17:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:30.657 * Looking for test storage... 
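Annotation: before run_test moves on to nvmf_delete_subsystem, the nvmftestfini trace above unwinds the previous test's environment: the initiator-side kernel modules are unloaded, the nvmf_tgt reactor (pid 1079289 in this run) is looked up by name and killed, the SPDK_NVMF-tagged iptables rules are stripped, and the target's network namespace is dismantled. A condensed sketch of that teardown as it appears in the log, not the literal common.sh implementation; remove_spdk_ns runs with xtrace suppressed, so the netns deletion shown here is an assumed effect.

sync
modprobe -v -r nvme-tcp        # trace shows nvme_tcp, nvme_fabrics, nvme_keyring unloading
modprobe -v -r nvme-fabrics

pid=1079289                                 # pid reported in the log
pname=$(ps --no-headers -o comm= "$pid")    # reactor_1 in this run; the real
                                            # helper also branches on sudo wrappers
echo "killing process with pid $pid"
kill "$pid"

# Strip only the SPDK_NVMF-tagged firewall rules, then drop the test netns.
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true   # assumed effect of remove_spdk_ns
ip -4 addr flush cvl_0_1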
00:07:30.657 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:30.657 18:17:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:30.657 18:17:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lcov --version 00:07:30.657 18:17:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:30.657 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:30.657 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:30.657 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:30.658 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:30.658 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:07:30.658 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:07:30.658 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:07:30.658 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:07:30.658 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:07:30.658 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:07:30.658 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:07:30.658 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:30.658 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:07:30.658 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:07:30.658 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:30.658 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:30.658 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:07:30.658 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:07:30.658 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:30.658 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:07:30.658 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:07:30.658 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:07:30.658 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:07:30.658 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:30.658 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:07:30.658 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:07:30.658 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:30.658 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:30.658 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:07:30.658 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:30.658 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:30.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.658 --rc genhtml_branch_coverage=1 00:07:30.658 --rc genhtml_function_coverage=1 00:07:30.658 --rc genhtml_legend=1 00:07:30.658 --rc geninfo_all_blocks=1 00:07:30.658 --rc geninfo_unexecuted_blocks=1 00:07:30.658 00:07:30.658 ' 00:07:30.658 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:30.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.658 --rc genhtml_branch_coverage=1 00:07:30.658 --rc genhtml_function_coverage=1 00:07:30.658 --rc genhtml_legend=1 00:07:30.658 --rc geninfo_all_blocks=1 00:07:30.658 --rc geninfo_unexecuted_blocks=1 00:07:30.658 00:07:30.658 ' 00:07:30.658 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:30.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.658 --rc genhtml_branch_coverage=1 00:07:30.658 --rc genhtml_function_coverage=1 00:07:30.658 --rc genhtml_legend=1 00:07:30.658 --rc geninfo_all_blocks=1 00:07:30.658 --rc geninfo_unexecuted_blocks=1 00:07:30.658 00:07:30.658 ' 00:07:30.658 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:30.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.658 --rc genhtml_branch_coverage=1 00:07:30.658 --rc genhtml_function_coverage=1 00:07:30.658 --rc genhtml_legend=1 00:07:30.658 --rc geninfo_all_blocks=1 00:07:30.658 --rc geninfo_unexecuted_blocks=1 00:07:30.658 00:07:30.658 ' 00:07:30.658 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:30.658 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:30.658 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:30.658 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:30.658 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:30.658 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:30.658 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:30.658 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:30.658 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:30.658 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:30.658 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:30.658 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:30.658 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:30.658 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:07:30.658 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:30.658 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:30.658 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:30.658 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:30.658 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:30.658 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:07:30.658 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:30.658 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:30.658 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:30.658 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.658 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.658 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.658 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:30.658 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.658 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:07:30.658 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:30.658 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:30.658 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:30.658 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:30.658 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:30.658 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:30.658 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:30.659 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:30.659 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:30.659 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:30.659 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:07:30.659 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:30.659 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:30.659 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:30.659 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:30.659 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:30.659 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:30.659 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:30.659 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:30.659 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:30.659 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:07:30.659 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:07:30.659 18:17:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:33.952 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:33.952 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:07:33.952 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:33.952 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:33.952 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:33.952 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:33.952 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:33.952 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:07:33.952 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:33.952 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:07:33.952 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:07:33.952 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:07:33.952 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:07:33.952 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:07:33.952 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:07:33.952 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:33.952 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:33.952 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:33.952 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:33.952 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:33.952 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:33.952 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:33.952 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:33.952 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:33.952 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:33.952 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:07:33.953 Found 0000:84:00.0 (0x8086 - 0x159b) 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:33.953 
18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:07:33.953 Found 0000:84:00.1 (0x8086 - 0x159b) 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:07:33.953 Found net devices under 0000:84:00.0: cvl_0_0 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:07:33.953 Found net devices under 0000:84:00.1: cvl_0_1 
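Annotation: prepare_net_devs has now resolved both e810 functions (PCI device 0x159b) to their netdevs, cvl_0_0 and cvl_0_1, by looking under each device's sysfs net/ directory. The trace that follows uses those two ports to build the single-host test topology: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2/24), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1/24), TCP port 4420 is opened with an SPDK_NVMF-tagged rule, both directions are pinged, and nvmf_tgt is started inside the namespace. A condensed sketch of that discovery plus setup, reassembled from the commands visible in the trace (the iptables comment string is shortened here):

# 1. PCI-to-netdev mapping, hard-coded to the two functions found in the log.
for pci in 0000:84:00.0 0000:84:00.1; do
    for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
        [[ -e $netdir ]] || continue
        echo "Found net devices under $pci: ${netdir##*/}"
    done
done

# 2. Single-host NVMe/TCP topology: target port in a private netns,
#    initiator port left in the root namespace.
target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk

ip -4 addr flush "$target_if"
ip -4 addr flush "$initiator_if"
ip netns add "$ns"
ip link set "$target_if" netns "$ns"
ip addr add 10.0.0.1/24 dev "$initiator_if"
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
ip link set "$initiator_if" up
ip netns exec "$ns" ip link set "$target_if" up
ip netns exec "$ns" ip link set lo up

# Open the NVMe/TCP listener port; the tag lets teardown remove exactly this rule.
iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment SPDK_NVMF

ping -c 1 10.0.0.2                        # root ns -> target ns
ip netns exec "$ns" ping -c 1 10.0.0.1    # target ns -> root ns

# 3. Load the initiator module and start the target inside the namespace on
#    cores 0x3, as nvmfappstart does further down in the trace.
modprobe nvme-tcp
ip netns exec "$ns" \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0x3 &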
00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:33.953 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:33.953 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.291 ms 00:07:33.953 00:07:33.953 --- 10.0.0.2 ping statistics --- 00:07:33.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:33.953 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:33.953 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:33.953 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:07:33.953 00:07:33.953 --- 10.0.0.1 ping statistics --- 00:07:33.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:33.953 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=1087282 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 1087282 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 1087282 ']' 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:33.953 18:18:02 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:33.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:33.953 18:18:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:33.953 [2024-10-08 18:18:02.317982] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:07:33.953 [2024-10-08 18:18:02.318064] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:33.953 [2024-10-08 18:18:02.422919] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:34.214 [2024-10-08 18:18:02.638112] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:34.214 [2024-10-08 18:18:02.638224] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:34.214 [2024-10-08 18:18:02.638260] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:34.214 [2024-10-08 18:18:02.638290] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:34.214 [2024-10-08 18:18:02.638317] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:34.214 [2024-10-08 18:18:02.640149] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.214 [2024-10-08 18:18:02.640167] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.154 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:35.154 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:07:35.154 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:35.154 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:35.154 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:35.154 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:35.154 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:35.154 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.155 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:35.155 [2024-10-08 18:18:03.534843] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:35.155 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.155 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:35.155 18:18:03 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.155 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:35.155 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.155 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:35.155 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.155 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:35.155 [2024-10-08 18:18:03.562765] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:35.155 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.155 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:35.155 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.155 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:35.155 NULL1 00:07:35.155 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.155 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:35.155 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.155 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:35.155 Delay0 00:07:35.155 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.155 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:35.155 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.155 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:35.155 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.155 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1087453 00:07:35.155 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:35.155 18:18:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:35.415 [2024-10-08 18:18:03.692606] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
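By this point the harness has a complete fixture for the delete_subsystem test: nvmf_tcp_init moved cvl_0_0 into the cvl_0_0_ns_spdk namespace as the target address 10.0.0.2 while cvl_0_1 stayed in the host namespace as the initiator address 10.0.0.1, nvmf_tgt runs inside the namespace on two cores, subsystem nqn.2016-06.io.spdk:cnode1 listens on 10.0.0.2:4420 and exposes a null bdev wrapped in a Delay0 delay bdev, and spdk_nvme_perf is queueing 128-deep random I/O against it. A condensed sketch of that sequence, reconstructed from the commands in the trace (rpc_cmd is the harness's wrapper around SPDK's RPC client, paths are shortened, link-up and ping checks are omitted):

# Target/initiator topology from nvmf_tcp_init: one physical E810 port per role
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side (host ns)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side (test ns)
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
modprobe nvme-tcp

# Target application plus the delete_subsystem fixture
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd bdev_null_create NULL1 1000 512                 # 1000 MB null bdev, 512-byte blocks
rpc_cmd bdev_delay_create -b NULL1 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000         # ~1 s injected latency (values in microseconds)
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

# Slow, deep I/O from the host-side port so plenty of commands stay in flight
./build/bin/spdk_nvme_perf -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!
sleep 2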
00:07:37.317 18:18:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:37.317 18:18:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.317 18:18:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:37.317 Write completed with error (sct=0, sc=8) 00:07:37.317 Write completed with error (sct=0, sc=8) 00:07:37.317 starting I/O failed: -6 00:07:37.317 Write completed with error (sct=0, sc=8) 00:07:37.317 Write completed with error (sct=0, sc=8) 00:07:37.317 Read completed with error (sct=0, sc=8) 00:07:37.317 Read completed with error (sct=0, sc=8) 00:07:37.317 starting I/O failed: -6 00:07:37.317 Write completed with error (sct=0, sc=8) 00:07:37.317 Read completed with error (sct=0, sc=8) 00:07:37.317 Read completed with error (sct=0, sc=8) 00:07:37.317 Read completed with error (sct=0, sc=8) 00:07:37.317 starting I/O failed: -6 00:07:37.317 Read completed with error (sct=0, sc=8) 00:07:37.317 Read completed with error (sct=0, sc=8) 00:07:37.317 Read completed with error (sct=0, sc=8) 00:07:37.317 Write completed with error (sct=0, sc=8) 00:07:37.317 starting I/O failed: -6 00:07:37.317 Write completed with error (sct=0, sc=8) 00:07:37.317 Write completed with error (sct=0, sc=8) 00:07:37.317 Read completed with error (sct=0, sc=8) 00:07:37.317 Read completed with error (sct=0, sc=8) 00:07:37.317 starting I/O failed: -6 00:07:37.317 Write completed with error (sct=0, sc=8) 00:07:37.317 Read completed with error (sct=0, sc=8) 00:07:37.317 Read completed with error (sct=0, sc=8) 00:07:37.317 Write completed with error (sct=0, sc=8) 00:07:37.317 starting I/O failed: -6 00:07:37.317 Read completed with error (sct=0, sc=8) 00:07:37.318 Write completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 starting I/O failed: -6 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 starting I/O failed: -6 00:07:37.318 Write completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Write completed with error (sct=0, sc=8) 00:07:37.318 starting I/O failed: -6 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 starting I/O failed: -6 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Write completed with error (sct=0, sc=8) 00:07:37.318 [2024-10-08 18:18:05.842725] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa31c00cff0 is same with the state(6) to be set 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Write completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Write completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, 
sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Write completed with error (sct=0, sc=8) 00:07:37.318 Write completed with error (sct=0, sc=8) 00:07:37.318 Write completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 starting I/O failed: -6 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Write completed with error (sct=0, sc=8) 00:07:37.318 Write completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Write completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 starting I/O failed: -6 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Write completed with error (sct=0, sc=8) 00:07:37.318 Write completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 starting I/O failed: -6 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Write completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 starting I/O failed: -6 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Write completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Write completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 starting I/O failed: -6 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Write completed with error (sct=0, sc=8) 00:07:37.318 Write completed with error (sct=0, sc=8) 00:07:37.318 Write completed with error (sct=0, sc=8) 00:07:37.318 starting I/O failed: -6 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read 
completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 starting I/O failed: -6 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Write completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Write completed with error (sct=0, sc=8) 00:07:37.318 starting I/O failed: -6 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Write completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Write completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 starting I/O failed: -6 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Write completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Write completed with error (sct=0, sc=8) 00:07:37.318 starting I/O failed: -6 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Write completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Write completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Write completed with error (sct=0, sc=8) 00:07:37.318 Write completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Write completed with error (sct=0, sc=8) 00:07:37.318 Write completed with error (sct=0, sc=8) 00:07:37.318 starting I/O failed: -6 00:07:37.318 Write completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Write completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Write completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 starting I/O failed: -6 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Write completed with error (sct=0, sc=8) 00:07:37.318 Write completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, 
sc=8) 00:07:37.318 Write completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Write completed with error (sct=0, sc=8) 00:07:37.318 Write completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 [2024-10-08 18:18:05.843620] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b6750 is same with the state(6) to be set 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Write completed with error (sct=0, sc=8) 00:07:37.318 Write completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Read completed with error (sct=0, sc=8) 00:07:37.318 Write completed with error (sct=0, sc=8) 00:07:37.318 Write completed with error (sct=0, sc=8) 00:07:38.695 [2024-10-08 18:18:06.793531] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b7a70 is same with the state(6) to be set 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Write completed with error (sct=0, sc=8) 00:07:38.695 Write completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Write completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Write completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Write completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 [2024-10-08 18:18:06.845198] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa31c00d320 is same with the state(6) to be set 00:07:38.695 Write completed with error (sct=0, sc=8) 00:07:38.695 Write completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Write completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Write completed with error (sct=0, sc=8) 00:07:38.695 Write completed with error (sct=0, sc=8) 
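The wall of failed completions above is the expected result rather than a malfunction: at 18:18:05 the test issued nvmf_delete_subsystem while spdk_nvme_perf still had up to 128 commands queued behind Delay0's injected one-second latency, so the target tears down the subsystem and its TCP qpairs and every outstanding read or write is completed with an abort-style status (the repeated sct=0, sc=8 entries) instead of data. In outline, with the command taken from the trace:

# The deletion that triggered the errors above (command as in the trace).
# perf is expected to notice the dropped qpairs, print a partial latency
# summary ending in "errors occurred", and exit on its own; the harness
# then polls for that exit (see the kill -0 / sleep 0.5 loop further down).
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1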
00:07:38.695 Write completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Write completed with error (sct=0, sc=8) 00:07:38.695 [2024-10-08 18:18:06.845869] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b6570 is same with the state(6) to be set 00:07:38.695 Write completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Write completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Write completed with error (sct=0, sc=8) 00:07:38.695 Write completed with error (sct=0, sc=8) 00:07:38.695 Write completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Write completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Write completed with error (sct=0, sc=8) 00:07:38.695 Write completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Write completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Write completed with error (sct=0, sc=8) 00:07:38.695 Write completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 [2024-10-08 18:18:06.846323] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b6930 is same with the state(6) to be set 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Write completed with error (sct=0, sc=8) 00:07:38.695 Write completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Write completed with error (sct=0, sc=8) 00:07:38.695 Write completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Write completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Write completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Write completed with error (sct=0, sc=8) 00:07:38.695 Read 
completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 Read completed with error (sct=0, sc=8) 00:07:38.695 [2024-10-08 18:18:06.847269] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b6390 is same with the state(6) to be set 00:07:38.695 Initializing NVMe Controllers 00:07:38.695 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:38.695 Controller IO queue size 128, less than required. 00:07:38.695 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:38.695 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:38.695 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:38.695 Initialization complete. Launching workers. 00:07:38.695 ======================================================== 00:07:38.695 Latency(us) 00:07:38.695 Device Information : IOPS MiB/s Average min max 00:07:38.695 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 177.49 0.09 980648.24 1208.34 2003235.66 00:07:38.695 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 157.66 0.08 867895.30 566.55 1013010.19 00:07:38.695 ======================================================== 00:07:38.695 Total : 335.16 0.16 927607.66 566.55 2003235.66 00:07:38.695 00:07:38.695 [2024-10-08 18:18:06.848274] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b7a70 (9): Bad file descriptor 00:07:38.695 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:07:38.695 18:18:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.695 18:18:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:07:38.695 18:18:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1087453 00:07:38.695 18:18:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:38.955 18:18:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:38.955 18:18:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1087453 00:07:38.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1087453) - No such process 00:07:38.955 18:18:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1087453 00:07:38.955 18:18:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:07:38.955 18:18:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1087453 00:07:38.955 18:18:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:07:38.955 18:18:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:38.955 18:18:07 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:07:38.955 18:18:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:38.955 18:18:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 1087453 00:07:38.955 18:18:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:07:38.955 18:18:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:38.955 18:18:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:38.955 18:18:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:38.955 18:18:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:38.955 18:18:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.955 18:18:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:38.955 18:18:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.955 18:18:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:38.955 18:18:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.955 18:18:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:38.955 [2024-10-08 18:18:07.380095] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:38.955 18:18:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.955 18:18:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:38.955 18:18:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.955 18:18:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:38.955 18:18:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.955 18:18:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1087863 00:07:38.955 18:18:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:38.955 18:18:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:38.955 18:18:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1087863 00:07:38.955 18:18:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:38.955 [2024-10-08 18:18:07.464139] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, 
even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:07:39.524 18:18:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:39.524 18:18:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1087863 00:07:39.524 18:18:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:40.092 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:40.092 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1087863 00:07:40.092 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:40.657 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:40.657 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1087863 00:07:40.657 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:40.948 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:40.948 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1087863 00:07:40.948 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:41.514 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:41.514 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1087863 00:07:41.514 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:42.080 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:42.080 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1087863 00:07:42.080 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:42.080 Initializing NVMe Controllers 00:07:42.080 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:42.080 Controller IO queue size 128, less than required. 00:07:42.080 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:42.080 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:42.080 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:42.080 Initialization complete. Launching workers. 
00:07:42.080 ======================================================== 00:07:42.080 Latency(us) 00:07:42.080 Device Information : IOPS MiB/s Average min max 00:07:42.080 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004054.74 1000220.65 1042663.39 00:07:42.080 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004247.98 1000162.17 1013593.57 00:07:42.080 ======================================================== 00:07:42.080 Total : 256.00 0.12 1004151.36 1000162.17 1042663.39 00:07:42.080 00:07:42.649 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:42.649 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1087863 00:07:42.649 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1087863) - No such process 00:07:42.649 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1087863 00:07:42.649 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:42.649 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:42.649 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:42.649 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:07:42.649 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:42.649 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:07:42.649 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:42.649 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:42.649 rmmod nvme_tcp 00:07:42.649 rmmod nvme_fabrics 00:07:42.649 rmmod nvme_keyring 00:07:42.649 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:42.649 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:07:42.649 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:07:42.650 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 1087282 ']' 00:07:42.650 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 1087282 00:07:42.650 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 1087282 ']' 00:07:42.650 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 1087282 00:07:42.650 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:07:42.650 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:42.650 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1087282 00:07:42.650 18:18:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:42.650 18:18:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:07:42.650 18:18:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1087282' 00:07:42.650 killing process with pid 1087282 00:07:42.650 18:18:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 1087282 00:07:42.650 18:18:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 1087282 00:07:43.220 18:18:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:43.220 18:18:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:07:43.220 18:18:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:07:43.220 18:18:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:07:43.220 18:18:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-save 00:07:43.220 18:18:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:07:43.220 18:18:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-restore 00:07:43.220 18:18:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:43.220 18:18:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:43.220 18:18:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:43.220 18:18:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:43.220 18:18:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.129 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:45.129 00:07:45.129 real 0m14.659s 00:07:45.129 user 0m30.525s 00:07:45.129 sys 0m4.062s 00:07:45.129 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:45.129 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:45.129 ************************************ 00:07:45.129 END TEST nvmf_delete_subsystem 00:07:45.129 ************************************ 00:07:45.129 18:18:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:45.129 18:18:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:45.129 18:18:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:45.129 18:18:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:45.129 ************************************ 00:07:45.129 START TEST nvmf_host_management 00:07:45.129 ************************************ 00:07:45.129 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:45.129 * Looking for test storage... 
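The tail of the delete_subsystem run above follows a fixed pattern: poll until spdk_nvme_perf exits on its own, then let the nvmftestfini trap unwind everything nvmf_tcp_init set up, before autotest moves on to the nvmf_host_management test that is starting above. A rough reconstruction from the commands visible in the trace (helper internals such as remove_spdk_ns and the exact failure handling are summarized, not reproduced exactly):

# Bounded wait for perf to exit once the subsystem is gone
delay=0
while kill -0 "$perf_pid" 2>/dev/null; do
    sleep 0.5
    (( delay++ > 20 )) && exit 1        # a hung perf fails the test (the first run used a limit of 30)
done

# nvmftestfini / nvmfcleanup, as seen in the trace: unload the initiator-side
# kernel modules, stop the target, drop the SPDK iptables rule, remove the
# target namespace and flush the initiator address
modprobe -r nvme-tcp                    # rmmod nvme_tcp, nvme_fabrics, nvme_keyring
kill "$nvmfpid"                         # nvmf_tgt (reactor_0), pid 1087282 in this run
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip netns delete cvl_0_0_ns_spdk         # what the harness's remove_spdk_ns amounts to here
ip -4 addr flush cvl_0_1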
00:07:45.389 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:45.389 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:45.389 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:07:45.389 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:45.389 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:45.389 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:45.389 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:45.389 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:45.389 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:45.389 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:45.389 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:45.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.390 --rc genhtml_branch_coverage=1 00:07:45.390 --rc genhtml_function_coverage=1 00:07:45.390 --rc genhtml_legend=1 00:07:45.390 --rc geninfo_all_blocks=1 00:07:45.390 --rc geninfo_unexecuted_blocks=1 00:07:45.390 00:07:45.390 ' 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:45.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.390 --rc genhtml_branch_coverage=1 00:07:45.390 --rc genhtml_function_coverage=1 00:07:45.390 --rc genhtml_legend=1 00:07:45.390 --rc geninfo_all_blocks=1 00:07:45.390 --rc geninfo_unexecuted_blocks=1 00:07:45.390 00:07:45.390 ' 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:45.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.390 --rc genhtml_branch_coverage=1 00:07:45.390 --rc genhtml_function_coverage=1 00:07:45.390 --rc genhtml_legend=1 00:07:45.390 --rc geninfo_all_blocks=1 00:07:45.390 --rc geninfo_unexecuted_blocks=1 00:07:45.390 00:07:45.390 ' 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:45.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.390 --rc genhtml_branch_coverage=1 00:07:45.390 --rc genhtml_function_coverage=1 00:07:45.390 --rc genhtml_legend=1 00:07:45.390 --rc geninfo_all_blocks=1 00:07:45.390 --rc geninfo_unexecuted_blocks=1 00:07:45.390 00:07:45.390 ' 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:07:45.390 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:07:45.390 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:47.931 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:47.931 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:47.931 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:47.931 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:47.931 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:47.931 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:47.931 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:47.931 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:47.931 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:47.931 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:47.931 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:07:47.931 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:47.931 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:07:47.931 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:47.931 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:47.931 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:47.931 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:47.931 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:47.931 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:47.931 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:47.931 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:47.931 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:47.931 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:47.931 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:47.931 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:47.931 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:47.931 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:47.931 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:47.931 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:47.931 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:47.931 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:47.931 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:47.931 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:47.931 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:47.931 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:07:47.931 Found 0000:84:00.0 (0x8086 - 0x159b) 00:07:47.931 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:47.931 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:47.931 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:47.931 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:47.931 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:47.931 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:47.931 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:07:47.931 Found 0000:84:00.1 (0x8086 - 0x159b) 00:07:47.931 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:47.931 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:47.931 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:47.931 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:47.931 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:47.931 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:47.931 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:47.931 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:47.931 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:47.931 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:47.931 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:47.931 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:47.931 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:47.931 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:47.931 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:47.931 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:07:47.931 Found net devices under 0000:84:00.0: cvl_0_0 00:07:47.931 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:47.931 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:47.931 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:47.931 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:47.931 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:47.932 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:47.932 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:47.932 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:47.932 18:18:16 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:07:47.932 Found net devices under 0000:84:00.1: cvl_0_1 00:07:47.932 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:47.932 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:47.932 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes 00:07:47.932 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:47.932 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:07:47.932 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:07:47.932 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:47.932 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:47.932 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:47.932 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:47.932 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:47.932 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:47.932 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:47.932 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:47.932 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:47.932 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:47.932 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:47.932 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:47.932 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:47.932 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:47.932 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:47.932 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:47.932 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:47.932 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:47.932 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:48.192 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:48.192 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:48.192 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:48.192 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:48.192 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:48.192 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:07:48.192 00:07:48.192 --- 10.0.0.2 ping statistics --- 00:07:48.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.192 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:07:48.192 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:48.192 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:48.192 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:07:48.192 00:07:48.192 --- 10.0.0.1 ping statistics --- 00:07:48.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.192 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:07:48.192 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:48.192 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # return 0 00:07:48.192 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:48.192 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:48.192 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:48.192 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:48.192 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:48.192 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:48.192 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:48.192 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:48.192 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:48.192 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:48.192 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:48.192 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:48.192 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:48.192 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=1090858 00:07:48.192 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:48.192 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 1090858 00:07:48.192 18:18:16 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1090858 ']' 00:07:48.192 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.192 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:48.192 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.192 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:48.192 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:48.192 [2024-10-08 18:18:16.665428] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:07:48.192 [2024-10-08 18:18:16.665622] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:48.453 [2024-10-08 18:18:16.826524] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:48.713 [2024-10-08 18:18:17.052752] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:48.713 [2024-10-08 18:18:17.052846] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:48.713 [2024-10-08 18:18:17.052882] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:48.713 [2024-10-08 18:18:17.052911] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:48.713 [2024-10-08 18:18:17.052938] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
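
The block above is the nvmftestinit path from nvmf/common.sh: the two E810 ports (cvl_0_0, cvl_0_1) are discovered, cvl_0_0 is moved into a private network namespace as the target side, 10.0.0.2/24 (target) and 10.0.0.1/24 (initiator) are assigned, TCP port 4420 is opened in iptables, reachability is confirmed with ping in both directions, nvme-tcp is loaded, and nvmfappstart launches nvmf_tgt inside the namespace while waitforlisten waits for /var/tmp/spdk.sock. A minimal shell sketch of that bring-up pattern follows; the NS variable and the socket-poll loop are illustrative stand-ins, not the harness's actual helpers.

    # Sketch only: mirrors the commands traced above.
    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                      # target-side port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1               # target -> initiator
    modprobe nvme-tcp
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done  # crude stand-in for waitforlisten
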
00:07:48.713 [2024-10-08 18:18:17.056207] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:07:48.713 [2024-10-08 18:18:17.056306] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:07:48.713 [2024-10-08 18:18:17.056362] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:07:48.713 [2024-10-08 18:18:17.056366] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:48.713 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:48.713 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:48.713 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:48.713 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:48.713 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:48.713 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:48.713 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:48.713 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.713 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:48.713 [2024-10-08 18:18:17.227971] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:48.713 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.713 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:48.713 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:48.713 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:48.713 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:48.713 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:48.713 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:48.713 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.713 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:48.981 Malloc0 00:07:48.981 [2024-10-08 18:18:17.294462] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:48.981 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.981 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:48.981 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:48.981 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:48.981 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=1091031 00:07:48.981 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1091031 /var/tmp/bdevperf.sock 00:07:48.981 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1091031 ']' 00:07:48.981 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:48.981 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:48.981 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:48.981 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:48.981 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:07:48.981 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:48.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:48.981 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:07:48.981 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:48.981 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:07:48.981 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:48.981 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:07:48.981 { 00:07:48.981 "params": { 00:07:48.981 "name": "Nvme$subsystem", 00:07:48.981 "trtype": "$TEST_TRANSPORT", 00:07:48.981 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:48.981 "adrfam": "ipv4", 00:07:48.981 "trsvcid": "$NVMF_PORT", 00:07:48.981 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:48.981 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:48.981 "hdgst": ${hdgst:-false}, 00:07:48.981 "ddgst": ${ddgst:-false} 00:07:48.981 }, 00:07:48.981 "method": "bdev_nvme_attach_controller" 00:07:48.981 } 00:07:48.981 EOF 00:07:48.981 )") 00:07:48.981 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:07:48.981 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:07:48.981 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:07:48.981 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:07:48.981 "params": { 00:07:48.981 "name": "Nvme0", 00:07:48.981 "trtype": "tcp", 00:07:48.981 "traddr": "10.0.0.2", 00:07:48.981 "adrfam": "ipv4", 00:07:48.981 "trsvcid": "4420", 00:07:48.981 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:48.981 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:48.981 "hdgst": false, 00:07:48.981 "ddgst": false 00:07:48.981 }, 00:07:48.981 "method": "bdev_nvme_attach_controller" 00:07:48.981 }' 00:07:48.981 [2024-10-08 18:18:17.386441] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
00:07:48.981 [2024-10-08 18:18:17.386531] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1091031 ] 00:07:48.981 [2024-10-08 18:18:17.454962] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.302 [2024-10-08 18:18:17.575991] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.586 Running I/O for 10 seconds... 00:07:49.586 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:49.586 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:49.586 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:49.586 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.586 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:49.586 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.586 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:49.586 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:49.586 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:49.586 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:49.586 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:49.586 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:49.586 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:49.586 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:49.586 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:49.586 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.586 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:49.586 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:49.586 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.586 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:07:49.586 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:07:49.586 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:07:49.845 18:18:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:07:49.845 
18:18:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:49.845 18:18:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:49.845 18:18:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.845 18:18:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:49.845 18:18:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:49.845 18:18:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.845 18:18:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:07:49.845 18:18:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:07:49.845 18:18:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:49.845 18:18:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:49.845 18:18:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:49.845 18:18:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:49.845 18:18:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.845 18:18:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:49.845 [2024-10-08 18:18:18.314170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:49.845 [2024-10-08 18:18:18.314246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.845 18:18:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.845 [2024-10-08 18:18:18.314275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:07:49.845 [2024-10-08 18:18:18.314302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.845 [2024-10-08 18:18:18.314317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:07:49.845 [2024-10-08 18:18:18.314331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.845 [2024-10-08 18:18:18.314345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:49.845 [2024-10-08 18:18:18.314367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.845 [2024-10-08 18:18:18.314380] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x803100 is same with the state(6) to be set 00:07:49.845 18:18:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:49.845 18:18:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.845 18:18:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:49.845 18:18:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.845 18:18:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:49.845 [2024-10-08 18:18:18.322358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.845 [2024-10-08 18:18:18.322387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.845 [2024-10-08 18:18:18.322413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:90240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.845 [2024-10-08 18:18:18.322429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.845 [2024-10-08 18:18:18.322445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:90368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.845 [2024-10-08 18:18:18.322460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.845 [2024-10-08 18:18:18.322476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:90496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.845 [2024-10-08 18:18:18.322490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.845 [2024-10-08 18:18:18.322506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:90624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.845 [2024-10-08 18:18:18.322521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.845 [2024-10-08 18:18:18.322536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:90752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.845 [2024-10-08 18:18:18.322551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.845 [2024-10-08 18:18:18.322567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:90880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.845 [2024-10-08 18:18:18.322581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.845 [2024-10-08 18:18:18.322602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:91008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.845 [2024-10-08 18:18:18.322619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.845 [2024-10-08 18:18:18.322634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:91136 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.845 [2024-10-08 18:18:18.322648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.845 [2024-10-08 18:18:18.322675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:91264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.845 [2024-10-08 18:18:18.322689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.845 [2024-10-08 18:18:18.322704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:91392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.845 [2024-10-08 18:18:18.322718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.845 [2024-10-08 18:18:18.322734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:91520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.845 [2024-10-08 18:18:18.322747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.845 [2024-10-08 18:18:18.322763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:91648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.845 [2024-10-08 18:18:18.322777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.845 [2024-10-08 18:18:18.322792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:91776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.845 [2024-10-08 18:18:18.322806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.845 [2024-10-08 18:18:18.322821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:91904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.845 [2024-10-08 18:18:18.322835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.845 [2024-10-08 18:18:18.322851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:92032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.845 [2024-10-08 18:18:18.322866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.845 [2024-10-08 18:18:18.322882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:92160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.845 [2024-10-08 18:18:18.322896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.845 [2024-10-08 18:18:18.322911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:92288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.845 [2024-10-08 18:18:18.322925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.845 [2024-10-08 18:18:18.322956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:92416 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:07:49.845 [2024-10-08 18:18:18.322969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.845 [2024-10-08 18:18:18.322984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:92544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.845 [2024-10-08 18:18:18.323006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.845 [2024-10-08 18:18:18.323022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:92672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.845 [2024-10-08 18:18:18.323048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.846 [2024-10-08 18:18:18.323064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:92800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.846 [2024-10-08 18:18:18.323078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.846 [2024-10-08 18:18:18.323093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:92928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.846 [2024-10-08 18:18:18.323107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.846 [2024-10-08 18:18:18.323123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:93056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.846 [2024-10-08 18:18:18.323136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.846 [2024-10-08 18:18:18.323151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:93184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.846 [2024-10-08 18:18:18.323164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.846 [2024-10-08 18:18:18.323179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:93312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.846 [2024-10-08 18:18:18.323193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.846 [2024-10-08 18:18:18.323208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:93440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.846 [2024-10-08 18:18:18.323222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.846 [2024-10-08 18:18:18.323236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:93568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.846 [2024-10-08 18:18:18.323251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.846 [2024-10-08 18:18:18.323266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:93696 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:07:49.846 [2024-10-08 18:18:18.323279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.846 [2024-10-08 18:18:18.323293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:93824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.846 [2024-10-08 18:18:18.323306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.846 [2024-10-08 18:18:18.323321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:93952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.846 [2024-10-08 18:18:18.323334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.846 [2024-10-08 18:18:18.323349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:94080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.846 [2024-10-08 18:18:18.323362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.846 [2024-10-08 18:18:18.323380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:94208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.846 [2024-10-08 18:18:18.323405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.846 [2024-10-08 18:18:18.323419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:94336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.846 [2024-10-08 18:18:18.323432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.846 [2024-10-08 18:18:18.323448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:94464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.846 [2024-10-08 18:18:18.323461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.846 [2024-10-08 18:18:18.323476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:94592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.846 [2024-10-08 18:18:18.323489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.846 [2024-10-08 18:18:18.323504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:94720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.846 [2024-10-08 18:18:18.323518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.846 [2024-10-08 18:18:18.323532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:94848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.846 [2024-10-08 18:18:18.323547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.846 [2024-10-08 18:18:18.323562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:94976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:49.846 [2024-10-08 18:18:18.323576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.846 [2024-10-08 18:18:18.323598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:95104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.846 [2024-10-08 18:18:18.323612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.846 [2024-10-08 18:18:18.323627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:95232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.846 [2024-10-08 18:18:18.323664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.846 [2024-10-08 18:18:18.323680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:95360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.846 [2024-10-08 18:18:18.323705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.846 [2024-10-08 18:18:18.323720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:95488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.846 [2024-10-08 18:18:18.323734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.846 [2024-10-08 18:18:18.323750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:95616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.846 [2024-10-08 18:18:18.323764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.846 [2024-10-08 18:18:18.323781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:95744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.846 [2024-10-08 18:18:18.323799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.846 [2024-10-08 18:18:18.323815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:95872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.846 [2024-10-08 18:18:18.323829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.846 [2024-10-08 18:18:18.323844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:96000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.846 [2024-10-08 18:18:18.323858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.846 [2024-10-08 18:18:18.323874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:96128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.846 [2024-10-08 18:18:18.323888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.846 [2024-10-08 18:18:18.323916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:96256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.846 
[2024-10-08 18:18:18.323930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.846 [2024-10-08 18:18:18.323946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.846 [2024-10-08 18:18:18.323959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.846 [2024-10-08 18:18:18.323991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:96512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.846 [2024-10-08 18:18:18.324004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.846 [2024-10-08 18:18:18.324019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:96640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.846 [2024-10-08 18:18:18.324032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.846 [2024-10-08 18:18:18.324046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:96768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.846 [2024-10-08 18:18:18.324060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.846 [2024-10-08 18:18:18.324075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.846 [2024-10-08 18:18:18.324088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.846 [2024-10-08 18:18:18.324102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.846 [2024-10-08 18:18:18.324116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.846 [2024-10-08 18:18:18.324135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:97152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.846 [2024-10-08 18:18:18.324149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.846 [2024-10-08 18:18:18.324164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:97280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.846 [2024-10-08 18:18:18.324178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.846 [2024-10-08 18:18:18.324196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:97408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.846 [2024-10-08 18:18:18.324210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.846 [2024-10-08 18:18:18.324224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:97536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.846 [2024-10-08 
18:18:18.324244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.846 [2024-10-08 18:18:18.324258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:97664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.846 [2024-10-08 18:18:18.324272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.846 [2024-10-08 18:18:18.324286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:97792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.846 [2024-10-08 18:18:18.324299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.846 [2024-10-08 18:18:18.324314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:97920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.846 [2024-10-08 18:18:18.324327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.847 [2024-10-08 18:18:18.324342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.847 [2024-10-08 18:18:18.324355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.847 [2024-10-08 18:18:18.324370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:98176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.847 [2024-10-08 18:18:18.324384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.847 [2024-10-08 18:18:18.324478] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa1c1f0 was disconnected and freed. reset controller. 00:07:49.847 [2024-10-08 18:18:18.324525] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x803100 (9): Bad file descriptor 00:07:49.847 [2024-10-08 18:18:18.325659] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:07:49.847 task offset: 90112 on job bdev=Nvme0n1 fails 00:07:49.847 00:07:49.847 Latency(us) 00:07:49.847 [2024-10-08T16:18:18.384Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:49.847 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:49.847 Job: Nvme0n1 ended in about 0.47 seconds with error 00:07:49.847 Verification LBA range: start 0x0 length 0x400 00:07:49.847 Nvme0n1 : 0.47 1501.86 93.87 136.53 0.00 38011.80 2536.49 36117.62 00:07:49.847 [2024-10-08T16:18:18.384Z] =================================================================================================================== 00:07:49.847 [2024-10-08T16:18:18.384Z] Total : 1501.86 93.87 136.53 0.00 38011.80 2536.49 36117.62 00:07:49.847 [2024-10-08 18:18:18.328460] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:50.105 [2024-10-08 18:18:18.390042] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
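The summary table just above reports 1501.86 IOPS and 93.87 MiB/s for the verify job that failed after about 0.47 seconds when the target process was killed. The two columns are consistent with the 65536-byte I/O size the job was started with; a quick standalone sanity check (a one-off calculation, not part of the harness):

# MiB/s = IOPS * io_size / 2^20, with io_size = 65536 bytes
awk 'BEGIN { printf "%.2f MiB/s\n", 1501.86 * 65536 / 1048576 }'    # prints 93.87 MiB/s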
00:07:51.038 18:18:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1091031 00:07:51.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1091031) - No such process 00:07:51.038 18:18:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:51.038 18:18:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:51.038 18:18:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:51.038 18:18:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:51.038 18:18:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:07:51.038 18:18:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:07:51.038 18:18:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:07:51.038 18:18:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:07:51.038 { 00:07:51.038 "params": { 00:07:51.038 "name": "Nvme$subsystem", 00:07:51.038 "trtype": "$TEST_TRANSPORT", 00:07:51.038 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:51.038 "adrfam": "ipv4", 00:07:51.038 "trsvcid": "$NVMF_PORT", 00:07:51.038 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:51.038 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:51.038 "hdgst": ${hdgst:-false}, 00:07:51.038 "ddgst": ${ddgst:-false} 00:07:51.038 }, 00:07:51.038 "method": "bdev_nvme_attach_controller" 00:07:51.038 } 00:07:51.038 EOF 00:07:51.038 )") 00:07:51.038 18:18:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:07:51.038 18:18:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:07:51.038 18:18:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:07:51.038 18:18:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:07:51.038 "params": { 00:07:51.038 "name": "Nvme0", 00:07:51.038 "trtype": "tcp", 00:07:51.038 "traddr": "10.0.0.2", 00:07:51.038 "adrfam": "ipv4", 00:07:51.038 "trsvcid": "4420", 00:07:51.038 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:51.038 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:51.038 "hdgst": false, 00:07:51.038 "ddgst": false 00:07:51.038 }, 00:07:51.038 "method": "bdev_nvme_attach_controller" 00:07:51.038 }' 00:07:51.038 [2024-10-08 18:18:19.383542] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:07:51.038 [2024-10-08 18:18:19.383728] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1091197 ] 00:07:51.038 [2024-10-08 18:18:19.463280] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.296 [2024-10-08 18:18:19.577815] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.554 Running I/O for 1 seconds... 
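The JSON fragment printed above is the bdev_nvme_attach_controller object that gen_nvmf_target_json feeds to bdevperf through /dev/fd/62. A minimal standalone equivalent, assuming the usual SPDK JSON-config layout (a "subsystems" array wrapping a bdev "config" list; the exact wrapper the helper emits is not shown in the trace), could write the config to a file instead of a file descriptor:

cat > /tmp/bdevperf_nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF
./build/examples/bdevperf --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 1

The /tmp path and the relative bdevperf path are illustrative only; the run above uses the absolute build path and never writes the config to disk.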
00:07:52.487 1536.00 IOPS, 96.00 MiB/s 00:07:52.487 Latency(us) 00:07:52.487 [2024-10-08T16:18:21.024Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:52.488 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:52.488 Verification LBA range: start 0x0 length 0x400 00:07:52.488 Nvme0n1 : 1.03 1549.19 96.82 0.00 0.00 40515.87 6602.15 42137.22 00:07:52.488 [2024-10-08T16:18:21.025Z] =================================================================================================================== 00:07:52.488 [2024-10-08T16:18:21.025Z] Total : 1549.19 96.82 0.00 0.00 40515.87 6602.15 42137.22 00:07:52.745 18:18:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:52.745 18:18:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:52.745 18:18:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:52.745 18:18:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:52.745 18:18:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:52.745 18:18:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:52.745 18:18:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:52.745 18:18:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:52.745 18:18:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:52.745 18:18:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:52.745 18:18:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:52.745 rmmod nvme_tcp 00:07:52.745 rmmod nvme_fabrics 00:07:52.745 rmmod nvme_keyring 00:07:53.004 18:18:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:53.004 18:18:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:53.004 18:18:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:53.004 18:18:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 1090858 ']' 00:07:53.004 18:18:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 1090858 00:07:53.004 18:18:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 1090858 ']' 00:07:53.004 18:18:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 1090858 00:07:53.004 18:18:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:07:53.004 18:18:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:53.004 18:18:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1090858 00:07:53.004 18:18:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:53.004 18:18:21 
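The killprocess helper traced above (and finishing with the kill/wait just below) boils down to: make sure a live pid was passed, look up its command name, skip the sudo special case, then kill it and reap it. A condensed reconstruction from this trace, not the verbatim common/autotest_common.sh:

killprocess() {
    local pid=$1 process_name
    [ -z "$pid" ] && return 1                        # a pid is required
    kill -0 "$pid" || return 1                       # bail out if the process is already gone
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    # the real helper special-cases process_name = sudo; that branch is not taken in this run
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                      # reaps it when it is a child of this shell
}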
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:53.004 18:18:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1090858' 00:07:53.004 killing process with pid 1090858 00:07:53.004 18:18:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 1090858 00:07:53.004 18:18:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 1090858 00:07:53.265 [2024-10-08 18:18:21.703098] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:53.265 18:18:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:53.265 18:18:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:07:53.265 18:18:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:07:53.265 18:18:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:53.265 18:18:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:07:53.265 18:18:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save 00:07:53.265 18:18:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore 00:07:53.265 18:18:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:53.265 18:18:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:53.265 18:18:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:53.265 18:18:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:53.265 18:18:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:55.807 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:55.807 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:55.807 00:07:55.807 real 0m10.226s 00:07:55.807 user 0m22.386s 00:07:55.807 sys 0m3.563s 00:07:55.807 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:55.808 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:55.808 ************************************ 00:07:55.808 END TEST nvmf_host_management 00:07:55.808 ************************************ 00:07:55.808 18:18:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:55.808 18:18:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:55.808 18:18:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:55.808 18:18:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:55.808 ************************************ 00:07:55.808 START TEST nvmf_lvol 00:07:55.808 ************************************ 00:07:55.808 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:55.808 * Looking for test storage... 00:07:55.808 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:55.808 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:55.808 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:07:55.808 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:55.808 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:55.808 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:55.808 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:55.808 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:55.808 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:55.808 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:55.808 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:55.808 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:55.808 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:55.808 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:55.808 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:55.808 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:55.808 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:55.808 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:55.808 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:55.808 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:55.808 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:55.808 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:55.808 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:55.808 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:55.808 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:55.808 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:55.808 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:55.808 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:55.808 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:55.808 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:55.808 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:55.808 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:55.808 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:55.808 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:55.808 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:55.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.808 --rc genhtml_branch_coverage=1 00:07:55.808 --rc genhtml_function_coverage=1 00:07:55.808 --rc genhtml_legend=1 00:07:55.808 --rc geninfo_all_blocks=1 00:07:55.808 --rc geninfo_unexecuted_blocks=1 00:07:55.808 00:07:55.808 ' 00:07:55.808 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:55.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.808 --rc genhtml_branch_coverage=1 00:07:55.808 --rc genhtml_function_coverage=1 00:07:55.808 --rc genhtml_legend=1 00:07:55.808 --rc geninfo_all_blocks=1 00:07:55.808 --rc geninfo_unexecuted_blocks=1 00:07:55.808 00:07:55.808 ' 00:07:55.808 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:55.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.808 --rc genhtml_branch_coverage=1 00:07:55.808 --rc genhtml_function_coverage=1 00:07:55.808 --rc genhtml_legend=1 00:07:55.808 --rc geninfo_all_blocks=1 00:07:55.808 --rc geninfo_unexecuted_blocks=1 00:07:55.808 00:07:55.808 ' 00:07:55.808 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:55.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.808 --rc genhtml_branch_coverage=1 00:07:55.808 --rc genhtml_function_coverage=1 00:07:55.808 --rc genhtml_legend=1 00:07:55.808 --rc geninfo_all_blocks=1 00:07:55.808 --rc geninfo_unexecuted_blocks=1 00:07:55.808 00:07:55.808 ' 00:07:55.808 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:55.808 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:55.808 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
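The lcov check traced above (lt 1.15 2 expanding into cmp_versions) splits both version strings on '.', '-' and ':' and compares them field by field. A condensed sketch of that logic, assuming purely numeric fields (the real scripts/common.sh sanitizes each field through its decimal helper first):

# return 0 (true) when version $1 sorts before version $2
version_lt() {
    local -a ver1 ver2
    local v len
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1   # versions are equal
}

version_lt 1.15 2 && echo "lcov 1.15 is older than 2"   # matches the return 0 traced above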
00:07:55.808 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:55.808 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:55.808 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:55.808 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:55.808 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:55.808 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:55.808 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:55.808 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:55.808 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:55.808 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:55.808 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:07:55.808 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:55.808 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:55.808 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:55.808 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:55.808 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:55.808 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:55.808 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:55.808 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:55.808 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:55.808 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.808 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.808 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.808 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:55.808 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.808 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:55.808 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:55.808 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:55.808 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:55.808 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:55.808 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:55.808 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:55.808 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:55.808 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:55.808 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:55.808 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:55.809 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:55.809 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:55.809 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:07:55.809 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:55.809 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:55.809 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:55.809 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:55.809 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:55.809 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:55.809 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:55.809 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:55.809 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:55.809 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:55.809 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:55.809 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:55.809 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:07:55.809 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:55.809 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:59.102 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:59.102 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:59.102 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:59.102 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:59.102 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:59.102 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:59.102 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:59.102 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:59.102 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:59.102 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:59.102 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:59.102 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:59.102 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:59.102 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:07:59.102 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:59.102 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:59.102 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:59.102 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:59.102 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:59.102 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:59.102 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:59.102 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:59.102 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:59.102 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:59.102 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:59.102 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:59.102 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:59.102 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:59.102 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:59.102 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:59.102 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:59.102 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:59.102 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:59.102 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:59.102 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:07:59.102 Found 0000:84:00.0 (0x8086 - 0x159b) 00:07:59.102 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:59.102 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:59.102 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:59.102 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:59.102 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:59.102 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:59.102 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:07:59.102 Found 0000:84:00.1 (0x8086 - 0x159b) 00:07:59.102 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:59.102 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:59.102 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:59.102 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:59.102 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:59.102 18:18:27 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:59.102 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:59.102 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:59.102 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:59.102 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:59.102 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:59.102 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:59.102 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:59.102 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:59.102 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:59.102 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:07:59.102 Found net devices under 0000:84:00.0: cvl_0_0 00:07:59.102 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:59.102 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:59.102 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:59.103 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:59.103 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:59.103 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:59.103 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:59.103 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:59.103 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:07:59.103 Found net devices under 0000:84:00.1: cvl_0_1 00:07:59.103 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:59.103 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:59.103 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes 00:07:59.103 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:59.103 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:07:59.103 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:07:59.103 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:59.103 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:59.103 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:59.103 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:59.103 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:07:59.103 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:59.103 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:59.103 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:59.103 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:59.103 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:59.103 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:59.103 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:59.103 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:59.103 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:59.103 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:59.103 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:59.103 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:59.103 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:59.103 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:59.103 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:59.103 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:59.103 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:59.103 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:59.103 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:59.103 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.356 ms 00:07:59.103 00:07:59.103 --- 10.0.0.2 ping statistics --- 00:07:59.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:59.103 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:07:59.103 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:59.103 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:59.103 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:07:59.103 00:07:59.103 --- 10.0.0.1 ping statistics --- 00:07:59.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:59.103 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:07:59.103 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:59.103 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # return 0 00:07:59.103 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:59.103 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:59.103 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:59.103 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:59.103 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:59.103 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:59.103 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:59.103 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:59.103 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:59.103 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:59.103 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:59.103 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=1093552 00:07:59.103 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:59.103 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 1093552 00:07:59.103 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 1093552 ']' 00:07:59.103 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.103 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:59.103 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:59.103 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:59.103 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:59.103 [2024-10-08 18:18:27.401465] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
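The nvmf_tcp_init sequence traced above moves the target-side port (cvl_0_0) into its own network namespace and leaves the initiator port (cvl_0_1) in the root namespace, so initiator and target talk over the real link. Condensed to the essential commands from the trace (the harness also flushes old addresses first and tags the iptables rule with an SPDK_NVMF comment so it can be removed later):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target NIC now lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                  # root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target namespace -> root namespace

This is also why the nvmf_tgt above is launched under ip netns exec cvl_0_0_ns_spdk: the target has to bind 10.0.0.2 inside that namespace.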
00:07:59.103 [2024-10-08 18:18:27.401577] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:59.103 [2024-10-08 18:18:27.528023] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:59.362 [2024-10-08 18:18:27.746809] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:59.362 [2024-10-08 18:18:27.746863] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:59.362 [2024-10-08 18:18:27.746880] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:59.362 [2024-10-08 18:18:27.746895] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:59.362 [2024-10-08 18:18:27.746934] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:59.362 [2024-10-08 18:18:27.748698] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:59.362 [2024-10-08 18:18:27.748758] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:07:59.362 [2024-10-08 18:18:27.748763] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.621 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:59.621 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:07:59.621 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:59.621 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:59.621 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:59.621 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:59.621 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:00.187 [2024-10-08 18:18:28.618965] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:00.187 18:18:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:00.755 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:00.755 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:01.323 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:01.323 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:01.891 18:18:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:02.150 18:18:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=1c0bdb54-2c77-4d45-92d3-6930e7f3e192 00:08:02.150 18:18:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1c0bdb54-2c77-4d45-92d3-6930e7f3e192 lvol 20 00:08:02.408 18:18:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=842eeffc-c4c1-452d-9d1c-4cddac596117 00:08:02.408 18:18:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:02.974 18:18:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 842eeffc-c4c1-452d-9d1c-4cddac596117 00:08:03.540 18:18:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:04.106 [2024-10-08 18:18:32.472777] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:04.106 18:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:04.364 18:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1094243 00:08:04.364 18:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:04.364 18:18:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:05.738 18:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 842eeffc-c4c1-452d-9d1c-4cddac596117 MY_SNAPSHOT 00:08:05.997 18:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=9d002bc4-77b0-422e-be89-1cb1842c2f53 00:08:05.997 18:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 842eeffc-c4c1-452d-9d1c-4cddac596117 30 00:08:06.255 18:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 9d002bc4-77b0-422e-be89-1cb1842c2f53 MY_CLONE 00:08:06.821 18:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=d60e4577-2b87-406d-8448-bd34b036704c 00:08:06.821 18:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate d60e4577-2b87-406d-8448-bd34b036704c 00:08:07.755 18:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1094243 00:08:15.866 Initializing NVMe Controllers 00:08:15.866 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:15.866 Controller IO queue size 128, less than required. 00:08:15.866 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:15.866 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:15.866 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:15.866 Initialization complete. Launching workers. 00:08:15.866 ======================================================== 00:08:15.866 Latency(us) 00:08:15.866 Device Information : IOPS MiB/s Average min max 00:08:15.866 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10376.20 40.53 12342.58 1355.89 83975.48 00:08:15.866 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10286.00 40.18 12443.59 2313.21 76459.34 00:08:15.866 ======================================================== 00:08:15.866 Total : 20662.20 80.71 12392.87 1355.89 83975.48 00:08:15.866 00:08:15.866 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:15.866 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 842eeffc-c4c1-452d-9d1c-4cddac596117 00:08:15.866 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1c0bdb54-2c77-4d45-92d3-6930e7f3e192 00:08:16.432 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:16.432 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:16.432 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:16.432 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:16.432 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:16.432 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:16.432 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:16.432 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:16.432 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:16.432 rmmod nvme_tcp 00:08:16.432 rmmod nvme_fabrics 00:08:16.432 rmmod nvme_keyring 00:08:16.432 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:16.432 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:16.432 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:16.432 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 1093552 ']' 00:08:16.432 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 1093552 00:08:16.432 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 1093552 ']' 00:08:16.432 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 1093552 00:08:16.432 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:08:16.432 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:16.432 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1093552 00:08:16.432 18:18:44 
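Stripped of the harness plumbing, the nvmf_lvol.sh flow traced above is a chain of rpc.py calls: build a raid0 out of two malloc bdevs, carve a logical volume store and a volume out of it, export the volume over NVMe/TCP, run I/O, then exercise snapshot, resize, clone and inflate before tearing everything down. A condensed sketch (rpc.py stands for the full scripts/rpc.py path used above; each create call prints the UUID that the next step consumes, as the captured lvs/lvol/snapshot/clone values in the trace show):

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512                       # Malloc0
rpc.py bdev_malloc_create 64 512                       # Malloc1
rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$(rpc.py bdev_lvol_create_lvstore raid0 lvs)
lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 20)
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# run spdk_nvme_perf against the exported volume, then:
snapshot=$(rpc.py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
rpc.py bdev_lvol_resize "$lvol" 30
clone=$(rpc.py bdev_lvol_clone "$snapshot" MY_CLONE)
rpc.py bdev_lvol_inflate "$clone"
rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
rpc.py bdev_lvol_delete "$lvol"
rpc.py bdev_lvol_delete_lvstore -u "$lvs"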
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:16.432 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:16.432 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1093552' 00:08:16.432 killing process with pid 1093552 00:08:16.432 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 1093552 00:08:16.432 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 1093552 00:08:17.000 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:17.000 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:17.000 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:17.000 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:17.000 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:08:17.000 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:17.000 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:08:17.000 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:17.000 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:17.000 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:17.000 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:17.000 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:18.905 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:18.905 00:08:18.905 real 0m23.543s 00:08:18.905 user 1m17.793s 00:08:18.905 sys 0m7.060s 00:08:18.905 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:18.905 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:18.905 ************************************ 00:08:18.905 END TEST nvmf_lvol 00:08:18.905 ************************************ 00:08:19.164 18:18:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:19.164 18:18:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:19.164 18:18:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:19.164 18:18:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:19.164 ************************************ 00:08:19.164 START TEST nvmf_lvs_grow 00:08:19.164 ************************************ 00:08:19.164 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:19.164 * Looking for test storage... 
00:08:19.164 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:19.164 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:19.164 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:08:19.164 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:19.425 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:19.425 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:19.425 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:19.425 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:19.425 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:19.425 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:19.425 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:19.425 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:19.425 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:19.425 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:19.425 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:19.425 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:19.425 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:19.425 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:19.425 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:19.425 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:19.425 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:19.425 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:19.425 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:19.425 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:19.425 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:19.425 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:19.425 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:19.425 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:19.425 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:19.425 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:19.425 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:19.425 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:19.425 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:19.425 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:19.425 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:19.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.425 --rc genhtml_branch_coverage=1 00:08:19.425 --rc genhtml_function_coverage=1 00:08:19.425 --rc genhtml_legend=1 00:08:19.425 --rc geninfo_all_blocks=1 00:08:19.425 --rc geninfo_unexecuted_blocks=1 00:08:19.425 00:08:19.425 ' 00:08:19.425 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:19.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.425 --rc genhtml_branch_coverage=1 00:08:19.425 --rc genhtml_function_coverage=1 00:08:19.425 --rc genhtml_legend=1 00:08:19.425 --rc geninfo_all_blocks=1 00:08:19.425 --rc geninfo_unexecuted_blocks=1 00:08:19.425 00:08:19.425 ' 00:08:19.425 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:19.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.425 --rc genhtml_branch_coverage=1 00:08:19.425 --rc genhtml_function_coverage=1 00:08:19.425 --rc genhtml_legend=1 00:08:19.425 --rc geninfo_all_blocks=1 00:08:19.425 --rc geninfo_unexecuted_blocks=1 00:08:19.425 00:08:19.425 ' 00:08:19.425 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:19.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.425 --rc genhtml_branch_coverage=1 00:08:19.425 --rc genhtml_function_coverage=1 00:08:19.425 --rc genhtml_legend=1 00:08:19.425 --rc geninfo_all_blocks=1 00:08:19.425 --rc geninfo_unexecuted_blocks=1 00:08:19.425 00:08:19.425 ' 00:08:19.425 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:19.425 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:19.425 18:18:47 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:19.425 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:19.425 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:19.425 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:19.425 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:19.425 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:19.425 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:19.425 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:19.425 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:19.425 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:19.425 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:08:19.425 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:08:19.425 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:19.425 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:19.425 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:19.425 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:19.425 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:19.425 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:19.425 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:19.425 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:19.425 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:19.425 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.425 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.425 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.425 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:19.426 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.426 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:19.426 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:19.426 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:19.426 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:19.426 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:19.426 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:19.426 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:19.426 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:19.426 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:19.426 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:19.426 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:19.426 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:19.426 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:19.426 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:19.426 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:19.426 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:19.426 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:19.426 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:19.426 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:19.426 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:19.426 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:19.426 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:19.426 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:19.426 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:19.426 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:08:19.426 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:08:22.731 Found 0000:84:00.0 (0x8086 - 0x159b) 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:08:22.731 Found 0000:84:00.1 (0x8086 - 0x159b) 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:22.731 18:18:50 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:08:22.731 Found net devices under 0000:84:00.0: cvl_0_0 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:08:22.731 Found net devices under 0000:84:00.1: cvl_0_1 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # is_hw=yes 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:22.731 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:22.732 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:22.732 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:22.732 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:22.732 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:22.732 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:22.732 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:22.732 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:22.732 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:22.732 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:22.732 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:22.732 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:22.732 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:22.732 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:22.732 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:08:22.732 00:08:22.732 --- 10.0.0.2 ping statistics --- 00:08:22.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:22.732 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:08:22.732 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:22.732 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:22.732 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:08:22.732 00:08:22.732 --- 10.0.0.1 ping statistics --- 00:08:22.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:22.732 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:08:22.732 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:22.732 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0 00:08:22.732 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:22.732 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:22.732 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:22.732 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:22.732 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:22.732 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:22.732 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:22.732 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:22.732 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:22.732 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:22.732 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:22.732 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=1097678 00:08:22.732 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:22.732 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 1097678 00:08:22.732 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 1097678 ']' 00:08:22.732 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.732 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:22.732 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:22.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:22.732 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:22.732 18:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:22.732 [2024-10-08 18:18:50.956442] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
00:08:22.732 [2024-10-08 18:18:50.956618] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:22.732 [2024-10-08 18:18:51.110453] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.993 [2024-10-08 18:18:51.318066] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:22.993 [2024-10-08 18:18:51.318174] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:22.993 [2024-10-08 18:18:51.318213] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:22.993 [2024-10-08 18:18:51.318243] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:22.993 [2024-10-08 18:18:51.318269] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:22.993 [2024-10-08 18:18:51.319422] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.993 18:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:22.993 18:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:08:22.993 18:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:22.993 18:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:22.993 18:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:23.253 18:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:23.253 18:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:23.821 [2024-10-08 18:18:52.199400] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:23.821 18:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:23.821 18:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:23.821 18:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:23.821 18:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:23.821 ************************************ 00:08:23.821 START TEST lvs_grow_clean 00:08:23.821 ************************************ 00:08:23.821 18:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:08:23.821 18:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:23.821 18:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:23.821 18:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:23.821 18:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:23.821 18:18:52 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:23.821 18:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:23.821 18:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:23.821 18:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:23.821 18:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:24.391 18:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:24.391 18:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:24.651 18:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=a860f52b-0511-495b-89d7-b35717544fac 00:08:24.651 18:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:24.651 18:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a860f52b-0511-495b-89d7-b35717544fac 00:08:25.590 18:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:25.590 18:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:25.590 18:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a860f52b-0511-495b-89d7-b35717544fac lvol 150 00:08:25.851 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=d1a8146e-540e-45dc-b7f8-ef33d3635ddf 00:08:25.851 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:25.851 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:26.421 [2024-10-08 18:18:54.779991] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:26.421 [2024-10-08 18:18:54.780172] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:26.421 true 00:08:26.421 18:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:26.421 18:18:54 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a860f52b-0511-495b-89d7-b35717544fac 00:08:26.990 18:18:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:26.991 18:18:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:27.927 18:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d1a8146e-540e-45dc-b7f8-ef33d3635ddf 00:08:27.927 18:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:28.496 [2024-10-08 18:18:56.939237] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:28.496 18:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:29.065 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1098506 00:08:29.065 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:29.065 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:29.065 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1098506 /var/tmp/bdevperf.sock 00:08:29.065 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 1098506 ']' 00:08:29.065 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:29.066 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:29.066 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:29.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:29.066 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:29.066 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:29.066 [2024-10-08 18:18:57.425720] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
00:08:29.066 [2024-10-08 18:18:57.425808] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1098506 ] 00:08:29.066 [2024-10-08 18:18:57.563398] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.327 [2024-10-08 18:18:57.778878] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:08:29.587 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:29.587 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:08:29.587 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:30.527 Nvme0n1 00:08:30.527 18:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:30.895 [ 00:08:30.895 { 00:08:30.895 "name": "Nvme0n1", 00:08:30.895 "aliases": [ 00:08:30.895 "d1a8146e-540e-45dc-b7f8-ef33d3635ddf" 00:08:30.895 ], 00:08:30.895 "product_name": "NVMe disk", 00:08:30.895 "block_size": 4096, 00:08:30.895 "num_blocks": 38912, 00:08:30.895 "uuid": "d1a8146e-540e-45dc-b7f8-ef33d3635ddf", 00:08:30.895 "numa_id": 1, 00:08:30.895 "assigned_rate_limits": { 00:08:30.895 "rw_ios_per_sec": 0, 00:08:30.895 "rw_mbytes_per_sec": 0, 00:08:30.895 "r_mbytes_per_sec": 0, 00:08:30.895 "w_mbytes_per_sec": 0 00:08:30.895 }, 00:08:30.895 "claimed": false, 00:08:30.895 "zoned": false, 00:08:30.895 "supported_io_types": { 00:08:30.895 "read": true, 00:08:30.895 "write": true, 00:08:30.895 "unmap": true, 00:08:30.895 "flush": true, 00:08:30.895 "reset": true, 00:08:30.895 "nvme_admin": true, 00:08:30.895 "nvme_io": true, 00:08:30.895 "nvme_io_md": false, 00:08:30.895 "write_zeroes": true, 00:08:30.895 "zcopy": false, 00:08:30.895 "get_zone_info": false, 00:08:30.895 "zone_management": false, 00:08:30.895 "zone_append": false, 00:08:30.895 "compare": true, 00:08:30.895 "compare_and_write": true, 00:08:30.895 "abort": true, 00:08:30.895 "seek_hole": false, 00:08:30.895 "seek_data": false, 00:08:30.895 "copy": true, 00:08:30.895 "nvme_iov_md": false 00:08:30.895 }, 00:08:30.895 "memory_domains": [ 00:08:30.895 { 00:08:30.895 "dma_device_id": "system", 00:08:30.895 "dma_device_type": 1 00:08:30.895 } 00:08:30.895 ], 00:08:30.895 "driver_specific": { 00:08:30.895 "nvme": [ 00:08:30.895 { 00:08:30.895 "trid": { 00:08:30.895 "trtype": "TCP", 00:08:30.895 "adrfam": "IPv4", 00:08:30.895 "traddr": "10.0.0.2", 00:08:30.895 "trsvcid": "4420", 00:08:30.895 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:30.895 }, 00:08:30.895 "ctrlr_data": { 00:08:30.895 "cntlid": 1, 00:08:30.895 "vendor_id": "0x8086", 00:08:30.895 "model_number": "SPDK bdev Controller", 00:08:30.895 "serial_number": "SPDK0", 00:08:30.895 "firmware_revision": "25.01", 00:08:30.895 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:30.895 "oacs": { 00:08:30.895 "security": 0, 00:08:30.895 "format": 0, 00:08:30.895 "firmware": 0, 00:08:30.895 "ns_manage": 0 00:08:30.895 }, 00:08:30.895 "multi_ctrlr": true, 00:08:30.895 
"ana_reporting": false 00:08:30.895 }, 00:08:30.895 "vs": { 00:08:30.895 "nvme_version": "1.3" 00:08:30.895 }, 00:08:30.895 "ns_data": { 00:08:30.895 "id": 1, 00:08:30.895 "can_share": true 00:08:30.895 } 00:08:30.895 } 00:08:30.895 ], 00:08:30.895 "mp_policy": "active_passive" 00:08:30.895 } 00:08:30.895 } 00:08:30.895 ] 00:08:31.174 18:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1098774 00:08:31.174 18:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:31.174 18:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:31.174 Running I/O for 10 seconds... 00:08:32.557 Latency(us) 00:08:32.557 [2024-10-08T16:19:01.094Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:32.557 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:32.557 Nvme0n1 : 1.00 6478.00 25.30 0.00 0.00 0.00 0.00 0.00 00:08:32.557 [2024-10-08T16:19:01.094Z] =================================================================================================================== 00:08:32.557 [2024-10-08T16:19:01.094Z] Total : 6478.00 25.30 0.00 0.00 0.00 0.00 0.00 00:08:32.557 00:08:33.127 18:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a860f52b-0511-495b-89d7-b35717544fac 00:08:33.387 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:33.387 Nvme0n1 : 2.00 6414.00 25.05 0.00 0.00 0.00 0.00 0.00 00:08:33.387 [2024-10-08T16:19:01.924Z] =================================================================================================================== 00:08:33.387 [2024-10-08T16:19:01.924Z] Total : 6414.00 25.05 0.00 0.00 0.00 0.00 0.00 00:08:33.387 00:08:33.648 true 00:08:33.648 18:19:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:33.648 18:19:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a860f52b-0511-495b-89d7-b35717544fac 00:08:34.219 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:34.219 Nvme0n1 : 3.00 6392.67 24.97 0.00 0.00 0.00 0.00 0.00 00:08:34.219 [2024-10-08T16:19:02.756Z] =================================================================================================================== 00:08:34.219 [2024-10-08T16:19:02.756Z] Total : 6392.67 24.97 0.00 0.00 0.00 0.00 0.00 00:08:34.219 00:08:34.219 18:19:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:34.219 18:19:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:34.219 18:19:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1098774 00:08:35.600 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:35.600 Nvme0n1 : 4.00 6413.75 25.05 0.00 0.00 0.00 0.00 0.00 00:08:35.600 [2024-10-08T16:19:04.137Z] 
=================================================================================================================== 00:08:35.600 [2024-10-08T16:19:04.137Z] Total : 6413.75 25.05 0.00 0.00 0.00 0.00 0.00 00:08:35.600 00:08:36.170 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:36.170 Nvme0n1 : 5.00 6426.40 25.10 0.00 0.00 0.00 0.00 0.00 00:08:36.170 [2024-10-08T16:19:04.707Z] =================================================================================================================== 00:08:36.170 [2024-10-08T16:19:04.707Z] Total : 6426.40 25.10 0.00 0.00 0.00 0.00 0.00 00:08:36.170 00:08:37.551 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.551 Nvme0n1 : 6.00 6413.67 25.05 0.00 0.00 0.00 0.00 0.00 00:08:37.551 [2024-10-08T16:19:06.088Z] =================================================================================================================== 00:08:37.551 [2024-10-08T16:19:06.088Z] Total : 6413.67 25.05 0.00 0.00 0.00 0.00 0.00 00:08:37.551 00:08:38.493 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:38.493 Nvme0n1 : 7.00 6404.57 25.02 0.00 0.00 0.00 0.00 0.00 00:08:38.493 [2024-10-08T16:19:07.030Z] =================================================================================================================== 00:08:38.493 [2024-10-08T16:19:07.030Z] Total : 6404.57 25.02 0.00 0.00 0.00 0.00 0.00 00:08:38.493 00:08:39.433 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:39.433 Nvme0n1 : 8.00 6413.62 25.05 0.00 0.00 0.00 0.00 0.00 00:08:39.433 [2024-10-08T16:19:07.970Z] =================================================================================================================== 00:08:39.433 [2024-10-08T16:19:07.970Z] Total : 6413.62 25.05 0.00 0.00 0.00 0.00 0.00 00:08:39.433 00:08:40.376 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:40.376 Nvme0n1 : 9.00 6406.56 25.03 0.00 0.00 0.00 0.00 0.00 00:08:40.376 [2024-10-08T16:19:08.913Z] =================================================================================================================== 00:08:40.376 [2024-10-08T16:19:08.913Z] Total : 6406.56 25.03 0.00 0.00 0.00 0.00 0.00 00:08:40.376 00:08:41.314 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:41.314 Nvme0n1 : 10.00 6413.60 25.05 0.00 0.00 0.00 0.00 0.00 00:08:41.314 [2024-10-08T16:19:09.851Z] =================================================================================================================== 00:08:41.314 [2024-10-08T16:19:09.851Z] Total : 6413.60 25.05 0.00 0.00 0.00 0.00 0.00 00:08:41.314 00:08:41.314 00:08:41.314 Latency(us) 00:08:41.314 [2024-10-08T16:19:09.851Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:41.314 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:41.314 Nvme0n1 : 10.01 6418.34 25.07 0.00 0.00 19928.78 7767.23 39807.05 00:08:41.314 [2024-10-08T16:19:09.851Z] =================================================================================================================== 00:08:41.314 [2024-10-08T16:19:09.851Z] Total : 6418.34 25.07 0.00 0.00 19928.78 7767.23 39807.05 00:08:41.314 { 00:08:41.314 "results": [ 00:08:41.314 { 00:08:41.314 "job": "Nvme0n1", 00:08:41.314 "core_mask": "0x2", 00:08:41.314 "workload": "randwrite", 00:08:41.314 "status": "finished", 00:08:41.314 "queue_depth": 128, 00:08:41.314 "io_size": 4096, 00:08:41.314 "runtime": 
10.012563, 00:08:41.314 "iops": 6418.336643674552, 00:08:41.314 "mibps": 25.071627514353718, 00:08:41.314 "io_failed": 0, 00:08:41.314 "io_timeout": 0, 00:08:41.314 "avg_latency_us": 19928.783185655466, 00:08:41.314 "min_latency_us": 7767.22962962963, 00:08:41.314 "max_latency_us": 39807.05185185185 00:08:41.314 } 00:08:41.314 ], 00:08:41.314 "core_count": 1 00:08:41.314 } 00:08:41.314 18:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1098506 00:08:41.314 18:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 1098506 ']' 00:08:41.314 18:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 1098506 00:08:41.314 18:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:08:41.314 18:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:41.314 18:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1098506 00:08:41.314 18:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:41.314 18:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:41.314 18:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1098506' 00:08:41.314 killing process with pid 1098506 00:08:41.314 18:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 1098506 00:08:41.314 Received shutdown signal, test time was about 10.000000 seconds 00:08:41.314 00:08:41.314 Latency(us) 00:08:41.314 [2024-10-08T16:19:09.851Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:41.314 [2024-10-08T16:19:09.851Z] =================================================================================================================== 00:08:41.314 [2024-10-08T16:19:09.851Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:41.314 18:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 1098506 00:08:41.884 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:42.452 18:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:43.022 18:19:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a860f52b-0511-495b-89d7-b35717544fac 00:08:43.022 18:19:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:43.591 18:19:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:43.591 18:19:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:43.591 18:19:11 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:43.849 [2024-10-08 18:19:12.200579] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:43.849 18:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a860f52b-0511-495b-89d7-b35717544fac 00:08:43.849 18:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:08:43.849 18:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a860f52b-0511-495b-89d7-b35717544fac 00:08:43.849 18:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:43.849 18:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:43.850 18:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:43.850 18:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:43.850 18:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:43.850 18:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:43.850 18:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:43.850 18:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:43.850 18:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a860f52b-0511-495b-89d7-b35717544fac 00:08:44.418 request: 00:08:44.418 { 00:08:44.418 "uuid": "a860f52b-0511-495b-89d7-b35717544fac", 00:08:44.418 "method": "bdev_lvol_get_lvstores", 00:08:44.418 "req_id": 1 00:08:44.418 } 00:08:44.418 Got JSON-RPC error response 00:08:44.418 response: 00:08:44.418 { 00:08:44.418 "code": -19, 00:08:44.418 "message": "No such device" 00:08:44.418 } 00:08:44.418 18:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:08:44.418 18:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:44.418 18:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:44.418 18:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:44.418 18:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:44.678 aio_bdev 00:08:44.678 18:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev d1a8146e-540e-45dc-b7f8-ef33d3635ddf 00:08:44.678 18:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=d1a8146e-540e-45dc-b7f8-ef33d3635ddf 00:08:44.678 18:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:44.678 18:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:08:44.678 18:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:44.678 18:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:44.678 18:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:45.246 18:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d1a8146e-540e-45dc-b7f8-ef33d3635ddf -t 2000 00:08:45.817 [ 00:08:45.817 { 00:08:45.817 "name": "d1a8146e-540e-45dc-b7f8-ef33d3635ddf", 00:08:45.817 "aliases": [ 00:08:45.817 "lvs/lvol" 00:08:45.817 ], 00:08:45.817 "product_name": "Logical Volume", 00:08:45.817 "block_size": 4096, 00:08:45.817 "num_blocks": 38912, 00:08:45.817 "uuid": "d1a8146e-540e-45dc-b7f8-ef33d3635ddf", 00:08:45.817 "assigned_rate_limits": { 00:08:45.817 "rw_ios_per_sec": 0, 00:08:45.817 "rw_mbytes_per_sec": 0, 00:08:45.817 "r_mbytes_per_sec": 0, 00:08:45.817 "w_mbytes_per_sec": 0 00:08:45.817 }, 00:08:45.817 "claimed": false, 00:08:45.817 "zoned": false, 00:08:45.817 "supported_io_types": { 00:08:45.817 "read": true, 00:08:45.817 "write": true, 00:08:45.817 "unmap": true, 00:08:45.817 "flush": false, 00:08:45.817 "reset": true, 00:08:45.817 "nvme_admin": false, 00:08:45.817 "nvme_io": false, 00:08:45.817 "nvme_io_md": false, 00:08:45.817 "write_zeroes": true, 00:08:45.817 "zcopy": false, 00:08:45.817 "get_zone_info": false, 00:08:45.817 "zone_management": false, 00:08:45.817 "zone_append": false, 00:08:45.817 "compare": false, 00:08:45.817 "compare_and_write": false, 00:08:45.817 "abort": false, 00:08:45.817 "seek_hole": true, 00:08:45.817 "seek_data": true, 00:08:45.817 "copy": false, 00:08:45.817 "nvme_iov_md": false 00:08:45.817 }, 00:08:45.817 "driver_specific": { 00:08:45.817 "lvol": { 00:08:45.817 "lvol_store_uuid": "a860f52b-0511-495b-89d7-b35717544fac", 00:08:45.817 "base_bdev": "aio_bdev", 00:08:45.817 "thin_provision": false, 00:08:45.817 "num_allocated_clusters": 38, 00:08:45.817 "snapshot": false, 00:08:45.817 "clone": false, 00:08:45.817 "esnap_clone": false 00:08:45.817 } 00:08:45.817 } 00:08:45.817 } 00:08:45.817 ] 00:08:45.817 18:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:08:45.817 18:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:45.817 18:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a860f52b-0511-495b-89d7-b35717544fac 00:08:46.386 18:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:46.646 18:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a860f52b-0511-495b-89d7-b35717544fac 00:08:46.646 18:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:47.217 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:47.217 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d1a8146e-540e-45dc-b7f8-ef33d3635ddf 00:08:47.785 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a860f52b-0511-495b-89d7-b35717544fac 00:08:48.355 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:49.295 18:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:49.295 00:08:49.295 real 0m25.273s 00:08:49.295 user 0m25.301s 00:08:49.295 sys 0m2.948s 00:08:49.295 18:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:49.295 18:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:49.295 ************************************ 00:08:49.295 END TEST lvs_grow_clean 00:08:49.295 ************************************ 00:08:49.295 18:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:49.295 18:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:49.295 18:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:49.295 18:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:49.295 ************************************ 00:08:49.295 START TEST lvs_grow_dirty 00:08:49.295 ************************************ 00:08:49.295 18:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:08:49.295 18:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:49.295 18:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:49.295 18:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:49.295 18:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:49.295 18:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:49.295 18:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:49.295 18:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:49.295 18:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:49.295 18:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:49.865 18:19:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:49.865 18:19:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:50.435 18:19:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=e4d1da09-21f3-48bc-81e2-d1f5c7e9119a 00:08:50.435 18:19:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:50.435 18:19:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e4d1da09-21f3-48bc-81e2-d1f5c7e9119a 00:08:51.375 18:19:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:51.375 18:19:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:51.375 18:19:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e4d1da09-21f3-48bc-81e2-d1f5c7e9119a lvol 150 00:08:51.944 18:19:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=1347e8ef-0f35-4c3e-8e59-1f1e205e406d 00:08:51.944 18:19:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:51.944 18:19:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:52.203 [2024-10-08 18:19:20.624840] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:52.203 [2024-10-08 18:19:20.625006] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:52.203 true 00:08:52.203 18:19:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e4d1da09-21f3-48bc-81e2-d1f5c7e9119a 00:08:52.203 18:19:20 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:52.772 18:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:52.772 18:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:53.352 18:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1347e8ef-0f35-4c3e-8e59-1f1e205e406d 00:08:53.922 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:54.861 [2024-10-08 18:19:23.129509] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:54.861 18:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:55.119 18:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1101514 00:08:55.119 18:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:55.119 18:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1101514 /var/tmp/bdevperf.sock 00:08:55.119 18:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1101514 ']' 00:08:55.119 18:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:55.119 18:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:55.119 18:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:55.119 18:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:55.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:55.119 18:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:55.119 18:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:55.379 [2024-10-08 18:19:23.699715] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
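Read together, the trace up to this point boils down to a short RPC sequence for growing an AIO-backed lvstore: create an AIO bdev over a sparse file, put an lvstore and an lvol on it, enlarge the backing file, rescan the AIO bdev so it reports the new block count, and finally (at the @60 step further down in this run) grow the lvstore so it claims the new clusters. The sketch below is a minimal reconstruction of that flow using the same rpc.py calls that appear in the trace; the file path, bdev/lvstore names and sizes are taken from this run and are illustrative only, and it assumes an SPDK target is already listening on the default RPC socket.

    # Minimal sketch of the aio-backed lvstore grow flow (names/sizes illustrative).
    RPC=./scripts/rpc.py                 # assumes a running SPDK target on the default socket
    AIO_FILE=/tmp/aio_bdev_file

    truncate -s 200M "$AIO_FILE"                                 # initial 200 MiB backing file
    "$RPC" bdev_aio_create "$AIO_FILE" aio_bdev 4096             # AIO bdev with 4 KiB blocks
    lvs=$("$RPC" bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)         # prints the new lvstore UUID
    "$RPC" bdev_lvol_create -u "$lvs" lvol 150                   # 150 MiB logical volume

    truncate -s 400M "$AIO_FILE"                                 # enlarge the backing file
    "$RPC" bdev_aio_rescan aio_bdev                              # pick up the new block count
    "$RPC" bdev_lvol_grow_lvstore -u "$lvs"                      # claim the newly added clusters
    "$RPC" bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'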
00:08:55.379 [2024-10-08 18:19:23.699806] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1101514 ] 00:08:55.379 [2024-10-08 18:19:23.808339] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.639 [2024-10-08 18:19:24.029303] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:08:56.576 18:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:56.576 18:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:56.576 18:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:56.836 Nvme0n1 00:08:56.836 18:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:57.406 [ 00:08:57.406 { 00:08:57.406 "name": "Nvme0n1", 00:08:57.406 "aliases": [ 00:08:57.406 "1347e8ef-0f35-4c3e-8e59-1f1e205e406d" 00:08:57.406 ], 00:08:57.406 "product_name": "NVMe disk", 00:08:57.406 "block_size": 4096, 00:08:57.406 "num_blocks": 38912, 00:08:57.406 "uuid": "1347e8ef-0f35-4c3e-8e59-1f1e205e406d", 00:08:57.406 "numa_id": 1, 00:08:57.406 "assigned_rate_limits": { 00:08:57.406 "rw_ios_per_sec": 0, 00:08:57.406 "rw_mbytes_per_sec": 0, 00:08:57.406 "r_mbytes_per_sec": 0, 00:08:57.406 "w_mbytes_per_sec": 0 00:08:57.406 }, 00:08:57.406 "claimed": false, 00:08:57.406 "zoned": false, 00:08:57.406 "supported_io_types": { 00:08:57.406 "read": true, 00:08:57.406 "write": true, 00:08:57.406 "unmap": true, 00:08:57.406 "flush": true, 00:08:57.406 "reset": true, 00:08:57.406 "nvme_admin": true, 00:08:57.406 "nvme_io": true, 00:08:57.406 "nvme_io_md": false, 00:08:57.406 "write_zeroes": true, 00:08:57.406 "zcopy": false, 00:08:57.406 "get_zone_info": false, 00:08:57.406 "zone_management": false, 00:08:57.406 "zone_append": false, 00:08:57.406 "compare": true, 00:08:57.406 "compare_and_write": true, 00:08:57.406 "abort": true, 00:08:57.406 "seek_hole": false, 00:08:57.406 "seek_data": false, 00:08:57.406 "copy": true, 00:08:57.406 "nvme_iov_md": false 00:08:57.406 }, 00:08:57.406 "memory_domains": [ 00:08:57.406 { 00:08:57.406 "dma_device_id": "system", 00:08:57.406 "dma_device_type": 1 00:08:57.406 } 00:08:57.406 ], 00:08:57.406 "driver_specific": { 00:08:57.406 "nvme": [ 00:08:57.406 { 00:08:57.406 "trid": { 00:08:57.406 "trtype": "TCP", 00:08:57.406 "adrfam": "IPv4", 00:08:57.406 "traddr": "10.0.0.2", 00:08:57.406 "trsvcid": "4420", 00:08:57.406 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:57.406 }, 00:08:57.406 "ctrlr_data": { 00:08:57.406 "cntlid": 1, 00:08:57.406 "vendor_id": "0x8086", 00:08:57.406 "model_number": "SPDK bdev Controller", 00:08:57.406 "serial_number": "SPDK0", 00:08:57.406 "firmware_revision": "25.01", 00:08:57.406 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:57.406 "oacs": { 00:08:57.406 "security": 0, 00:08:57.406 "format": 0, 00:08:57.406 "firmware": 0, 00:08:57.406 "ns_manage": 0 00:08:57.406 }, 00:08:57.406 "multi_ctrlr": true, 00:08:57.406 
"ana_reporting": false 00:08:57.406 }, 00:08:57.406 "vs": { 00:08:57.406 "nvme_version": "1.3" 00:08:57.406 }, 00:08:57.406 "ns_data": { 00:08:57.406 "id": 1, 00:08:57.406 "can_share": true 00:08:57.406 } 00:08:57.406 } 00:08:57.406 ], 00:08:57.406 "mp_policy": "active_passive" 00:08:57.406 } 00:08:57.406 } 00:08:57.406 ] 00:08:57.406 18:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1101777 00:08:57.406 18:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:57.406 18:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:57.406 Running I/O for 10 seconds... 00:08:58.786 Latency(us) 00:08:58.786 [2024-10-08T16:19:27.323Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:58.786 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:58.786 Nvme0n1 : 1.00 8067.00 31.51 0.00 0.00 0.00 0.00 0.00 00:08:58.786 [2024-10-08T16:19:27.323Z] =================================================================================================================== 00:08:58.786 [2024-10-08T16:19:27.323Z] Total : 8067.00 31.51 0.00 0.00 0.00 0.00 0.00 00:08:58.786 00:08:59.355 18:19:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e4d1da09-21f3-48bc-81e2-d1f5c7e9119a 00:08:59.616 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:59.616 Nvme0n1 : 2.00 8161.00 31.88 0.00 0.00 0.00 0.00 0.00 00:08:59.616 [2024-10-08T16:19:28.153Z] =================================================================================================================== 00:08:59.616 [2024-10-08T16:19:28.153Z] Total : 8161.00 31.88 0.00 0.00 0.00 0.00 0.00 00:08:59.616 00:08:59.616 true 00:08:59.616 18:19:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e4d1da09-21f3-48bc-81e2-d1f5c7e9119a 00:08:59.616 18:19:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:00.186 18:19:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:00.186 18:19:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:00.186 18:19:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1101777 00:09:00.445 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:00.445 Nvme0n1 : 3.00 8234.67 32.17 0.00 0.00 0.00 0.00 0.00 00:09:00.445 [2024-10-08T16:19:28.982Z] =================================================================================================================== 00:09:00.445 [2024-10-08T16:19:28.982Z] Total : 8234.67 32.17 0.00 0.00 0.00 0.00 0.00 00:09:00.445 00:09:01.896 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:01.896 Nvme0n1 : 4.00 8049.25 31.44 0.00 0.00 0.00 0.00 0.00 00:09:01.896 [2024-10-08T16:19:30.433Z] 
=================================================================================================================== 00:09:01.896 [2024-10-08T16:19:30.433Z] Total : 8049.25 31.44 0.00 0.00 0.00 0.00 0.00 00:09:01.896 00:09:02.494 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:02.494 Nvme0n1 : 5.00 8001.80 31.26 0.00 0.00 0.00 0.00 0.00 00:09:02.494 [2024-10-08T16:19:31.031Z] =================================================================================================================== 00:09:02.494 [2024-10-08T16:19:31.031Z] Total : 8001.80 31.26 0.00 0.00 0.00 0.00 0.00 00:09:02.494 00:09:03.433 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:03.433 Nvme0n1 : 6.00 8128.67 31.75 0.00 0.00 0.00 0.00 0.00 00:09:03.433 [2024-10-08T16:19:31.970Z] =================================================================================================================== 00:09:03.433 [2024-10-08T16:19:31.970Z] Total : 8128.67 31.75 0.00 0.00 0.00 0.00 0.00 00:09:03.433 00:09:04.813 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:04.813 Nvme0n1 : 7.00 8192.14 32.00 0.00 0.00 0.00 0.00 0.00 00:09:04.813 [2024-10-08T16:19:33.350Z] =================================================================================================================== 00:09:04.813 [2024-10-08T16:19:33.350Z] Total : 8192.14 32.00 0.00 0.00 0.00 0.00 0.00 00:09:04.813 00:09:05.751 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:05.751 Nvme0n1 : 8.00 8255.62 32.25 0.00 0.00 0.00 0.00 0.00 00:09:05.751 [2024-10-08T16:19:34.288Z] =================================================================================================================== 00:09:05.751 [2024-10-08T16:19:34.288Z] Total : 8255.62 32.25 0.00 0.00 0.00 0.00 0.00 00:09:05.751 00:09:06.691 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:06.691 Nvme0n1 : 9.00 8058.00 31.48 0.00 0.00 0.00 0.00 0.00 00:09:06.691 [2024-10-08T16:19:35.228Z] =================================================================================================================== 00:09:06.691 [2024-10-08T16:19:35.228Z] Total : 8058.00 31.48 0.00 0.00 0.00 0.00 0.00 00:09:06.691 00:09:07.629 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:07.629 Nvme0n1 : 10.00 8116.70 31.71 0.00 0.00 0.00 0.00 0.00 00:09:07.629 [2024-10-08T16:19:36.166Z] =================================================================================================================== 00:09:07.629 [2024-10-08T16:19:36.166Z] Total : 8116.70 31.71 0.00 0.00 0.00 0.00 0.00 00:09:07.629 00:09:07.629 00:09:07.629 Latency(us) 00:09:07.629 [2024-10-08T16:19:36.166Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:07.629 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:07.629 Nvme0n1 : 10.01 8112.84 31.69 0.00 0.00 15762.01 4636.07 38253.61 00:09:07.629 [2024-10-08T16:19:36.166Z] =================================================================================================================== 00:09:07.629 [2024-10-08T16:19:36.166Z] Total : 8112.84 31.69 0.00 0.00 15762.01 4636.07 38253.61 00:09:07.629 { 00:09:07.629 "results": [ 00:09:07.629 { 00:09:07.629 "job": "Nvme0n1", 00:09:07.629 "core_mask": "0x2", 00:09:07.629 "workload": "randwrite", 00:09:07.629 "status": "finished", 00:09:07.629 "queue_depth": 128, 00:09:07.629 "io_size": 4096, 00:09:07.629 "runtime": 
10.012768, 00:09:07.629 "iops": 8112.84152394223, 00:09:07.629 "mibps": 31.690787202899337, 00:09:07.629 "io_failed": 0, 00:09:07.629 "io_timeout": 0, 00:09:07.629 "avg_latency_us": 15762.006597655365, 00:09:07.629 "min_latency_us": 4636.065185185185, 00:09:07.629 "max_latency_us": 38253.60592592593 00:09:07.629 } 00:09:07.629 ], 00:09:07.629 "core_count": 1 00:09:07.629 } 00:09:07.629 18:19:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1101514 00:09:07.629 18:19:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 1101514 ']' 00:09:07.629 18:19:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 1101514 00:09:07.629 18:19:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:09:07.629 18:19:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:07.629 18:19:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1101514 00:09:07.629 18:19:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:07.629 18:19:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:07.629 18:19:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1101514' 00:09:07.629 killing process with pid 1101514 00:09:07.629 18:19:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 1101514 00:09:07.629 Received shutdown signal, test time was about 10.000000 seconds 00:09:07.629 00:09:07.629 Latency(us) 00:09:07.629 [2024-10-08T16:19:36.166Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:07.629 [2024-10-08T16:19:36.166Z] =================================================================================================================== 00:09:07.629 [2024-10-08T16:19:36.166Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:07.629 18:19:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 1101514 00:09:08.196 18:19:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:08.460 18:19:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:09.398 18:19:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e4d1da09-21f3-48bc-81e2-d1f5c7e9119a 00:09:09.398 18:19:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:09.659 18:19:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:09.659 18:19:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:09.659 18:19:37 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1097678 00:09:09.659 18:19:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1097678 00:09:09.659 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1097678 Killed "${NVMF_APP[@]}" "$@" 00:09:09.659 18:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:09.659 18:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:09.659 18:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:09.659 18:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:09.659 18:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:09.659 18:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:09.659 18:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=1103248 00:09:09.659 18:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 1103248 00:09:09.659 18:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1103248 ']' 00:09:09.659 18:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:09.659 18:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:09.659 18:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:09.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:09.659 18:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:09.659 18:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:09.659 [2024-10-08 18:19:38.097086] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:09:09.659 [2024-10-08 18:19:38.097198] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:09.920 [2024-10-08 18:19:38.224837] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.920 [2024-10-08 18:19:38.446272] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:09.920 [2024-10-08 18:19:38.446390] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:09.920 [2024-10-08 18:19:38.446426] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:09.920 [2024-10-08 18:19:38.446458] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
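The "dirty" half of the test is what the surrounding lines show: the original nvmf target is killed with SIGKILL while the lvstore metadata is still dirty, a fresh target is started, and re-creating the AIO bdev triggers blobstore recovery (the "Performing recovery on blobstore" / "Recover: blob" notices above). A condensed sketch of that crash-and-recover step, built from the same RPCs seen in the trace; the PID and UUID variables are placeholders carried over from the earlier sketch, not an exact reproduction of the test harness (which also wraps the target in a network namespace and waits for the RPC socket).

    # Sketch: unclean shutdown followed by recovery of the same lvstore (illustrative).
    kill -9 "$nvmf_pid"                                    # simulate a crash with dirty metadata
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &           # start a fresh target (flags as traced)
    nvmf_pid=$!
    # (the harness waits for the RPC socket to come up before issuing RPCs)

    "$RPC" bdev_aio_create "$AIO_FILE" aio_bdev 4096       # re-attach; blobstore recovery runs here
    "$RPC" bdev_wait_for_examine                           # let vbdev_lvol finish claiming the lvols
    "$RPC" bdev_get_bdevs -b "$lvol" -t 2000               # the lvol should reappear intact
    "$RPC" bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'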
00:09:09.920 [2024-10-08 18:19:38.446484] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:09.920 [2024-10-08 18:19:38.447869] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.180 18:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:10.180 18:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:10.180 18:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:10.180 18:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:10.180 18:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:10.180 18:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:10.180 18:19:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:10.747 [2024-10-08 18:19:39.050735] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:10.747 [2024-10-08 18:19:39.050882] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:10.747 [2024-10-08 18:19:39.050942] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:10.747 18:19:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:10.747 18:19:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 1347e8ef-0f35-4c3e-8e59-1f1e205e406d 00:09:10.747 18:19:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=1347e8ef-0f35-4c3e-8e59-1f1e205e406d 00:09:10.747 18:19:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:10.747 18:19:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:10.747 18:19:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:10.747 18:19:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:10.747 18:19:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:11.315 18:19:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1347e8ef-0f35-4c3e-8e59-1f1e205e406d -t 2000 00:09:11.881 [ 00:09:11.881 { 00:09:11.881 "name": "1347e8ef-0f35-4c3e-8e59-1f1e205e406d", 00:09:11.881 "aliases": [ 00:09:11.881 "lvs/lvol" 00:09:11.881 ], 00:09:11.881 "product_name": "Logical Volume", 00:09:11.881 "block_size": 4096, 00:09:11.881 "num_blocks": 38912, 00:09:11.881 "uuid": "1347e8ef-0f35-4c3e-8e59-1f1e205e406d", 00:09:11.881 "assigned_rate_limits": { 00:09:11.881 "rw_ios_per_sec": 0, 00:09:11.881 "rw_mbytes_per_sec": 0, 
00:09:11.881 "r_mbytes_per_sec": 0, 00:09:11.881 "w_mbytes_per_sec": 0 00:09:11.881 }, 00:09:11.881 "claimed": false, 00:09:11.881 "zoned": false, 00:09:11.881 "supported_io_types": { 00:09:11.881 "read": true, 00:09:11.881 "write": true, 00:09:11.882 "unmap": true, 00:09:11.882 "flush": false, 00:09:11.882 "reset": true, 00:09:11.882 "nvme_admin": false, 00:09:11.882 "nvme_io": false, 00:09:11.882 "nvme_io_md": false, 00:09:11.882 "write_zeroes": true, 00:09:11.882 "zcopy": false, 00:09:11.882 "get_zone_info": false, 00:09:11.882 "zone_management": false, 00:09:11.882 "zone_append": false, 00:09:11.882 "compare": false, 00:09:11.882 "compare_and_write": false, 00:09:11.882 "abort": false, 00:09:11.882 "seek_hole": true, 00:09:11.882 "seek_data": true, 00:09:11.882 "copy": false, 00:09:11.882 "nvme_iov_md": false 00:09:11.882 }, 00:09:11.882 "driver_specific": { 00:09:11.882 "lvol": { 00:09:11.882 "lvol_store_uuid": "e4d1da09-21f3-48bc-81e2-d1f5c7e9119a", 00:09:11.882 "base_bdev": "aio_bdev", 00:09:11.882 "thin_provision": false, 00:09:11.882 "num_allocated_clusters": 38, 00:09:11.882 "snapshot": false, 00:09:11.882 "clone": false, 00:09:11.882 "esnap_clone": false 00:09:11.882 } 00:09:11.882 } 00:09:11.882 } 00:09:11.882 ] 00:09:11.882 18:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:11.882 18:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e4d1da09-21f3-48bc-81e2-d1f5c7e9119a 00:09:11.882 18:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:12.140 18:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:12.141 18:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e4d1da09-21f3-48bc-81e2-d1f5c7e9119a 00:09:12.141 18:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:12.708 18:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:12.708 18:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:13.275 [2024-10-08 18:19:41.559927] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:13.276 18:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e4d1da09-21f3-48bc-81e2-d1f5c7e9119a 00:09:13.276 18:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:09:13.276 18:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e4d1da09-21f3-48bc-81e2-d1f5c7e9119a 00:09:13.276 18:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:13.276 18:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:13.276 18:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:13.276 18:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:13.276 18:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:13.276 18:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:13.276 18:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:13.276 18:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:13.276 18:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e4d1da09-21f3-48bc-81e2-d1f5c7e9119a 00:09:13.844 request: 00:09:13.844 { 00:09:13.844 "uuid": "e4d1da09-21f3-48bc-81e2-d1f5c7e9119a", 00:09:13.844 "method": "bdev_lvol_get_lvstores", 00:09:13.844 "req_id": 1 00:09:13.844 } 00:09:13.844 Got JSON-RPC error response 00:09:13.844 response: 00:09:13.844 { 00:09:13.844 "code": -19, 00:09:13.844 "message": "No such device" 00:09:13.844 } 00:09:13.844 18:19:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:09:13.844 18:19:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:13.844 18:19:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:13.844 18:19:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:13.844 18:19:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:14.411 aio_bdev 00:09:14.411 18:19:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 1347e8ef-0f35-4c3e-8e59-1f1e205e406d 00:09:14.411 18:19:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=1347e8ef-0f35-4c3e-8e59-1f1e205e406d 00:09:14.411 18:19:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:14.411 18:19:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:14.411 18:19:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:14.411 18:19:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:14.411 18:19:42 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:14.979 18:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1347e8ef-0f35-4c3e-8e59-1f1e205e406d -t 2000 00:09:15.236 [ 00:09:15.236 { 00:09:15.236 "name": "1347e8ef-0f35-4c3e-8e59-1f1e205e406d", 00:09:15.236 "aliases": [ 00:09:15.236 "lvs/lvol" 00:09:15.236 ], 00:09:15.236 "product_name": "Logical Volume", 00:09:15.236 "block_size": 4096, 00:09:15.236 "num_blocks": 38912, 00:09:15.236 "uuid": "1347e8ef-0f35-4c3e-8e59-1f1e205e406d", 00:09:15.236 "assigned_rate_limits": { 00:09:15.236 "rw_ios_per_sec": 0, 00:09:15.236 "rw_mbytes_per_sec": 0, 00:09:15.236 "r_mbytes_per_sec": 0, 00:09:15.236 "w_mbytes_per_sec": 0 00:09:15.236 }, 00:09:15.236 "claimed": false, 00:09:15.236 "zoned": false, 00:09:15.236 "supported_io_types": { 00:09:15.236 "read": true, 00:09:15.236 "write": true, 00:09:15.236 "unmap": true, 00:09:15.236 "flush": false, 00:09:15.236 "reset": true, 00:09:15.236 "nvme_admin": false, 00:09:15.236 "nvme_io": false, 00:09:15.236 "nvme_io_md": false, 00:09:15.236 "write_zeroes": true, 00:09:15.236 "zcopy": false, 00:09:15.236 "get_zone_info": false, 00:09:15.236 "zone_management": false, 00:09:15.236 "zone_append": false, 00:09:15.236 "compare": false, 00:09:15.236 "compare_and_write": false, 00:09:15.236 "abort": false, 00:09:15.236 "seek_hole": true, 00:09:15.236 "seek_data": true, 00:09:15.236 "copy": false, 00:09:15.236 "nvme_iov_md": false 00:09:15.236 }, 00:09:15.236 "driver_specific": { 00:09:15.236 "lvol": { 00:09:15.236 "lvol_store_uuid": "e4d1da09-21f3-48bc-81e2-d1f5c7e9119a", 00:09:15.236 "base_bdev": "aio_bdev", 00:09:15.236 "thin_provision": false, 00:09:15.236 "num_allocated_clusters": 38, 00:09:15.236 "snapshot": false, 00:09:15.236 "clone": false, 00:09:15.236 "esnap_clone": false 00:09:15.236 } 00:09:15.236 } 00:09:15.236 } 00:09:15.236 ] 00:09:15.236 18:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:15.236 18:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e4d1da09-21f3-48bc-81e2-d1f5c7e9119a 00:09:15.236 18:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:15.803 18:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:15.803 18:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e4d1da09-21f3-48bc-81e2-d1f5c7e9119a 00:09:15.803 18:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:16.062 18:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:16.062 18:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1347e8ef-0f35-4c3e-8e59-1f1e205e406d 00:09:16.630 18:19:45 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e4d1da09-21f3-48bc-81e2-d1f5c7e9119a 00:09:17.197 18:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:17.457 18:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:17.457 00:09:17.457 real 0m28.271s 00:09:17.457 user 1m10.872s 00:09:17.457 sys 0m6.123s 00:09:17.457 18:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:17.457 18:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:17.457 ************************************ 00:09:17.457 END TEST lvs_grow_dirty 00:09:17.457 ************************************ 00:09:17.457 18:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:17.457 18:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:09:17.457 18:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:09:17.457 18:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:09:17.457 18:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:17.457 18:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:09:17.457 18:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:09:17.457 18:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:09:17.457 18:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:17.457 nvmf_trace.0 00:09:17.716 18:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:09:17.716 18:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:17.716 18:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:17.716 18:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:17.716 18:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:17.716 18:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:17.716 18:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:17.716 18:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:17.716 rmmod nvme_tcp 00:09:17.716 rmmod nvme_fabrics 00:09:17.716 rmmod nvme_keyring 00:09:17.716 18:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:17.716 18:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:17.716 18:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:17.716 
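What runs from here on is the shared nvmftestfini teardown: archive the trace shm file, unload the NVMe/TCP kernel modules, kill the nvmf target by PID, strip the SPDK-added iptables rules and flush the address on the test port. Condensed into a sketch from the commands visible in the trace (the PID and interface name are specific to this run and only illustrative):

    # Sketch of the teardown traced above (names illustrative).
    sync
    modprobe -v -r nvme-tcp                                # trace shows nvme_fabrics/nvme_keyring dropped with it
    modprobe -v -r nvme-fabrics
    kill "$nvmf_pid" && wait "$nvmf_pid"                   # stop the nvmf target
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # remove only the SPDK_NVMF rules
    ip -4 addr flush cvl_0_1                               # clear the address on the test port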
18:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 1103248 ']' 00:09:17.716 18:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 1103248 00:09:17.716 18:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 1103248 ']' 00:09:17.716 18:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 1103248 00:09:17.716 18:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:09:17.716 18:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:17.716 18:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1103248 00:09:17.716 18:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:17.716 18:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:17.716 18:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1103248' 00:09:17.716 killing process with pid 1103248 00:09:17.716 18:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 1103248 00:09:17.716 18:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 1103248 00:09:18.287 18:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:18.287 18:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:18.287 18:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:18.287 18:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:18.287 18:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:09:18.287 18:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:18.287 18:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:09:18.287 18:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:18.287 18:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:18.287 18:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:18.287 18:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:18.287 18:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:20.196 18:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:20.196 00:09:20.196 real 1m1.102s 00:09:20.196 user 1m46.676s 00:09:20.196 sys 0m12.096s 00:09:20.196 18:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:20.196 18:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:20.196 ************************************ 00:09:20.196 END TEST nvmf_lvs_grow 00:09:20.196 ************************************ 00:09:20.196 18:19:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:20.196 18:19:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:20.196 18:19:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:20.196 18:19:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:20.196 ************************************ 00:09:20.196 START TEST nvmf_bdev_io_wait 00:09:20.196 ************************************ 00:09:20.196 18:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:20.456 * Looking for test storage... 00:09:20.456 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:20.456 18:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:20.456 18:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:09:20.456 18:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:20.456 18:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:20.456 18:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:20.456 18:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:20.456 18:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:20.456 18:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:20.456 18:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:20.456 18:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:20.456 18:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:20.456 18:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:20.456 18:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:20.456 18:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:20.456 18:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:20.456 18:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:20.456 18:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:09:20.456 18:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:20.456 18:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:20.456 18:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:20.456 18:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:20.456 18:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:20.456 18:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:20.456 18:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:20.456 18:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:20.456 18:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:20.456 18:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:20.456 18:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:20.715 18:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:20.715 18:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:20.715 18:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:20.715 18:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:20.715 18:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:20.715 18:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:20.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.715 --rc genhtml_branch_coverage=1 00:09:20.715 --rc genhtml_function_coverage=1 00:09:20.715 --rc genhtml_legend=1 00:09:20.715 --rc geninfo_all_blocks=1 00:09:20.715 --rc geninfo_unexecuted_blocks=1 00:09:20.715 00:09:20.715 ' 00:09:20.715 18:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:20.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.715 --rc genhtml_branch_coverage=1 00:09:20.715 --rc genhtml_function_coverage=1 00:09:20.715 --rc genhtml_legend=1 00:09:20.715 --rc geninfo_all_blocks=1 00:09:20.715 --rc geninfo_unexecuted_blocks=1 00:09:20.716 00:09:20.716 ' 00:09:20.716 18:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:20.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.716 --rc genhtml_branch_coverage=1 00:09:20.716 --rc genhtml_function_coverage=1 00:09:20.716 --rc genhtml_legend=1 00:09:20.716 --rc geninfo_all_blocks=1 00:09:20.716 --rc geninfo_unexecuted_blocks=1 00:09:20.716 00:09:20.716 ' 00:09:20.716 18:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:20.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.716 --rc genhtml_branch_coverage=1 00:09:20.716 --rc genhtml_function_coverage=1 00:09:20.716 --rc genhtml_legend=1 00:09:20.716 --rc geninfo_all_blocks=1 00:09:20.716 --rc geninfo_unexecuted_blocks=1 00:09:20.716 00:09:20.716 ' 00:09:20.716 18:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:20.716 18:19:48 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:20.716 18:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:20.716 18:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:20.716 18:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:20.716 18:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:20.716 18:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:20.716 18:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:20.716 18:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:20.716 18:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:20.716 18:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:20.716 18:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:20.716 18:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:09:20.716 18:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:09:20.716 18:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:20.716 18:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:20.716 18:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:20.716 18:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:20.716 18:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:20.716 18:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:20.716 18:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:20.716 18:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:20.716 18:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:20.716 18:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.716 18:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.716 18:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.716 18:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:20.716 18:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.716 18:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:20.716 18:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:20.716 18:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:20.716 18:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:20.716 18:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:20.716 18:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:20.716 18:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:20.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:20.716 18:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:20.716 18:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:20.716 18:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:20.716 18:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:20.716 18:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:09:20.716 18:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:20.716 18:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:20.716 18:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:20.716 18:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:20.716 18:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:20.716 18:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:20.716 18:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:20.716 18:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:20.716 18:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:20.716 18:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:20.716 18:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:20.716 18:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:09:20.716 18:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:09:24.010 Found 0000:84:00.0 (0x8086 - 0x159b) 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:09:24.010 Found 0000:84:00.1 (0x8086 - 0x159b) 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:24.010 18:19:51 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:09:24.010 Found net devices under 0000:84:00.0: cvl_0_0 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:09:24.010 Found net devices under 0000:84:00.1: cvl_0_1 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:24.010 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:24.011 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:24.011 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:24.011 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:24.011 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:24.011 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:24.011 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:24.011 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:24.011 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:24.011 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:24.011 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:24.011 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:24.011 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:24.011 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:24.011 18:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:24.011 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:24.011 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:24.011 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:24.011 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:24.011 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:24.011 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:09:24.011 00:09:24.011 --- 10.0.0.2 ping statistics --- 00:09:24.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.011 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:09:24.011 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:24.011 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:24.011 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:09:24.011 00:09:24.011 --- 10.0.0.1 ping statistics --- 00:09:24.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.011 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:09:24.011 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:24.011 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0 00:09:24.011 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:24.011 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:24.011 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:24.011 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:24.011 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:24.011 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:24.011 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:24.011 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:24.011 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:24.011 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:24.011 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:24.011 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=1106320 00:09:24.011 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 1106320 00:09:24.011 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 1106320 ']' 00:09:24.011 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.011 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:24.011 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:24.011 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:24.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:24.011 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:24.011 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:24.011 [2024-10-08 18:19:52.191491] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
00:09:24.011 [2024-10-08 18:19:52.191688] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:24.011 [2024-10-08 18:19:52.357170] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:24.269 [2024-10-08 18:19:52.583820] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:24.269 [2024-10-08 18:19:52.583926] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:24.269 [2024-10-08 18:19:52.583973] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:24.269 [2024-10-08 18:19:52.584006] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:24.269 [2024-10-08 18:19:52.584032] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:24.269 [2024-10-08 18:19:52.587648] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:09:24.270 [2024-10-08 18:19:52.587753] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:09:24.270 [2024-10-08 18:19:52.587858] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:09:24.270 [2024-10-08 18:19:52.587862] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.270 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:24.270 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:09:24.270 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:24.270 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:24.270 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:24.270 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:24.270 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:24.270 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.270 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:24.270 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.270 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:24.270 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.270 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:24.270 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.270 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:24.270 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.270 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:09:24.270 [2024-10-08 18:19:52.798186] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:24.270 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.270 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:24.270 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.270 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:24.528 Malloc0 00:09:24.528 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.528 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:24.528 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.528 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:24.528 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.528 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:24.528 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.528 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:24.528 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.528 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:24.528 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.528 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:24.528 [2024-10-08 18:19:52.866722] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:24.528 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.528 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1106400 00:09:24.528 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1106403 00:09:24.528 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:24.528 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:24.528 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:09:24.528 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:09:24.528 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:24.528 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1106406 00:09:24.528 18:19:52 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:24.528 { 00:09:24.528 "params": { 00:09:24.528 "name": "Nvme$subsystem", 00:09:24.528 "trtype": "$TEST_TRANSPORT", 00:09:24.528 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:24.528 "adrfam": "ipv4", 00:09:24.528 "trsvcid": "$NVMF_PORT", 00:09:24.528 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:24.528 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:24.528 "hdgst": ${hdgst:-false}, 00:09:24.528 "ddgst": ${ddgst:-false} 00:09:24.528 }, 00:09:24.528 "method": "bdev_nvme_attach_controller" 00:09:24.528 } 00:09:24.528 EOF 00:09:24.528 )") 00:09:24.528 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:24.528 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:24.528 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:09:24.528 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:09:24.528 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:24.528 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:24.528 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:24.528 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1106409 00:09:24.528 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:24.528 { 00:09:24.528 "params": { 00:09:24.528 "name": "Nvme$subsystem", 00:09:24.528 "trtype": "$TEST_TRANSPORT", 00:09:24.528 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:24.528 "adrfam": "ipv4", 00:09:24.528 "trsvcid": "$NVMF_PORT", 00:09:24.528 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:24.528 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:24.528 "hdgst": ${hdgst:-false}, 00:09:24.528 "ddgst": ${ddgst:-false} 00:09:24.528 }, 00:09:24.528 "method": "bdev_nvme_attach_controller" 00:09:24.528 } 00:09:24.528 EOF 00:09:24.528 )") 00:09:24.528 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:09:24.528 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:24.528 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:09:24.528 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:24.529 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:09:24.529 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:24.529 { 00:09:24.529 "params": { 00:09:24.529 "name": "Nvme$subsystem", 00:09:24.529 "trtype": "$TEST_TRANSPORT", 00:09:24.529 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:24.529 "adrfam": "ipv4", 00:09:24.529 "trsvcid": "$NVMF_PORT", 00:09:24.529 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:24.529 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:24.529 "hdgst": 
${hdgst:-false}, 00:09:24.529 "ddgst": ${ddgst:-false} 00:09:24.529 }, 00:09:24.529 "method": "bdev_nvme_attach_controller" 00:09:24.529 } 00:09:24.529 EOF 00:09:24.529 )") 00:09:24.529 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:24.529 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:24.529 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:09:24.529 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:09:24.529 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:09:24.529 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:24.529 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:24.529 { 00:09:24.529 "params": { 00:09:24.529 "name": "Nvme$subsystem", 00:09:24.529 "trtype": "$TEST_TRANSPORT", 00:09:24.529 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:24.529 "adrfam": "ipv4", 00:09:24.529 "trsvcid": "$NVMF_PORT", 00:09:24.529 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:24.529 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:24.529 "hdgst": ${hdgst:-false}, 00:09:24.529 "ddgst": ${ddgst:-false} 00:09:24.529 }, 00:09:24.529 "method": "bdev_nvme_attach_controller" 00:09:24.529 } 00:09:24.529 EOF 00:09:24.529 )") 00:09:24.529 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:09:24.529 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1106400 00:09:24.529 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:09:24.529 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:09:24.529 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:09:24.529 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:09:24.529 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:09:24.529 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:24.529 "params": { 00:09:24.529 "name": "Nvme1", 00:09:24.529 "trtype": "tcp", 00:09:24.529 "traddr": "10.0.0.2", 00:09:24.529 "adrfam": "ipv4", 00:09:24.529 "trsvcid": "4420", 00:09:24.529 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:24.529 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:24.529 "hdgst": false, 00:09:24.529 "ddgst": false 00:09:24.529 }, 00:09:24.529 "method": "bdev_nvme_attach_controller" 00:09:24.529 }' 00:09:24.529 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
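The rpc_cmd calls traced above are the entire target-side configuration for this test: finish initialization, create a TCP transport, back it with a 64 MiB malloc bdev, and expose that bdev as a namespace of nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420. A minimal standalone sketch of the same bring-up follows, assuming the stock scripts/rpc.py client and the default /var/tmp/spdk.sock RPC socket (the method names and arguments are copied from the trace; the rpc.py path and socket are the assumed parts):

RPC="scripts/rpc.py"                                    # assumed client; rpc_cmd in the trace forwards to the same RPCs
$RPC bdev_set_options -p 5 -c 1                         # bdev pool/cache sizing used by this test
$RPC framework_start_init                               # target was started with --wait-for-rpc, so init is finished here
$RPC nvmf_create_transport -t tcp -o -u 8192            # TCP transport with the traced options
$RPC bdev_malloc_create 64 512 -b Malloc0               # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420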
00:09:24.529 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:09:24.529 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:09:24.529 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:24.529 "params": { 00:09:24.529 "name": "Nvme1", 00:09:24.529 "trtype": "tcp", 00:09:24.529 "traddr": "10.0.0.2", 00:09:24.529 "adrfam": "ipv4", 00:09:24.529 "trsvcid": "4420", 00:09:24.529 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:24.529 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:24.529 "hdgst": false, 00:09:24.529 "ddgst": false 00:09:24.529 }, 00:09:24.529 "method": "bdev_nvme_attach_controller" 00:09:24.529 }' 00:09:24.529 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:24.529 "params": { 00:09:24.529 "name": "Nvme1", 00:09:24.529 "trtype": "tcp", 00:09:24.529 "traddr": "10.0.0.2", 00:09:24.529 "adrfam": "ipv4", 00:09:24.529 "trsvcid": "4420", 00:09:24.529 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:24.529 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:24.529 "hdgst": false, 00:09:24.529 "ddgst": false 00:09:24.529 }, 00:09:24.529 "method": "bdev_nvme_attach_controller" 00:09:24.529 }' 00:09:24.529 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:09:24.529 18:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:24.529 "params": { 00:09:24.529 "name": "Nvme1", 00:09:24.529 "trtype": "tcp", 00:09:24.529 "traddr": "10.0.0.2", 00:09:24.529 "adrfam": "ipv4", 00:09:24.529 "trsvcid": "4420", 00:09:24.529 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:24.529 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:24.529 "hdgst": false, 00:09:24.529 "ddgst": false 00:09:24.529 }, 00:09:24.529 "method": "bdev_nvme_attach_controller" 00:09:24.529 }' 00:09:24.529 [2024-10-08 18:19:52.918887] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:09:24.529 [2024-10-08 18:19:52.918971] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:24.529 [2024-10-08 18:19:52.923640] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:09:24.529 [2024-10-08 18:19:52.923645] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:09:24.529 [2024-10-08 18:19:52.923643] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
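Each of the four bdevperf instances launched above receives its controller definition through --json /dev/fd/63, which gen_nvmf_target_json builds around the parameter blocks printed just before this point. A rough hand-written equivalent for a single job is sketched below; the outer "subsystems"/"bdev"/"config" wrapper is paraphrased from that helper rather than copied from this log, and /tmp/nvme1.json is an invented file name. The bdevperf flags are the ones traced above for the write job (-m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256), run from the SPDK build tree.

cat > /tmp/nvme1.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
./build/examples/bdevperf -m 0x10 -i 1 --json /tmp/nvme1.json -q 128 -o 4096 -w write -t 1 -s 256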
00:09:24.529 [2024-10-08 18:19:52.923749] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:24.529 [2024-10-08 18:19:52.923750] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:24.529 [2024-10-08 18:19:52.923750] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:24.787 [2024-10-08 18:19:53.065711] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.787 [2024-10-08 18:19:53.158133] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:09:24.787 [2024-10-08 18:19:53.172114] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.787 [2024-10-08 18:19:53.278302] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 7 00:09:24.787 [2024-10-08 18:19:53.312851] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.045 [2024-10-08 18:19:53.414773] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:09:25.045 [2024-10-08 18:19:53.425060] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.045 [2024-10-08 18:19:53.528523] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:09:25.302 Running I/O for 1 seconds... 00:09:25.302 Running I/O for 1 seconds... 00:09:25.560 Running I/O for 1 seconds... 00:09:25.560 Running I/O for 1 seconds... 
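In the per-job tables that follow, IOPS and MiB/s are two readings of the same measurement: every job uses a 4096-byte I/O size (-o 4096), so MiB/s = IOPS * 4096 / 2^20, i.e. IOPS / 256. A quick cross-check against two of the reported totals (the awk one-liners are only an illustration):

awk 'BEGIN { printf "%.2f MiB/s\n", 200491.27 / 256 }'    # flush job: 200491.27 IOPS -> 783.17 MiB/s
awk 'BEGIN { printf "%.2f MiB/s\n", 11926.98 / 256 }'     # unmap job: 11926.98 IOPS -> 46.59 MiB/s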
00:09:26.493 11868.00 IOPS, 46.36 MiB/s [2024-10-08T16:19:55.030Z] 200864.00 IOPS, 784.62 MiB/s 00:09:26.493 Latency(us) 00:09:26.493 [2024-10-08T16:19:55.030Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:26.493 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:26.493 Nvme1n1 : 1.00 200491.27 783.17 0.00 0.00 635.02 292.79 1856.85 00:09:26.493 [2024-10-08T16:19:55.030Z] =================================================================================================================== 00:09:26.493 [2024-10-08T16:19:55.030Z] Total : 200491.27 783.17 0.00 0.00 635.02 292.79 1856.85 00:09:26.493 00:09:26.493 Latency(us) 00:09:26.493 [2024-10-08T16:19:55.030Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:26.493 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:26.493 Nvme1n1 : 1.01 11926.98 46.59 0.00 0.00 10694.09 4927.34 19806.44 00:09:26.493 [2024-10-08T16:19:55.030Z] =================================================================================================================== 00:09:26.493 [2024-10-08T16:19:55.030Z] Total : 11926.98 46.59 0.00 0.00 10694.09 4927.34 19806.44 00:09:26.493 8852.00 IOPS, 34.58 MiB/s 00:09:26.493 Latency(us) 00:09:26.493 [2024-10-08T16:19:55.030Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:26.493 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:26.493 Nvme1n1 : 1.01 8895.94 34.75 0.00 0.00 14315.70 8252.68 22719.15 00:09:26.493 [2024-10-08T16:19:55.030Z] =================================================================================================================== 00:09:26.493 [2024-10-08T16:19:55.030Z] Total : 8895.94 34.75 0.00 0.00 14315.70 8252.68 22719.15 00:09:26.751 8479.00 IOPS, 33.12 MiB/s 00:09:26.751 Latency(us) 00:09:26.751 [2024-10-08T16:19:55.288Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:26.751 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:26.751 Nvme1n1 : 1.01 8546.75 33.39 0.00 0.00 14914.04 6068.15 29127.11 00:09:26.751 [2024-10-08T16:19:55.288Z] =================================================================================================================== 00:09:26.751 [2024-10-08T16:19:55.288Z] Total : 8546.75 33.39 0.00 0.00 14914.04 6068.15 29127.11 00:09:27.008 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1106403 00:09:27.008 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1106406 00:09:27.008 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1106409 00:09:27.008 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:27.008 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.008 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:27.008 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.008 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:27.008 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:27.008 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@514 -- # nvmfcleanup 00:09:27.008 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:27.008 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:27.008 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:27.008 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:27.009 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:27.009 rmmod nvme_tcp 00:09:27.009 rmmod nvme_fabrics 00:09:27.009 rmmod nvme_keyring 00:09:27.009 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:27.009 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:27.009 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:27.009 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 1106320 ']' 00:09:27.009 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 1106320 00:09:27.009 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 1106320 ']' 00:09:27.009 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 1106320 00:09:27.009 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:09:27.009 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:27.009 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1106320 00:09:27.009 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:27.009 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:27.009 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1106320' 00:09:27.009 killing process with pid 1106320 00:09:27.268 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 1106320 00:09:27.268 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 1106320 00:09:27.529 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:27.529 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:27.529 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:27.529 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:27.529 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 00:09:27.529 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:27.529 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore 00:09:27.529 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:27.529 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:27.529 18:19:55 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:27.529 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:27.529 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:30.070 18:19:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:30.070 00:09:30.070 real 0m9.296s 00:09:30.070 user 0m19.864s 00:09:30.070 sys 0m4.819s 00:09:30.070 18:19:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:30.070 18:19:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:30.070 ************************************ 00:09:30.070 END TEST nvmf_bdev_io_wait 00:09:30.070 ************************************ 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:30.070 ************************************ 00:09:30.070 START TEST nvmf_queue_depth 00:09:30.070 ************************************ 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:30.070 * Looking for test storage... 
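The nvmftestfini path traced at the end of the bdev_io_wait test above (nvmfcleanup, killprocess, the iptr helper, remove_spdk_ns) amounts to the following standalone teardown. The PID and interface names are the ones from this run, and the ip netns delete step is an assumption about what _remove_spdk_ns does, since its body is traced with xtrace disabled:

kill 1106320                                            # nvmf_tgt PID from this run; the script then waits for it to exit
modprobe -v -r nvme-tcp                                 # also pulls out nvme_fabrics/nvme_keyring, per the rmmod lines above
modprobe -v -r nvme-fabrics
iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop only the SPDK-tagged ACCEPT rules
ip netns delete cvl_0_0_ns_spdk                         # assumed body of _remove_spdk_ns
ip -4 addr flush cvl_0_1                                # final flush, as traced at nvmf/common.sh@303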
00:09:30.070 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:30.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.070 --rc genhtml_branch_coverage=1 00:09:30.070 --rc genhtml_function_coverage=1 00:09:30.070 --rc genhtml_legend=1 00:09:30.070 --rc geninfo_all_blocks=1 00:09:30.070 --rc geninfo_unexecuted_blocks=1 00:09:30.070 00:09:30.070 ' 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:30.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.070 --rc genhtml_branch_coverage=1 00:09:30.070 --rc genhtml_function_coverage=1 00:09:30.070 --rc genhtml_legend=1 00:09:30.070 --rc geninfo_all_blocks=1 00:09:30.070 --rc geninfo_unexecuted_blocks=1 00:09:30.070 00:09:30.070 ' 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:30.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.070 --rc genhtml_branch_coverage=1 00:09:30.070 --rc genhtml_function_coverage=1 00:09:30.070 --rc genhtml_legend=1 00:09:30.070 --rc geninfo_all_blocks=1 00:09:30.070 --rc geninfo_unexecuted_blocks=1 00:09:30.070 00:09:30.070 ' 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:30.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.070 --rc genhtml_branch_coverage=1 00:09:30.070 --rc genhtml_function_coverage=1 00:09:30.070 --rc genhtml_legend=1 00:09:30.070 --rc geninfo_all_blocks=1 00:09:30.070 --rc geninfo_unexecuted_blocks=1 00:09:30.070 00:09:30.070 ' 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:30.070 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:30.070 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:30.071 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:30.071 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:30.071 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:09:30.071 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:33.363 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:33.363 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:33.363 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:33.363 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:33.363 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:33.363 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:33.363 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:33.363 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:33.363 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:33.363 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:33.363 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:33.363 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:33.363 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:33.363 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:33.363 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:33.363 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:33.363 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:33.363 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:33.363 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:33.363 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:33.363 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:33.363 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:33.363 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:33.363 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:33.363 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:33.363 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:33.363 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:33.363 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:33.363 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:33.363 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:33.363 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:33.363 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:33.363 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:33.363 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:33.363 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:09:33.363 Found 0000:84:00.0 (0x8086 - 0x159b) 00:09:33.363 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:09:33.364 Found 0000:84:00.1 (0x8086 - 0x159b) 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:09:33.364 Found net devices under 0000:84:00.0: cvl_0_0 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:09:33.364 Found net devices under 0000:84:00.1: cvl_0_1 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
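The gather_supported_nvmf_pci_devs trace above classifies candidate NICs purely by PCI vendor:device ID (E810 as 0x8086:0x1592/0x159b, X722 as 0x8086:0x37d2, plus the Mellanox IDs) and then resolves each match to its kernel netdev name via /sys/bus/pci/devices/$pci/net, which is how it arrives at cvl_0_0 and cvl_0_1. A rough standalone sketch of the same lookup, assuming lspci is available and using only the E810 0x159b ID actually found in this log (this is not the test's own helper), could be:

  # list E810 (8086:159b) functions by PCI address, then print their netdev names
  for pci in $(lspci -Dnmm -d 8086:159b | awk '{print $1}'); do
      for dev in /sys/bus/pci/devices/"$pci"/net/*; do
          [ -e "$dev" ] || continue
          echo "$pci -> $(basename "$dev")"
      done
  done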
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:33.364 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:33.364 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.291 ms 00:09:33.364 00:09:33.364 --- 10.0.0.2 ping statistics --- 00:09:33.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.364 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:33.364 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:33.364 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:09:33.364 00:09:33.364 --- 10.0.0.1 ping statistics --- 00:09:33.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.364 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=1108858 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 1108858 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1108858 ']' 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:33.364 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:33.364 [2024-10-08 18:20:01.635392] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
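The nvmf_tcp_init trace above splits the two E810 ports into a target/initiator pair on one host: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and given 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator side with 10.0.0.1, an iptables rule opens TCP port 4420 on the initiator interface, and both directions are verified with ping before nvmf_tgt is started inside the namespace. Condensed from the commands recorded in the trace (interface and namespace names exactly as they appear in this log), the setup amounts to:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns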
00:09:33.364 [2024-10-08 18:20:01.635564] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:33.364 [2024-10-08 18:20:01.796056] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.625 [2024-10-08 18:20:02.027057] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:33.625 [2024-10-08 18:20:02.027168] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:33.625 [2024-10-08 18:20:02.027205] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:33.625 [2024-10-08 18:20:02.027238] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:33.625 [2024-10-08 18:20:02.027265] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:33.625 [2024-10-08 18:20:02.028621] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:09:33.886 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:33.886 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:33.886 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:33.886 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:33.886 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:33.886 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:33.886 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:33.886 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.886 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:33.886 [2024-10-08 18:20:02.283732] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:33.886 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.886 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:33.886 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.886 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:33.886 Malloc0 00:09:33.886 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.886 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:33.886 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.886 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:33.886 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.886 18:20:02 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:33.886 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.886 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:33.886 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.886 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:33.887 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.887 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:33.887 [2024-10-08 18:20:02.374960] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:33.887 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.887 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1109004 00:09:33.887 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:33.887 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:33.887 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1109004 /var/tmp/bdevperf.sock 00:09:33.887 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1109004 ']' 00:09:33.887 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:33.887 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:33.887 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:33.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:33.887 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:33.887 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:34.147 [2024-10-08 18:20:02.488457] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
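At this point the queue_depth test has configured the target entirely over JSON-RPC: a TCP transport, a 64 MB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as a namespace, and a listener on 10.0.0.2:4420; bdevperf is then launched as a second app on /var/tmp/bdevperf.sock with -q 1024 -o 4096 -w verify -t 10. A sketch of the same target setup done by hand with scripts/rpc.py (arguments exactly as issued by rpc_cmd in the trace; the target's RPC socket is the default /var/tmp/spdk.sock shown in waitforlisten) would be:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420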
00:09:34.147 [2024-10-08 18:20:02.488617] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1109004 ] 00:09:34.147 [2024-10-08 18:20:02.636550] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.408 [2024-10-08 18:20:02.860062] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.345 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:35.345 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:35.345 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:35.345 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.345 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:35.345 NVMe0n1 00:09:35.345 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.345 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:35.345 Running I/O for 10 seconds... 00:09:37.311 3087.00 IOPS, 12.06 MiB/s [2024-10-08T16:20:06.789Z] 3584.00 IOPS, 14.00 MiB/s [2024-10-08T16:20:08.171Z] 3754.67 IOPS, 14.67 MiB/s [2024-10-08T16:20:09.111Z] 3839.25 IOPS, 15.00 MiB/s [2024-10-08T16:20:10.051Z] 3880.40 IOPS, 15.16 MiB/s [2024-10-08T16:20:10.991Z] 3779.33 IOPS, 14.76 MiB/s [2024-10-08T16:20:11.932Z] 3891.14 IOPS, 15.20 MiB/s [2024-10-08T16:20:12.871Z] 3968.00 IOPS, 15.50 MiB/s [2024-10-08T16:20:13.809Z] 4088.11 IOPS, 15.97 MiB/s [2024-10-08T16:20:14.070Z] 4165.70 IOPS, 16.27 MiB/s 00:09:45.533 Latency(us) 00:09:45.533 [2024-10-08T16:20:14.070Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:45.533 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:45.533 Verification LBA range: start 0x0 length 0x4000 00:09:45.533 NVMe0n1 : 10.22 4177.82 16.32 0.00 0.00 242447.90 52428.80 163111.82 00:09:45.533 [2024-10-08T16:20:14.070Z] =================================================================================================================== 00:09:45.533 [2024-10-08T16:20:14.070Z] Total : 4177.82 16.32 0.00 0.00 242447.90 52428.80 163111.82 00:09:45.533 { 00:09:45.533 "results": [ 00:09:45.533 { 00:09:45.533 "job": "NVMe0n1", 00:09:45.533 "core_mask": "0x1", 00:09:45.533 "workload": "verify", 00:09:45.533 "status": "finished", 00:09:45.533 "verify_range": { 00:09:45.533 "start": 0, 00:09:45.533 "length": 16384 00:09:45.533 }, 00:09:45.533 "queue_depth": 1024, 00:09:45.533 "io_size": 4096, 00:09:45.533 "runtime": 10.224711, 00:09:45.533 "iops": 4177.8197936352435, 00:09:45.533 "mibps": 16.31960856888767, 00:09:45.533 "io_failed": 0, 00:09:45.533 "io_timeout": 0, 00:09:45.533 "avg_latency_us": 242447.90339895902, 00:09:45.533 "min_latency_us": 52428.8, 00:09:45.533 "max_latency_us": 163111.82222222222 00:09:45.533 } 00:09:45.533 ], 00:09:45.533 "core_count": 1 00:09:45.533 } 00:09:45.533 18:20:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
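The I/O side is driven entirely through bdevperf's private RPC socket: the test attaches an NVMe-oF controller to the exported subsystem and then triggers the pre-configured verify workload with bdevperf.py. As a sanity check on the summary above, 4177.82 IOPS at 4 KiB per I/O is 4177.82 * 4096 / 2^20 ≈ 16.3 MiB/s, matching the reported 16.32 MiB/s, and the ~242 ms average latency is consistent with Little's law for 1024 outstanding I/Os (1024 / 4177.8 ≈ 0.245 s). Replayed by hand against the sockets and paths shown in the trace (a sketch, with paths relative to the spdk checkout):

  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests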
-- target/queue_depth.sh@39 -- # killprocess 1109004 00:09:45.533 18:20:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1109004 ']' 00:09:45.533 18:20:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1109004 00:09:45.533 18:20:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:45.533 18:20:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:45.533 18:20:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1109004 00:09:45.793 18:20:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:45.793 18:20:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:45.793 18:20:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1109004' 00:09:45.793 killing process with pid 1109004 00:09:45.793 18:20:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1109004 00:09:45.793 Received shutdown signal, test time was about 10.000000 seconds 00:09:45.793 00:09:45.793 Latency(us) 00:09:45.793 [2024-10-08T16:20:14.330Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:45.793 [2024-10-08T16:20:14.330Z] =================================================================================================================== 00:09:45.793 [2024-10-08T16:20:14.330Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:45.793 18:20:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1109004 00:09:46.052 18:20:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:46.052 18:20:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:46.052 18:20:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:46.052 18:20:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:46.052 18:20:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:46.052 18:20:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:46.052 18:20:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:46.052 18:20:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:46.052 rmmod nvme_tcp 00:09:46.052 rmmod nvme_fabrics 00:09:46.052 rmmod nvme_keyring 00:09:46.052 18:20:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:46.311 18:20:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:46.311 18:20:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:46.311 18:20:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 1108858 ']' 00:09:46.311 18:20:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 1108858 00:09:46.311 18:20:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1108858 ']' 00:09:46.311 18:20:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1108858 
00:09:46.311 18:20:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:46.311 18:20:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:46.311 18:20:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1108858 00:09:46.311 18:20:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:46.311 18:20:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:46.311 18:20:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1108858' 00:09:46.311 killing process with pid 1108858 00:09:46.311 18:20:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1108858 00:09:46.311 18:20:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1108858 00:09:46.571 18:20:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:46.571 18:20:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:46.571 18:20:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:46.571 18:20:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:46.571 18:20:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:09:46.571 18:20:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:46.571 18:20:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 00:09:46.571 18:20:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:46.571 18:20:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:46.571 18:20:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:46.571 18:20:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:46.571 18:20:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:49.111 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:49.111 00:09:49.111 real 0m19.078s 00:09:49.111 user 0m25.964s 00:09:49.111 sys 0m4.651s 00:09:49.111 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:49.111 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:49.111 ************************************ 00:09:49.111 END TEST nvmf_queue_depth 00:09:49.111 ************************************ 00:09:49.111 18:20:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:49.111 18:20:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:49.111 18:20:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:49.111 18:20:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:49.111 
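Teardown in the preceding trace mirrors the setup: bdevperf (pid 1109004) and nvmf_tgt (pid 1108858) are killed, the nvme-tcp/nvme-fabrics/nvme-keyring modules are unloaded, the SPDK_NVMF iptables rule is dropped by filtering it out of iptables-save before restoring, and the namespace and leftover addresses are removed. A condensed sketch of that cleanup, using the names from this log (the netns deletion is an assumption about what _remove_spdk_ns does here), is:

  kill "$bdevperf_pid" "$nvmfpid"
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip netns delete cvl_0_0_ns_spdk      # assumed equivalent of _remove_spdk_ns in this run
  ip -4 addr flush cvl_0_1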
************************************ 00:09:49.111 START TEST nvmf_target_multipath 00:09:49.111 ************************************ 00:09:49.111 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:49.111 * Looking for test storage... 00:09:49.111 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:49.111 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:49.111 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:09:49.111 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:49.111 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:49.111 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:49.111 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:49.111 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:49.111 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:49.111 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:49.111 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:49.111 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:49.111 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:49.111 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:49.111 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:49.111 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:49.111 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:49.111 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:49.111 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:49.111 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:49.111 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:49.111 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:49.111 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:49.111 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:49.111 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:49.111 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:49.111 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:49.111 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:49.111 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:49.111 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:49.112 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:49.112 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:49.112 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:49.112 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:49.112 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:49.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.112 --rc genhtml_branch_coverage=1 00:09:49.112 --rc genhtml_function_coverage=1 00:09:49.112 --rc genhtml_legend=1 00:09:49.112 --rc geninfo_all_blocks=1 00:09:49.112 --rc geninfo_unexecuted_blocks=1 00:09:49.112 00:09:49.112 ' 00:09:49.112 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:49.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.112 --rc genhtml_branch_coverage=1 00:09:49.112 --rc genhtml_function_coverage=1 00:09:49.112 --rc genhtml_legend=1 00:09:49.112 --rc geninfo_all_blocks=1 00:09:49.112 --rc geninfo_unexecuted_blocks=1 00:09:49.112 00:09:49.112 ' 00:09:49.112 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:49.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.112 --rc genhtml_branch_coverage=1 00:09:49.112 --rc genhtml_function_coverage=1 00:09:49.112 --rc genhtml_legend=1 00:09:49.112 --rc geninfo_all_blocks=1 00:09:49.112 --rc geninfo_unexecuted_blocks=1 00:09:49.112 00:09:49.112 ' 00:09:49.112 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:49.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.112 --rc genhtml_branch_coverage=1 00:09:49.112 --rc genhtml_function_coverage=1 00:09:49.112 --rc genhtml_legend=1 00:09:49.112 --rc geninfo_all_blocks=1 00:09:49.112 --rc geninfo_unexecuted_blocks=1 00:09:49.112 00:09:49.112 ' 00:09:49.112 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:49.112 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:49.112 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:49.112 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:49.112 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:49.112 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:49.112 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:49.112 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:49.112 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:49.112 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:49.112 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:49.112 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:49.112 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:09:49.112 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:09:49.112 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:49.112 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:49.112 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:49.112 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:49.112 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:49.112 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:49.112 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:49.112 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:49.112 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:49.112 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.112 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.112 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.112 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:49.112 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.112 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:49.112 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:49.112 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:49.112 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:49.112 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:49.112 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:49.112 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:49.112 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:49.112 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:49.112 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:49.112 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:49.112 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:49.112 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:49.112 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:49.112 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:49.112 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:49.112 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:49.112 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:49.112 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:49.112 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:49.112 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:49.112 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:49.112 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:49.112 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:49.112 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:49.112 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:49.112 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:49.112 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:52.401 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:52.401 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:52.401 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:52.401 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:52.401 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:52.401 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:52.401 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:52.401 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:09:52.401 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:52.401 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:52.401 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:52.401 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:52.401 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:52.401 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:52.401 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:52.401 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:52.401 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:52.401 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:52.401 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:52.401 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:52.401 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:52.401 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:52.401 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:52.401 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:52.401 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:52.401 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:52.401 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:52.401 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:52.401 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:52.401 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:52.401 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:52.401 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:52.401 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:52.401 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:52.401 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:09:52.401 Found 0000:84:00.0 (0x8086 - 0x159b) 00:09:52.401 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:52.401 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:52.401 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:52.401 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:52.401 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:52.401 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:52.401 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:09:52.401 Found 0000:84:00.1 (0x8086 - 0x159b) 00:09:52.401 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:52.401 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:52.401 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:52.401 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:52.401 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:52.401 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:52.401 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:52.401 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:52.401 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:52.401 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:52.401 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:52.401 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:52.401 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:52.401 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:52.401 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:52.401 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:09:52.401 Found net devices under 0000:84:00.0: cvl_0_0 00:09:52.401 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:52.401 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:52.402 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:52.402 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:52.402 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:52.402 18:20:20 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:52.402 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:52.402 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:52.402 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:09:52.402 Found net devices under 0000:84:00.1: cvl_0_1 00:09:52.402 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:52.402 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:52.402 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:09:52.402 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:52.402 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:52.402 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:52.402 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:52.402 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:52.402 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:52.402 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:52.402 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:52.402 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:52.402 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:52.402 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:52.402 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:52.402 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:52.402 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:52.402 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:52.402 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:52.402 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:52.402 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:52.402 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:52.402 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:52.402 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:09:52.402 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:52.402 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:52.402 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:52.402 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:52.402 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:52.402 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:52.402 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:09:52.402 00:09:52.402 --- 10.0.0.2 ping statistics --- 00:09:52.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:52.402 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:09:52.402 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:52.402 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:52.402 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:09:52.402 00:09:52.402 --- 10.0.0.1 ping statistics --- 00:09:52.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:52.402 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:09:52.402 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:52.402 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0 00:09:52.402 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:52.402 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:52.402 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:52.402 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:52.402 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:52.402 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:52.402 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:52.402 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:52.402 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:52.402 only one NIC for nvmf test 00:09:52.402 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:52.402 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:52.402 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:52.402 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:52.402 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
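The trace above shows nvmf_tcp_init wiring the two E810 ports into a point-to-point test network before any NVMe/TCP traffic flows: the target-side port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace, both sides get 10.0.0.x/24 addresses, the links and the namespace loopback come up, an iptables rule opens TCP port 4420, and a ping in each direction confirms reachability. The multipath test then bails out ("only one NIC for nvmf test", exit 0) since no second target IP was set up, so the commands that follow are nvmftestfini tearing this network back down. A minimal standalone sketch of the bring-up, with interface names and addresses taken from this log (the real logic lives in test/nvmf/common.sh and handles more cases):

  TGT_IF=cvl_0_0            # target-side port, moved into its own namespace
  INI_IF=cvl_0_1            # initiator-side port, stays in the default namespace
  NS=cvl_0_0_ns_spdk

  ip -4 addr flush "$TGT_IF"
  ip -4 addr flush "$INI_IF"
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"
  ip addr add 10.0.0.1/24 dev "$INI_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  # open the NVMe/TCP listener port; the comment lets the cleanup path find the rule later
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                       # initiator -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1   # target -> initiator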
00:09:52.402 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:52.402 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:52.402 rmmod nvme_tcp 00:09:52.402 rmmod nvme_fabrics 00:09:52.402 rmmod nvme_keyring 00:09:52.402 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:52.402 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:52.402 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:52.402 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:09:52.402 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:52.402 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:52.402 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:52.402 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:52.402 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:09:52.402 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:52.402 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:09:52.402 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:52.402 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:52.402 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:52.402 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:52.402 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:54.308 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:54.308 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:54.308 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:54.308 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:54.308 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:54.308 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:54.308 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:54.308 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:54.308 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:54.308 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:54.308 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:54.308 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:09:54.308 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:09:54.308 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:54.308 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:54.308 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:54.308 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:54.308 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:09:54.308 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:54.308 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:09:54.308 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:54.308 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:54.308 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:54.308 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:54.308 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:54.308 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:54.308 00:09:54.308 real 0m5.343s 00:09:54.308 user 0m1.117s 00:09:54.308 sys 0m2.230s 00:09:54.308 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:54.308 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:54.308 ************************************ 00:09:54.308 END TEST nvmf_target_multipath 00:09:54.308 ************************************ 00:09:54.308 18:20:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:54.308 18:20:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:54.308 18:20:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:54.308 18:20:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:54.308 ************************************ 00:09:54.308 START TEST nvmf_zcopy 00:09:54.308 ************************************ 00:09:54.308 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:54.308 * Looking for test storage... 
00:09:54.308 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:54.308 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:54.308 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:09:54.308 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:54.567 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:54.567 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:54.567 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:54.567 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:54.567 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:54.567 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:54.567 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:54.567 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:54.567 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:54.567 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:54.567 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:54.567 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:54.567 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:54.567 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:54.567 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:54.567 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:54.567 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:54.567 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:54.567 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:54.567 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:54.567 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:54.567 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:54.567 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:54.567 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:54.567 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:54.567 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:54.567 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:54.567 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:54.567 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:54.567 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:54.567 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:54.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.567 --rc genhtml_branch_coverage=1 00:09:54.567 --rc genhtml_function_coverage=1 00:09:54.567 --rc genhtml_legend=1 00:09:54.567 --rc geninfo_all_blocks=1 00:09:54.567 --rc geninfo_unexecuted_blocks=1 00:09:54.567 00:09:54.567 ' 00:09:54.567 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:54.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.567 --rc genhtml_branch_coverage=1 00:09:54.567 --rc genhtml_function_coverage=1 00:09:54.567 --rc genhtml_legend=1 00:09:54.567 --rc geninfo_all_blocks=1 00:09:54.567 --rc geninfo_unexecuted_blocks=1 00:09:54.567 00:09:54.567 ' 00:09:54.567 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:54.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.567 --rc genhtml_branch_coverage=1 00:09:54.567 --rc genhtml_function_coverage=1 00:09:54.567 --rc genhtml_legend=1 00:09:54.567 --rc geninfo_all_blocks=1 00:09:54.567 --rc geninfo_unexecuted_blocks=1 00:09:54.567 00:09:54.567 ' 00:09:54.567 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:54.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.567 --rc genhtml_branch_coverage=1 00:09:54.567 --rc genhtml_function_coverage=1 00:09:54.567 --rc genhtml_legend=1 00:09:54.567 --rc geninfo_all_blocks=1 00:09:54.567 --rc geninfo_unexecuted_blocks=1 00:09:54.567 00:09:54.567 ' 00:09:54.567 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:54.567 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:54.567 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:54.567 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:54.567 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:54.567 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:54.567 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:54.567 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:54.567 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:54.567 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:54.567 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:54.567 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:54.567 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:09:54.567 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:09:54.567 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:54.568 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:54.568 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:54.568 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:54.568 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:54.568 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:54.568 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:54.568 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:54.568 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:54.568 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.568 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.568 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.568 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:54.568 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.568 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:54.568 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:54.568 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:54.568 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:54.568 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:54.568 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:54.568 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:54.568 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:54.568 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:54.568 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:54.568 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:54.568 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:54.568 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:54.568 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:09:54.568 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:54.568 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:54.568 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:54.568 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:54.568 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:54.568 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:54.568 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:54.568 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:54.568 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:54.568 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:57.856 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:57.856 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:57.856 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:57.856 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:57.856 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:57.856 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:57.856 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:57.856 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:57.856 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:57.856 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:57.856 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:57.856 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:57.856 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:57.856 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:57.856 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:57.856 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:57.856 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:57.856 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:57.856 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:57.856 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:57.856 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:57.856 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:57.856 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:57.856 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:57.856 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:57.856 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:57.856 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:57.856 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:57.856 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:57.856 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:57.856 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:57.856 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:57.856 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:57.856 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:57.856 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:09:57.856 Found 0000:84:00.0 (0x8086 - 0x159b) 00:09:57.856 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:57.856 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:57.856 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:57.856 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:57.856 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:57.856 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:57.856 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:09:57.856 Found 0000:84:00.1 (0x8086 - 0x159b) 00:09:57.856 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:57.856 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:57.856 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:57.856 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:57.856 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:57.856 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:57.856 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:57.856 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:57.856 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:57.856 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:57.856 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:57.856 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:57.856 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:57.856 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:57.856 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:57.856 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:09:57.856 Found net devices under 0000:84:00.0: cvl_0_0 00:09:57.857 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:57.857 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:57.857 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:57.857 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:57.857 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:57.857 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:57.857 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:57.857 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:57.857 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:09:57.857 Found net devices under 0000:84:00.1: cvl_0_1 00:09:57.857 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:57.857 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:57.857 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes 00:09:57.857 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:57.857 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:57.857 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:57.857 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:57.857 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:57.857 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:57.857 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:57.857 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:57.857 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:57.857 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:57.857 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:57.857 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:09:57.857 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:57.857 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:57.857 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:57.857 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:57.857 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:57.857 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:57.857 18:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:57.857 18:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:57.857 18:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:57.857 18:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:57.857 18:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:57.857 18:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:57.857 18:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:57.857 18:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:57.857 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:57.857 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:09:57.857 00:09:57.857 --- 10.0.0.2 ping statistics --- 00:09:57.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.857 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:09:57.857 18:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:57.857 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:57.857 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:09:57.857 00:09:57.857 --- 10.0.0.1 ping statistics --- 00:09:57.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.857 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:09:57.857 18:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:57.857 18:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0 00:09:57.857 18:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:57.857 18:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:57.857 18:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:57.857 18:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:57.857 18:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:57.857 18:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:57.857 18:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:57.857 18:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:57.857 18:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:57.857 18:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:57.857 18:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:57.857 18:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=1114514 00:09:57.857 18:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:57.857 18:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 1114514 00:09:57.857 18:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 1114514 ']' 00:09:57.857 18:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:57.857 18:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:57.857 18:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:57.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:57.857 18:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:57.857 18:20:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:57.857 [2024-10-08 18:20:26.219639] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
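Just above, zcopy.sh calls nvmfappstart -m 0x2: the nvmf_tgt binary is started inside the cvl_0_0_ns_spdk namespace (pid 1114514 in this run) and waitforlisten polls until the application answers on its default RPC socket, /var/tmp/spdk.sock, with max_retries=100. The bracketed SPDK/DPDK lines around this point are the target's own startup output (SPDK v25.01-pre, DPDK 24.03.0, one reactor on core 1). A rough reconstruction of the launch-and-wait step; the probe command and sleep interval below are illustrative choices, not the framework's exact code:

  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc_addr=/var/tmp/spdk.sock

  ip netns exec cvl_0_0_ns_spdk "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!

  echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
  for ((i = 0; i < 100; i++)); do
      kill -0 "$nvmfpid" 2> /dev/null || exit 1     # target died during startup
      if "$rootdir/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &> /dev/null; then
          break                                     # RPC server is ready
      fi
      sleep 0.5
  done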
00:09:57.857 [2024-10-08 18:20:26.219822] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:57.857 [2024-10-08 18:20:26.376757] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.118 [2024-10-08 18:20:26.582144] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:58.118 [2024-10-08 18:20:26.582267] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:58.118 [2024-10-08 18:20:26.582303] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:58.118 [2024-10-08 18:20:26.582334] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:58.119 [2024-10-08 18:20:26.582359] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:58.119 [2024-10-08 18:20:26.583495] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:09:59.502 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:59.502 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:09:59.502 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:59.502 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:59.502 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:59.502 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:59.502 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:59.502 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:59.502 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.502 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:59.502 [2024-10-08 18:20:27.703903] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:59.502 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.502 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:59.502 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.502 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:59.502 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.502 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:59.502 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.502 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:59.502 [2024-10-08 18:20:27.721086] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:09:59.502 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.502 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:59.502 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.502 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:59.502 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.502 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:59.502 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.502 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:59.502 malloc0 00:09:59.502 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.502 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:59.502 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.502 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:59.502 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.502 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:59.502 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:59.502 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:09:59.502 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:09:59.502 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:59.502 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:59.502 { 00:09:59.502 "params": { 00:09:59.502 "name": "Nvme$subsystem", 00:09:59.502 "trtype": "$TEST_TRANSPORT", 00:09:59.502 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:59.502 "adrfam": "ipv4", 00:09:59.502 "trsvcid": "$NVMF_PORT", 00:09:59.502 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:59.502 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:59.502 "hdgst": ${hdgst:-false}, 00:09:59.502 "ddgst": ${ddgst:-false} 00:09:59.502 }, 00:09:59.502 "method": "bdev_nvme_attach_controller" 00:09:59.502 } 00:09:59.502 EOF 00:09:59.502 )") 00:09:59.502 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:09:59.502 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 
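With the target up, zcopy.sh configures it over RPC: a TCP transport is created with zero-copy enabled (-o -c 0 --zcopy), subsystem nqn.2016-06.io.spdk:cnode1 is created and given a TCP listener plus a discovery listener on 10.0.0.2:4420, and a 32 MB malloc bdev with a 4096-byte block size is attached as namespace 1. The test issues these through its rpc_cmd wrapper; the same setup with scripts/rpc.py directly would look roughly like this (arguments mirror the trace):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

The JSON document printed next is what gen_nvmf_target_json hands to bdevperf over /dev/fd/62, so the 10-second verify workload (-q 128 -o 8192) attaches to 10.0.0.2:4420 as an NVMe/TCP initiator via bdev_nvme_attach_controller.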
00:09:59.502 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:09:59.502 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:59.502 "params": { 00:09:59.502 "name": "Nvme1", 00:09:59.502 "trtype": "tcp", 00:09:59.502 "traddr": "10.0.0.2", 00:09:59.502 "adrfam": "ipv4", 00:09:59.502 "trsvcid": "4420", 00:09:59.502 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:59.502 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:59.502 "hdgst": false, 00:09:59.502 "ddgst": false 00:09:59.502 }, 00:09:59.502 "method": "bdev_nvme_attach_controller" 00:09:59.502 }' 00:09:59.502 [2024-10-08 18:20:27.835290] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:09:59.502 [2024-10-08 18:20:27.835399] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1114675 ] 00:09:59.502 [2024-10-08 18:20:27.935961] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.763 [2024-10-08 18:20:28.142312] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.027 Running I/O for 10 seconds... 00:10:01.912 2610.00 IOPS, 20.39 MiB/s [2024-10-08T16:20:31.831Z] 2631.50 IOPS, 20.56 MiB/s [2024-10-08T16:20:32.773Z] 2686.00 IOPS, 20.98 MiB/s [2024-10-08T16:20:33.714Z] 2688.25 IOPS, 21.00 MiB/s [2024-10-08T16:20:34.658Z] 2700.40 IOPS, 21.10 MiB/s [2024-10-08T16:20:35.667Z] 2712.67 IOPS, 21.19 MiB/s [2024-10-08T16:20:36.607Z] 2674.14 IOPS, 20.89 MiB/s [2024-10-08T16:20:37.547Z] 2640.00 IOPS, 20.62 MiB/s [2024-10-08T16:20:38.488Z] 2714.67 IOPS, 21.21 MiB/s [2024-10-08T16:20:38.488Z] 2781.00 IOPS, 21.73 MiB/s 00:10:09.951 Latency(us) 00:10:09.951 [2024-10-08T16:20:38.488Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:09.951 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:09.951 Verification LBA range: start 0x0 length 0x1000 00:10:09.951 Nvme1n1 : 10.04 2781.02 21.73 0.00 0.00 45847.96 3907.89 68739.98 00:10:09.951 [2024-10-08T16:20:38.488Z] =================================================================================================================== 00:10:09.951 [2024-10-08T16:20:38.488Z] Total : 2781.02 21.73 0.00 0.00 45847.96 3907.89 68739.98 00:10:10.520 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1115990 00:10:10.520 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:10.520 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:10.520 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:10.520 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:10.520 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:10:10.520 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:10:10.520 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:10:10.520 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:10:10.520 { 00:10:10.520 "params": { 00:10:10.520 "name": 
"Nvme$subsystem", 00:10:10.520 "trtype": "$TEST_TRANSPORT", 00:10:10.520 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:10.520 "adrfam": "ipv4", 00:10:10.520 "trsvcid": "$NVMF_PORT", 00:10:10.520 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:10.520 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:10.520 "hdgst": ${hdgst:-false}, 00:10:10.520 "ddgst": ${ddgst:-false} 00:10:10.520 }, 00:10:10.520 "method": "bdev_nvme_attach_controller" 00:10:10.520 } 00:10:10.520 EOF 00:10:10.520 )") 00:10:10.520 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:10:10.520 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:10:10.520 [2024-10-08 18:20:38.895020] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.521 [2024-10-08 18:20:38.895109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.521 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:10:10.521 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:10:10.521 "params": { 00:10:10.521 "name": "Nvme1", 00:10:10.521 "trtype": "tcp", 00:10:10.521 "traddr": "10.0.0.2", 00:10:10.521 "adrfam": "ipv4", 00:10:10.521 "trsvcid": "4420", 00:10:10.521 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:10.521 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:10.521 "hdgst": false, 00:10:10.521 "ddgst": false 00:10:10.521 }, 00:10:10.521 "method": "bdev_nvme_attach_controller" 00:10:10.521 }' 00:10:10.521 [2024-10-08 18:20:38.902976] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.521 [2024-10-08 18:20:38.903039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.521 [2024-10-08 18:20:38.911024] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.521 [2024-10-08 18:20:38.911098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.521 [2024-10-08 18:20:38.919067] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.521 [2024-10-08 18:20:38.919125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.521 [2024-10-08 18:20:38.927012] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.521 [2024-10-08 18:20:38.927043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.521 [2024-10-08 18:20:38.939051] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.521 [2024-10-08 18:20:38.939082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.521 [2024-10-08 18:20:38.947071] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.521 [2024-10-08 18:20:38.947102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.521 [2024-10-08 18:20:38.954551] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
00:10:10.521 [2024-10-08 18:20:38.954639] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1115990 ] 00:10:10.521 [2024-10-08 18:20:38.955092] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.521 [2024-10-08 18:20:38.955121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.521 [2024-10-08 18:20:38.963112] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.521 [2024-10-08 18:20:38.963142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.521 [2024-10-08 18:20:38.971138] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.521 [2024-10-08 18:20:38.971169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.521 [2024-10-08 18:20:38.979161] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.521 [2024-10-08 18:20:38.979191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.521 [2024-10-08 18:20:38.987183] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.521 [2024-10-08 18:20:38.987214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.521 [2024-10-08 18:20:38.995207] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.521 [2024-10-08 18:20:38.995237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.521 [2024-10-08 18:20:39.003320] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.521 [2024-10-08 18:20:39.003375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.521 [2024-10-08 18:20:39.011334] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.521 [2024-10-08 18:20:39.011390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.521 [2024-10-08 18:20:39.019337] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.521 [2024-10-08 18:20:39.019392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.521 [2024-10-08 18:20:39.027388] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.521 [2024-10-08 18:20:39.027443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.521 [2024-10-08 18:20:39.035413] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.521 [2024-10-08 18:20:39.035469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.521 [2024-10-08 18:20:39.043438] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.521 [2024-10-08 18:20:39.043494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.521 [2024-10-08 18:20:39.047719] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.521 [2024-10-08 18:20:39.051468] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.521 [2024-10-08 18:20:39.051523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:10:10.782 [2024-10-08 18:20:39.059451] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.782 [2024-10-08 18:20:39.059528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.782 [2024-10-08 18:20:39.067542] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.782 [2024-10-08 18:20:39.067614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.782 [2024-10-08 18:20:39.075546] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.782 [2024-10-08 18:20:39.075605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.782 [2024-10-08 18:20:39.083569] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.782 [2024-10-08 18:20:39.083627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.782 [2024-10-08 18:20:39.091593] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.782 [2024-10-08 18:20:39.091648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.782 [2024-10-08 18:20:39.099578] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.782 [2024-10-08 18:20:39.099634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.782 [2024-10-08 18:20:39.107640] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.782 [2024-10-08 18:20:39.107711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.782 [2024-10-08 18:20:39.115685] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.782 [2024-10-08 18:20:39.115738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.782 [2024-10-08 18:20:39.123712] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.782 [2024-10-08 18:20:39.123737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.782 [2024-10-08 18:20:39.131717] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.782 [2024-10-08 18:20:39.131744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.782 [2024-10-08 18:20:39.139734] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.782 [2024-10-08 18:20:39.139760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.782 [2024-10-08 18:20:39.147752] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.782 [2024-10-08 18:20:39.147780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.782 [2024-10-08 18:20:39.155761] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.782 [2024-10-08 18:20:39.155790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.782 [2024-10-08 18:20:39.163767] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.782 [2024-10-08 18:20:39.163793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.782 [2024-10-08 18:20:39.171773] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:10.782 [2024-10-08 18:20:39.171799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.782 [2024-10-08 18:20:39.179791] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.782 [2024-10-08 18:20:39.179817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.782 [2024-10-08 18:20:39.187812] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.782 [2024-10-08 18:20:39.187838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.782 [2024-10-08 18:20:39.195832] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.782 [2024-10-08 18:20:39.195858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.782 [2024-10-08 18:20:39.203855] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.782 [2024-10-08 18:20:39.203880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.782 [2024-10-08 18:20:39.211878] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.782 [2024-10-08 18:20:39.211921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.782 [2024-10-08 18:20:39.219900] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.782 [2024-10-08 18:20:39.219966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.782 [2024-10-08 18:20:39.225940] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.782 [2024-10-08 18:20:39.227919] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.782 [2024-10-08 18:20:39.227971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.782 [2024-10-08 18:20:39.236006] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.782 [2024-10-08 18:20:39.236061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.782 [2024-10-08 18:20:39.244070] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.782 [2024-10-08 18:20:39.244130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.782 [2024-10-08 18:20:39.252102] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.782 [2024-10-08 18:20:39.252162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.782 [2024-10-08 18:20:39.260017] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.782 [2024-10-08 18:20:39.260044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.782 [2024-10-08 18:20:39.268150] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.782 [2024-10-08 18:20:39.268208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.782 [2024-10-08 18:20:39.276176] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.782 [2024-10-08 18:20:39.276233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.782 [2024-10-08 18:20:39.284150] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.782 [2024-10-08 18:20:39.284208] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.782 [2024-10-08 18:20:39.292202] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.782 [2024-10-08 18:20:39.292260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.782 [2024-10-08 18:20:39.300141] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.782 [2024-10-08 18:20:39.300169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.782 [2024-10-08 18:20:39.308251] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.782 [2024-10-08 18:20:39.308308] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.782 [2024-10-08 18:20:39.316222] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.782 [2024-10-08 18:20:39.316262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.043 [2024-10-08 18:20:39.324340] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.043 [2024-10-08 18:20:39.324413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.043 [2024-10-08 18:20:39.332375] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.043 [2024-10-08 18:20:39.332440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.043 [2024-10-08 18:20:39.340277] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.043 [2024-10-08 18:20:39.340307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.043 [2024-10-08 18:20:39.348407] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.043 [2024-10-08 18:20:39.348485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.043 [2024-10-08 18:20:39.356425] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.043 [2024-10-08 18:20:39.356480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.043 [2024-10-08 18:20:39.364445] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.043 [2024-10-08 18:20:39.364500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.043 [2024-10-08 18:20:39.372471] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.043 [2024-10-08 18:20:39.372526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.043 [2024-10-08 18:20:39.380527] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.043 [2024-10-08 18:20:39.380598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.043 [2024-10-08 18:20:39.388550] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.043 [2024-10-08 18:20:39.388614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.043 [2024-10-08 18:20:39.396578] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.043 [2024-10-08 18:20:39.396640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.043 [2024-10-08 18:20:39.404597] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.043 [2024-10-08 18:20:39.404695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.043 [2024-10-08 18:20:39.412617] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.043 [2024-10-08 18:20:39.412691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.043 [2024-10-08 18:20:39.420642] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.043 [2024-10-08 18:20:39.420711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.043 [2024-10-08 18:20:39.428681] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.043 [2024-10-08 18:20:39.428726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.043 [2024-10-08 18:20:39.436709] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.043 [2024-10-08 18:20:39.436734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.043 [2024-10-08 18:20:39.444727] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.043 [2024-10-08 18:20:39.444755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.043 [2024-10-08 18:20:39.452740] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.043 [2024-10-08 18:20:39.452768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.043 [2024-10-08 18:20:39.460755] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.043 [2024-10-08 18:20:39.460784] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.043 [2024-10-08 18:20:39.468758] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.043 [2024-10-08 18:20:39.468784] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.043 [2024-10-08 18:20:39.477312] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.043 [2024-10-08 18:20:39.477382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.043 [2024-10-08 18:20:39.484793] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.043 [2024-10-08 18:20:39.484822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.043 [2024-10-08 18:20:39.492799] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.043 [2024-10-08 18:20:39.492825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.043 Running I/O for 5 seconds... 
00:10:11.043 [2024-10-08 18:20:39.514000] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.044 [2024-10-08 18:20:39.514082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.044 [2024-10-08 18:20:39.535737] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.044 [2024-10-08 18:20:39.535769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.044 [2024-10-08 18:20:39.556910] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.044 [2024-10-08 18:20:39.556987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.044 [2024-10-08 18:20:39.578854] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.044 [2024-10-08 18:20:39.578897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.304 [2024-10-08 18:20:39.599912] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.304 [2024-10-08 18:20:39.599998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.304 [2024-10-08 18:20:39.621785] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.304 [2024-10-08 18:20:39.621818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.304 [2024-10-08 18:20:39.637763] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.304 [2024-10-08 18:20:39.637796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.304 [2024-10-08 18:20:39.658179] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.304 [2024-10-08 18:20:39.658250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.304 [2024-10-08 18:20:39.679222] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.304 [2024-10-08 18:20:39.679301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.304 [2024-10-08 18:20:39.700184] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.304 [2024-10-08 18:20:39.700254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.304 [2024-10-08 18:20:39.720768] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.304 [2024-10-08 18:20:39.720805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.304 [2024-10-08 18:20:39.741880] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.304 [2024-10-08 18:20:39.741949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.304 [2024-10-08 18:20:39.761190] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.304 [2024-10-08 18:20:39.761263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.304 [2024-10-08 18:20:39.782511] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.304 [2024-10-08 18:20:39.782581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.304 [2024-10-08 18:20:39.804828] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.304 
[2024-10-08 18:20:39.804859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.304 [2024-10-08 18:20:39.825890] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.304 [2024-10-08 18:20:39.825971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.564 [2024-10-08 18:20:39.847602] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.564 [2024-10-08 18:20:39.847705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.564 [2024-10-08 18:20:39.864022] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.564 [2024-10-08 18:20:39.864098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.564 [2024-10-08 18:20:39.884925] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.564 [2024-10-08 18:20:39.885011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.564 [2024-10-08 18:20:39.905966] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.564 [2024-10-08 18:20:39.906037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.564 [2024-10-08 18:20:39.927392] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.564 [2024-10-08 18:20:39.927464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.564 [2024-10-08 18:20:39.948818] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.564 [2024-10-08 18:20:39.948850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.564 [2024-10-08 18:20:39.966991] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.564 [2024-10-08 18:20:39.967072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.564 [2024-10-08 18:20:39.988778] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.564 [2024-10-08 18:20:39.988810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.564 [2024-10-08 18:20:40.010198] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.564 [2024-10-08 18:20:40.010282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.564 [2024-10-08 18:20:40.030912] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.564 [2024-10-08 18:20:40.030995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.564 [2024-10-08 18:20:40.052757] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.564 [2024-10-08 18:20:40.052791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.564 [2024-10-08 18:20:40.070499] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.564 [2024-10-08 18:20:40.070572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.564 [2024-10-08 18:20:40.087860] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.564 [2024-10-08 18:20:40.087892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.824 [2024-10-08 18:20:40.109705] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.824 [2024-10-08 18:20:40.109737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.824 [2024-10-08 18:20:40.130813] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.824 [2024-10-08 18:20:40.130845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.824 [2024-10-08 18:20:40.148886] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.824 [2024-10-08 18:20:40.148956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.824 [2024-10-08 18:20:40.169992] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.824 [2024-10-08 18:20:40.170062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.824 [2024-10-08 18:20:40.192104] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.824 [2024-10-08 18:20:40.192175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.824 [2024-10-08 18:20:40.213869] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.824 [2024-10-08 18:20:40.213901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.824 [2024-10-08 18:20:40.235326] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.824 [2024-10-08 18:20:40.235398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.824 [2024-10-08 18:20:40.252842] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.824 [2024-10-08 18:20:40.252874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.824 [2024-10-08 18:20:40.274350] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.824 [2024-10-08 18:20:40.274423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.824 [2024-10-08 18:20:40.292591] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.824 [2024-10-08 18:20:40.292701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.824 [2024-10-08 18:20:40.314130] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.824 [2024-10-08 18:20:40.314200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.824 [2024-10-08 18:20:40.335495] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.824 [2024-10-08 18:20:40.335565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.824 [2024-10-08 18:20:40.356334] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.824 [2024-10-08 18:20:40.356439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.087 [2024-10-08 18:20:40.377279] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.087 [2024-10-08 18:20:40.377353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.087 [2024-10-08 18:20:40.398505] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.087 [2024-10-08 18:20:40.398575] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.087 [2024-10-08 18:20:40.414844] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.087 [2024-10-08 18:20:40.414875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.087 [2024-10-08 18:20:40.432254] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.087 [2024-10-08 18:20:40.432325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.087 [2024-10-08 18:20:40.452745] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.087 [2024-10-08 18:20:40.452776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.087 [2024-10-08 18:20:40.472745] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.087 [2024-10-08 18:20:40.472776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.087 [2024-10-08 18:20:40.493089] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.087 [2024-10-08 18:20:40.493161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.087 5967.00 IOPS, 46.62 MiB/s [2024-10-08T16:20:40.624Z] [2024-10-08 18:20:40.513884] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.087 [2024-10-08 18:20:40.513916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.087 [2024-10-08 18:20:40.533740] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.087 [2024-10-08 18:20:40.533773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.087 [2024-10-08 18:20:40.554600] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.087 [2024-10-08 18:20:40.554698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.087 [2024-10-08 18:20:40.569758] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.087 [2024-10-08 18:20:40.569790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.087 [2024-10-08 18:20:40.589885] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.087 [2024-10-08 18:20:40.589950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.087 [2024-10-08 18:20:40.610408] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.087 [2024-10-08 18:20:40.610479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.347 [2024-10-08 18:20:40.631898] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.347 [2024-10-08 18:20:40.631984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.347 [2024-10-08 18:20:40.652769] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.347 [2024-10-08 18:20:40.652809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.347 [2024-10-08 18:20:40.673908] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.347 [2024-10-08 18:20:40.673993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.347 [2024-10-08 
18:20:40.690027] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.347 [2024-10-08 18:20:40.690096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.347 [2024-10-08 18:20:40.712019] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.347 [2024-10-08 18:20:40.712088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.347 [2024-10-08 18:20:40.734780] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.347 [2024-10-08 18:20:40.734815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.347 [2024-10-08 18:20:40.751876] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.347 [2024-10-08 18:20:40.751908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.347 [2024-10-08 18:20:40.772325] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.347 [2024-10-08 18:20:40.772394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.347 [2024-10-08 18:20:40.793350] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.347 [2024-10-08 18:20:40.793419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.347 [2024-10-08 18:20:40.812177] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.347 [2024-10-08 18:20:40.812246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.347 [2024-10-08 18:20:40.833391] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.347 [2024-10-08 18:20:40.833462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.347 [2024-10-08 18:20:40.853474] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.347 [2024-10-08 18:20:40.853552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.347 [2024-10-08 18:20:40.874952] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.347 [2024-10-08 18:20:40.875022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.610 [2024-10-08 18:20:40.896012] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.610 [2024-10-08 18:20:40.896083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.610 [2024-10-08 18:20:40.917429] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.610 [2024-10-08 18:20:40.917500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.610 [2024-10-08 18:20:40.937750] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.610 [2024-10-08 18:20:40.937782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.610 [2024-10-08 18:20:40.958782] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.610 [2024-10-08 18:20:40.958813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.610 [2024-10-08 18:20:40.980206] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.610 [2024-10-08 18:20:40.980297] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.610 [2024-10-08 18:20:41.001920] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.610 [2024-10-08 18:20:41.001992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.610 [2024-10-08 18:20:41.023805] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.610 [2024-10-08 18:20:41.023836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.610 [2024-10-08 18:20:41.044535] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.610 [2024-10-08 18:20:41.044619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.610 [2024-10-08 18:20:41.064183] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.610 [2024-10-08 18:20:41.064253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.610 [2024-10-08 18:20:41.085861] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.610 [2024-10-08 18:20:41.085892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.610 [2024-10-08 18:20:41.108015] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.610 [2024-10-08 18:20:41.108085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.610 [2024-10-08 18:20:41.128842] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.610 [2024-10-08 18:20:41.128874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.926 [2024-10-08 18:20:41.148638] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.926 [2024-10-08 18:20:41.148691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.926 [2024-10-08 18:20:41.161056] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.926 [2024-10-08 18:20:41.161087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.926 [2024-10-08 18:20:41.180487] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.926 [2024-10-08 18:20:41.180558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.926 [2024-10-08 18:20:41.203051] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.926 [2024-10-08 18:20:41.203127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.926 [2024-10-08 18:20:41.224882] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.926 [2024-10-08 18:20:41.224953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.926 [2024-10-08 18:20:41.245065] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.926 [2024-10-08 18:20:41.245141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.926 [2024-10-08 18:20:41.266456] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.926 [2024-10-08 18:20:41.266526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.926 [2024-10-08 18:20:41.287835] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.926 [2024-10-08 18:20:41.287873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.926 [2024-10-08 18:20:41.305511] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.926 [2024-10-08 18:20:41.305587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.926 [2024-10-08 18:20:41.327951] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.926 [2024-10-08 18:20:41.328022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.926 [2024-10-08 18:20:41.349034] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.926 [2024-10-08 18:20:41.349109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.926 [2024-10-08 18:20:41.370068] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.926 [2024-10-08 18:20:41.370137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.926 [2024-10-08 18:20:41.391035] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.926 [2024-10-08 18:20:41.391103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.926 [2024-10-08 18:20:41.410493] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.926 [2024-10-08 18:20:41.410524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.926 [2024-10-08 18:20:41.425751] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.926 [2024-10-08 18:20:41.425791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.219 [2024-10-08 18:20:41.442144] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.219 [2024-10-08 18:20:41.442175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.219 [2024-10-08 18:20:41.454750] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.219 [2024-10-08 18:20:41.454781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.219 [2024-10-08 18:20:41.471541] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.219 [2024-10-08 18:20:41.471572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.219 [2024-10-08 18:20:41.489494] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.219 [2024-10-08 18:20:41.489563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.219 6118.50 IOPS, 47.80 MiB/s [2024-10-08T16:20:41.756Z] [2024-10-08 18:20:41.508785] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.219 [2024-10-08 18:20:41.508816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.219 [2024-10-08 18:20:41.526034] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.219 [2024-10-08 18:20:41.526104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.219 [2024-10-08 18:20:41.543080] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:13.219 [2024-10-08 18:20:41.543151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.219 [2024-10-08 18:20:41.561451] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.219 [2024-10-08 18:20:41.561521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.219 [2024-10-08 18:20:41.581892] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.219 [2024-10-08 18:20:41.581966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.219 [2024-10-08 18:20:41.603711] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.219 [2024-10-08 18:20:41.603744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.219 [2024-10-08 18:20:41.621628] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.219 [2024-10-08 18:20:41.621714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.219 [2024-10-08 18:20:41.642701] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.219 [2024-10-08 18:20:41.642731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.219 [2024-10-08 18:20:41.664102] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.219 [2024-10-08 18:20:41.664172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.219 [2024-10-08 18:20:41.685417] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.219 [2024-10-08 18:20:41.685487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.219 [2024-10-08 18:20:41.703376] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.219 [2024-10-08 18:20:41.703456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.219 [2024-10-08 18:20:41.724742] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.219 [2024-10-08 18:20:41.724773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.219 [2024-10-08 18:20:41.745890] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.219 [2024-10-08 18:20:41.745922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.479 [2024-10-08 18:20:41.762097] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.479 [2024-10-08 18:20:41.762181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.479 [2024-10-08 18:20:41.783898] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.479 [2024-10-08 18:20:41.783968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.479 [2024-10-08 18:20:41.805491] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.479 [2024-10-08 18:20:41.805563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.479 [2024-10-08 18:20:41.825003] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.479 [2024-10-08 18:20:41.825088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.479 [2024-10-08 18:20:41.846972] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.479 [2024-10-08 18:20:41.847049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.479 [2024-10-08 18:20:41.867216] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.479 [2024-10-08 18:20:41.867288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.479 [2024-10-08 18:20:41.887733] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.479 [2024-10-08 18:20:41.887766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.479 [2024-10-08 18:20:41.908364] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.479 [2024-10-08 18:20:41.908434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.479 [2024-10-08 18:20:41.929446] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.479 [2024-10-08 18:20:41.929516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.479 [2024-10-08 18:20:41.950254] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.479 [2024-10-08 18:20:41.950325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.479 [2024-10-08 18:20:41.969148] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.479 [2024-10-08 18:20:41.969218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.479 [2024-10-08 18:20:41.990727] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.479 [2024-10-08 18:20:41.990758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.479 [2024-10-08 18:20:42.008532] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.479 [2024-10-08 18:20:42.008601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.739 [2024-10-08 18:20:42.030348] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.739 [2024-10-08 18:20:42.030420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.739 [2024-10-08 18:20:42.051813] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.739 [2024-10-08 18:20:42.051845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.739 [2024-10-08 18:20:42.072714] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.739 [2024-10-08 18:20:42.072745] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.739 [2024-10-08 18:20:42.089239] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.739 [2024-10-08 18:20:42.089308] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.739 [2024-10-08 18:20:42.109570] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.739 [2024-10-08 18:20:42.109639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.739 [2024-10-08 18:20:42.129818] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.739 [2024-10-08 18:20:42.129849] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.739 [2024-10-08 18:20:42.150709] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.739 [2024-10-08 18:20:42.150740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.739 [2024-10-08 18:20:42.172833] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.739 [2024-10-08 18:20:42.172865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.739 [2024-10-08 18:20:42.195143] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.739 [2024-10-08 18:20:42.195212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.739 [2024-10-08 18:20:42.216268] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.739 [2024-10-08 18:20:42.216337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.739 [2024-10-08 18:20:42.236835] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.739 [2024-10-08 18:20:42.236866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.739 [2024-10-08 18:20:42.256822] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.739 [2024-10-08 18:20:42.256861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.998 [2024-10-08 18:20:42.278834] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.998 [2024-10-08 18:20:42.278874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.998 [2024-10-08 18:20:42.300883] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.998 [2024-10-08 18:20:42.300914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.998 [2024-10-08 18:20:42.321879] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.998 [2024-10-08 18:20:42.321953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.998 [2024-10-08 18:20:42.342699] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.998 [2024-10-08 18:20:42.342730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.998 [2024-10-08 18:20:42.359870] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.998 [2024-10-08 18:20:42.359912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.999 [2024-10-08 18:20:42.380985] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.999 [2024-10-08 18:20:42.381057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.999 [2024-10-08 18:20:42.402275] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.999 [2024-10-08 18:20:42.402346] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.999 [2024-10-08 18:20:42.424092] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.999 [2024-10-08 18:20:42.424163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.999 [2024-10-08 18:20:42.445830] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.999 [2024-10-08 18:20:42.445861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.999 [2024-10-08 18:20:42.462291] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.999 [2024-10-08 18:20:42.462362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.999 [2024-10-08 18:20:42.482180] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.999 [2024-10-08 18:20:42.482250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.999 [2024-10-08 18:20:42.503488] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.999 [2024-10-08 18:20:42.503558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.999 6076.33 IOPS, 47.47 MiB/s [2024-10-08T16:20:42.536Z] [2024-10-08 18:20:42.524717] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.999 [2024-10-08 18:20:42.524748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.259 [2024-10-08 18:20:42.546227] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.259 [2024-10-08 18:20:42.546316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.259 [2024-10-08 18:20:42.567779] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.259 [2024-10-08 18:20:42.567810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.259 [2024-10-08 18:20:42.589487] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.259 [2024-10-08 18:20:42.589557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.259 [2024-10-08 18:20:42.610301] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.259 [2024-10-08 18:20:42.610371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.259 [2024-10-08 18:20:42.630753] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.259 [2024-10-08 18:20:42.630785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.259 [2024-10-08 18:20:42.652301] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.259 [2024-10-08 18:20:42.652372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.259 [2024-10-08 18:20:42.673805] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.259 [2024-10-08 18:20:42.673836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.259 [2024-10-08 18:20:42.691553] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.259 [2024-10-08 18:20:42.691623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.259 [2024-10-08 18:20:42.712633] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.259 [2024-10-08 18:20:42.712720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.259 [2024-10-08 18:20:42.733621] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:14.259 [2024-10-08 18:20:42.733709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:14.259-00:10:16.082 [2024-10-08 18:20:42.754837 .. 18:20:44.525138] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:14.259-00:10:16.082 [2024-10-08 18:20:42.754869 .. 18:20:44.525206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
(the two messages above are logged as a pair, once per nvmf_subsystem_add_ns attempt, every 10-20 ms throughout this interval while the Nvme1n1 I/O job is running)
00:10:15.041 6054.50 IOPS, 47.30 MiB/s [2024-10-08T16:20:43.578Z]
00:10:16.082 6034.80 IOPS, 47.15 MiB/s [2024-10-08T16:20:44.619Z]
00:10:16.082 Latency(us)
00:10:16.082 [2024-10-08T16:20:44.619Z] Device Information : runtime(s)     IOPS   MiB/s   Fail/s   TO/s    Average       min       max
00:10:16.082 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:10:16.082                             Nvme1n1            :      5.02  6036.99   47.16     0.00   0.00   21160.17   5946.79  34952.53
00:10:16.082 [2024-10-08T16:20:44.619Z] ===================================================================================================================
00:10:16.082 [2024-10-08T16:20:44.619Z] Total              :             6036.99   47.16     0.00   0.00   21160.17   5946.79  34952.53
00:10:16.082-00:10:16.342 [2024-10-08 18:20:44.531831 .. 18:20:44.852861] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:16.082-00:10:16.342 [2024-10-08 18:20:44.531859 .. 18:20:44.852894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
(the same pair keeps repeating, now roughly every 8 ms, after the I/O job has finished)
00:10:16.342 [2024-10-08 18:20:44.860864] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-10-08 18:20:44.860889]
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.342 [2024-10-08 18:20:44.868885] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.342 [2024-10-08 18:20:44.868926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.342 [2024-10-08 18:20:44.876981] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.342 [2024-10-08 18:20:44.877051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.603 [2024-10-08 18:20:44.884942] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.603 [2024-10-08 18:20:44.885012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1115990) - No such process 00:10:16.603 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1115990 00:10:16.603 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:16.603 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.603 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:16.603 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.603 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:16.603 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.603 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:16.603 delay0 00:10:16.603 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.603 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:16.603 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.603 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:16.603 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.603 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:16.603 [2024-10-08 18:20:45.056753] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:23.185 Initializing NVMe Controllers 00:10:23.185 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:23.185 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:23.185 Initialization complete. Launching workers. 
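For readers following the trace, the namespace swap and abort run launched above (target/zcopy.sh lines 52-56) reduce to the standalone commands sketched below; the run's completion statistics follow right after. This is only a sketch: the paths, NQN and flags are copied from the trace, while the direct rpc.py invocation is an assumption (the test itself drives the same RPCs through its rpc_cmd wrapper).

  # Hedged sketch, assuming a running nvmf target that already exports malloc0 as NSID 1 of cnode1.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Swap the namespace for a deliberately slow delay bdev so queued I/O is still in flight when aborts arrive.
  $SPDK/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  $SPDK/scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # Queue random I/O against the slow namespace for 5 seconds and submit aborts for it.
  $SPDK/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'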
00:10:23.185 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 159 00:10:23.185 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 446, failed to submit 33 00:10:23.185 success 283, unsuccessful 163, failed 0 00:10:23.185 18:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:23.185 18:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:23.185 18:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:23.185 18:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:23.185 18:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:23.185 18:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:23.185 18:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:23.185 18:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:23.185 rmmod nvme_tcp 00:10:23.185 rmmod nvme_fabrics 00:10:23.185 rmmod nvme_keyring 00:10:23.185 18:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:23.185 18:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:23.185 18:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:23.185 18:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 1114514 ']' 00:10:23.185 18:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 1114514 00:10:23.185 18:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 1114514 ']' 00:10:23.185 18:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 1114514 00:10:23.185 18:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:10:23.185 18:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:23.185 18:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1114514 00:10:23.185 18:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:23.185 18:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:23.185 18:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1114514' 00:10:23.185 killing process with pid 1114514 00:10:23.185 18:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 1114514 00:10:23.185 18:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 1114514 00:10:23.445 18:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:23.445 18:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:23.445 18:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:23.445 18:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:10:23.445 18:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save 00:10:23.445 18:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:23.445 18:20:51 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore 00:10:23.445 18:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:23.445 18:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:23.445 18:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:23.445 18:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:23.445 18:20:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:25.355 18:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:25.355 00:10:25.355 real 0m31.256s 00:10:25.355 user 0m44.158s 00:10:25.355 sys 0m10.038s 00:10:25.355 18:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:25.355 18:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:25.355 ************************************ 00:10:25.355 END TEST nvmf_zcopy 00:10:25.355 ************************************ 00:10:25.355 18:20:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:25.355 18:20:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:25.355 18:20:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:25.355 18:20:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:25.615 ************************************ 00:10:25.615 START TEST nvmf_nmic 00:10:25.615 ************************************ 00:10:25.615 18:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:25.615 * Looking for test storage... 
00:10:25.615 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:25.615 18:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:25.615 18:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version 00:10:25.615 18:20:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:25.875 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:25.875 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:25.875 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:25.875 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:25.875 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:25.875 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:25.875 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:25.875 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:25.875 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:25.875 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:25.875 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:25.875 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:25.875 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:25.875 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:25.875 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:25.875 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:25.875 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:25.875 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:25.875 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:25.875 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:25.875 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:25.875 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:25.875 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:25.875 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:25.875 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:25.875 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:25.875 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:25.875 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:25.875 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:25.875 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:25.875 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:25.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.875 --rc genhtml_branch_coverage=1 00:10:25.875 --rc genhtml_function_coverage=1 00:10:25.875 --rc genhtml_legend=1 00:10:25.875 --rc geninfo_all_blocks=1 00:10:25.875 --rc geninfo_unexecuted_blocks=1 00:10:25.875 00:10:25.875 ' 00:10:25.875 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:25.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.875 --rc genhtml_branch_coverage=1 00:10:25.875 --rc genhtml_function_coverage=1 00:10:25.875 --rc genhtml_legend=1 00:10:25.875 --rc geninfo_all_blocks=1 00:10:25.875 --rc geninfo_unexecuted_blocks=1 00:10:25.875 00:10:25.875 ' 00:10:25.875 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:25.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.875 --rc genhtml_branch_coverage=1 00:10:25.875 --rc genhtml_function_coverage=1 00:10:25.875 --rc genhtml_legend=1 00:10:25.875 --rc geninfo_all_blocks=1 00:10:25.875 --rc geninfo_unexecuted_blocks=1 00:10:25.875 00:10:25.875 ' 00:10:25.875 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:25.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.875 --rc genhtml_branch_coverage=1 00:10:25.875 --rc genhtml_function_coverage=1 00:10:25.875 --rc genhtml_legend=1 00:10:25.875 --rc geninfo_all_blocks=1 00:10:25.875 --rc geninfo_unexecuted_blocks=1 00:10:25.875 00:10:25.875 ' 00:10:25.875 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:25.875 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:25.875 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:10:25.875 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:25.875 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:25.875 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:25.876 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:25.876 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:25.876 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:25.876 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:25.876 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:25.876 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:25.876 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:10:25.876 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:10:25.876 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:25.876 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:25.876 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:25.876 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:25.876 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:25.876 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:25.876 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:25.876 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:25.876 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:25.876 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.876 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.876 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.876 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:25.876 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.876 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:25.876 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:25.876 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:25.876 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:25.876 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:25.876 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:25.876 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:25.876 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:25.876 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:25.876 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:25.876 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:25.876 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:25.876 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:25.876 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:25.876 
18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:25.876 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:25.876 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:25.876 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:25.876 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:25.876 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:25.876 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:25.876 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:25.876 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:25.876 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:25.876 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:10:25.876 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:29.166 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:29.166 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:29.166 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:29.166 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:29.166 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:29.166 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:29.166 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:29.166 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:29.166 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:29.166 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:29.166 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:29.166 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:29.166 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:29.166 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:29.166 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:29.166 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:29.166 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:29.166 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:29.166 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:29.166 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:29.166 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:29.166 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:29.166 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:29.166 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:29.166 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:29.166 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:29.166 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:29.166 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:29.166 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:29.166 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:29.166 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:29.166 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:10:29.167 Found 0000:84:00.0 (0x8086 - 0x159b) 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:10:29.167 Found 0000:84:00.1 (0x8086 - 0x159b) 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:29.167 18:20:57 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:10:29.167 Found net devices under 0000:84:00.0: cvl_0_0 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:10:29.167 Found net devices under 0000:84:00.1: cvl_0_1 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:29.167 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:29.167 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.313 ms 00:10:29.167 00:10:29.167 --- 10.0.0.2 ping statistics --- 00:10:29.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:29.167 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:29.167 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:29.167 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:10:29.167 00:10:29.167 --- 10.0.0.1 ping statistics --- 00:10:29.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:29.167 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # return 0 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=1119530 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # waitforlisten 1119530 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 1119530 ']' 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:29.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:29.167 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:29.167 [2024-10-08 18:20:57.325496] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
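The interface plumbing traced above reduces to the following sequence (condensed from the nvmf_tcp_init steps in this run; the cvl_0_0/cvl_0_1 interface names and 10.0.0.0/24 addresses are the ones this rig reports):

  # target-side port is isolated in its own network namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # initiator keeps its address in the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port on the initiator side
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  # verify both directions before starting nvmf_tgt
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1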
00:10:29.167 [2024-10-08 18:20:57.325683] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:29.167 [2024-10-08 18:20:57.489153] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:29.427 [2024-10-08 18:20:57.722258] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:29.427 [2024-10-08 18:20:57.722361] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:29.427 [2024-10-08 18:20:57.722397] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:29.427 [2024-10-08 18:20:57.722426] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:29.427 [2024-10-08 18:20:57.722454] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:29.427 [2024-10-08 18:20:57.726138] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:10:29.427 [2024-10-08 18:20:57.726246] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:10:29.427 [2024-10-08 18:20:57.726337] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:10:29.427 [2024-10-08 18:20:57.726340] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.427 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:29.427 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:10:29.427 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:29.427 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:29.427 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:29.427 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:29.427 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:29.427 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.427 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:29.427 [2024-10-08 18:20:57.914361] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:29.427 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.427 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:29.427 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.427 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:29.427 Malloc0 00:10:29.427 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.427 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:29.427 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.427 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:10:29.427 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.427 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:29.427 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.427 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:29.427 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.427 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:29.427 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.427 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:29.685 [2024-10-08 18:20:57.967328] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:29.685 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.685 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:29.686 test case1: single bdev can't be used in multiple subsystems 00:10:29.686 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:29.686 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.686 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:29.686 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.686 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:29.686 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.686 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:29.686 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.686 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:29.686 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:29.686 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.686 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:29.686 [2024-10-08 18:20:57.991132] bdev.c:8202:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:29.686 [2024-10-08 18:20:57.991168] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:29.686 [2024-10-08 18:20:57.991184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.686 request: 00:10:29.686 { 00:10:29.686 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:29.686 "namespace": { 00:10:29.686 "bdev_name": "Malloc0", 00:10:29.686 "no_auto_visible": false 
00:10:29.686 }, 00:10:29.686 "method": "nvmf_subsystem_add_ns", 00:10:29.686 "req_id": 1 00:10:29.686 } 00:10:29.686 Got JSON-RPC error response 00:10:29.686 response: 00:10:29.686 { 00:10:29.686 "code": -32602, 00:10:29.686 "message": "Invalid parameters" 00:10:29.686 } 00:10:29.686 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:29.686 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:29.686 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:29.686 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:29.686 Adding namespace failed - expected result. 00:10:29.686 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:29.686 test case2: host connect to nvmf target in multiple paths 00:10:29.686 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:29.686 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.686 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:29.686 [2024-10-08 18:20:57.999253] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:29.686 18:20:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.686 18:20:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:30.251 18:20:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:30.818 18:20:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:30.818 18:20:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:10:30.818 18:20:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:30.818 18:20:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:30.818 18:20:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:10:33.343 18:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:33.343 18:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:33.343 18:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:33.343 18:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:33.343 18:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:33.343 18:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:10:33.343 18:21:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
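test case1 above exercises SPDK's exclusive bdev claims: once Malloc0 is attached to cnode1, attaching it to a second subsystem is rejected with JSON-RPC error -32602. The rpc_cmd calls in the trace correspond roughly to the following scripts/rpc.py invocations (default /var/tmp/spdk.sock socket assumed):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # first claim succeeds
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0    # fails: Malloc0 already
                                                                   # claimed exclusive_write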
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:33.343 [global] 00:10:33.343 thread=1 00:10:33.343 invalidate=1 00:10:33.343 rw=write 00:10:33.343 time_based=1 00:10:33.343 runtime=1 00:10:33.343 ioengine=libaio 00:10:33.343 direct=1 00:10:33.343 bs=4096 00:10:33.343 iodepth=1 00:10:33.343 norandommap=0 00:10:33.343 numjobs=1 00:10:33.343 00:10:33.343 verify_dump=1 00:10:33.343 verify_backlog=512 00:10:33.343 verify_state_save=0 00:10:33.343 do_verify=1 00:10:33.343 verify=crc32c-intel 00:10:33.343 [job0] 00:10:33.343 filename=/dev/nvme0n1 00:10:33.343 Could not set queue depth (nvme0n1) 00:10:33.343 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:33.343 fio-3.35 00:10:33.343 Starting 1 thread 00:10:34.276 00:10:34.276 job0: (groupid=0, jobs=1): err= 0: pid=1120122: Tue Oct 8 18:21:02 2024 00:10:34.276 read: IOPS=2157, BW=8631KiB/s (8839kB/s)(8640KiB/1001msec) 00:10:34.276 slat (nsec): min=5241, max=33247, avg=7704.41, stdev=3486.55 00:10:34.276 clat (usec): min=174, max=41091, avg=260.62, stdev=1242.23 00:10:34.276 lat (usec): min=183, max=41098, avg=268.32, stdev=1242.44 00:10:34.276 clat percentiles (usec): 00:10:34.276 | 1.00th=[ 186], 5.00th=[ 192], 10.00th=[ 196], 20.00th=[ 202], 00:10:34.276 | 30.00th=[ 208], 40.00th=[ 210], 50.00th=[ 215], 60.00th=[ 221], 00:10:34.276 | 70.00th=[ 229], 80.00th=[ 239], 90.00th=[ 265], 95.00th=[ 281], 00:10:34.276 | 99.00th=[ 322], 99.50th=[ 347], 99.90th=[ 529], 99.95th=[41157], 00:10:34.276 | 99.99th=[41157] 00:10:34.276 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:34.276 slat (nsec): min=5554, max=36145, avg=8252.99, stdev=2416.01 00:10:34.276 clat (usec): min=124, max=503, avg=151.69, stdev=15.10 00:10:34.276 lat (usec): min=130, max=510, avg=159.94, stdev=15.51 00:10:34.276 clat percentiles (usec): 00:10:34.276 | 1.00th=[ 129], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 141], 00:10:34.276 | 30.00th=[ 145], 40.00th=[ 147], 50.00th=[ 149], 60.00th=[ 153], 00:10:34.276 | 70.00th=[ 157], 80.00th=[ 163], 90.00th=[ 172], 95.00th=[ 176], 00:10:34.276 | 99.00th=[ 188], 99.50th=[ 194], 99.90th=[ 239], 99.95th=[ 239], 00:10:34.276 | 99.99th=[ 502] 00:10:34.276 bw ( KiB/s): min= 8984, max= 8984, per=87.82%, avg=8984.00, stdev= 0.00, samples=1 00:10:34.276 iops : min= 2246, max= 2246, avg=2246.00, stdev= 0.00, samples=1 00:10:34.276 lat (usec) : 250=94.58%, 500=5.32%, 750=0.06% 00:10:34.276 lat (msec) : 50=0.04% 00:10:34.276 cpu : usr=2.50%, sys=3.40%, ctx=4720, majf=0, minf=1 00:10:34.276 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:34.276 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.276 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.276 issued rwts: total=2160,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:34.276 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:34.276 00:10:34.276 Run status group 0 (all jobs): 00:10:34.276 READ: bw=8631KiB/s (8839kB/s), 8631KiB/s-8631KiB/s (8839kB/s-8839kB/s), io=8640KiB (8847kB), run=1001-1001msec 00:10:34.276 WRITE: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:10:34.276 00:10:34.276 Disk stats (read/write): 00:10:34.276 nvme0n1: ios=2098/2108, merge=0/0, ticks=562/317, in_queue=879, util=91.98% 00:10:34.276 18:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:34.535 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:34.535 18:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:34.535 18:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:10:34.535 18:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:34.535 18:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:34.535 18:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:34.535 18:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:34.535 18:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:10:34.535 18:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:34.535 18:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:34.535 18:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:34.535 18:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:34.535 18:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:34.535 18:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:34.535 18:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:34.535 18:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:34.535 rmmod nvme_tcp 00:10:34.535 rmmod nvme_fabrics 00:10:34.535 rmmod nvme_keyring 00:10:34.535 18:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:34.535 18:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:34.535 18:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:34.535 18:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 1119530 ']' 00:10:34.535 18:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 1119530 00:10:34.535 18:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 1119530 ']' 00:10:34.535 18:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 1119530 00:10:34.535 18:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:10:34.535 18:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:34.535 18:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1119530 00:10:34.535 18:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:34.535 18:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:34.535 18:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1119530' 00:10:34.535 killing process with pid 1119530 00:10:34.535 18:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 1119530 00:10:34.535 18:21:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
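test case2 connects the host to the same subsystem through two listeners (ports 4420 and 4421), which is why the single disconnect by NQN above reports 'disconnected 2 controller(s)'. Condensed from the trace, with the host NQN/ID standing in for the values nvmf/common.sh generates for this run:

  nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp \
      -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp \
      -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
  # ... fio write workload against /dev/nvme0n1 ...
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1    # tears down both paths at once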
common/autotest_common.sh@974 -- # wait 1119530 00:10:35.105 18:21:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:35.105 18:21:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:35.105 18:21:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:35.105 18:21:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:35.105 18:21:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:10:35.105 18:21:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:35.105 18:21:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:10:35.105 18:21:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:35.105 18:21:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:35.105 18:21:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:35.105 18:21:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:35.105 18:21:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:37.014 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:37.014 00:10:37.014 real 0m11.514s 00:10:37.014 user 0m23.333s 00:10:37.014 sys 0m3.491s 00:10:37.014 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:37.014 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:37.014 ************************************ 00:10:37.014 END TEST nvmf_nmic 00:10:37.014 ************************************ 00:10:37.014 18:21:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:37.014 18:21:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:37.014 18:21:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:37.014 18:21:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:37.014 ************************************ 00:10:37.014 START TEST nvmf_fio_target 00:10:37.014 ************************************ 00:10:37.014 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:37.274 * Looking for test storage... 
00:10:37.274 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:37.274 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:37.274 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:10:37.274 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:37.274 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:37.274 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:37.274 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:37.274 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:37.274 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:37.274 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:37.274 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:37.274 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:37.274 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:37.274 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:37.274 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:37.274 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:37.274 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:37.274 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:37.274 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:37.274 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:37.274 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:37.274 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:37.274 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:37.274 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:37.274 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:37.274 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:37.274 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:37.274 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:37.274 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:37.274 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:37.274 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:37.274 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:37.274 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:37.274 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:37.274 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:37.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.274 --rc genhtml_branch_coverage=1 00:10:37.274 --rc genhtml_function_coverage=1 00:10:37.274 --rc genhtml_legend=1 00:10:37.274 --rc geninfo_all_blocks=1 00:10:37.274 --rc geninfo_unexecuted_blocks=1 00:10:37.274 00:10:37.274 ' 00:10:37.274 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:37.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.274 --rc genhtml_branch_coverage=1 00:10:37.274 --rc genhtml_function_coverage=1 00:10:37.274 --rc genhtml_legend=1 00:10:37.274 --rc geninfo_all_blocks=1 00:10:37.274 --rc geninfo_unexecuted_blocks=1 00:10:37.274 00:10:37.274 ' 00:10:37.274 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:37.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.274 --rc genhtml_branch_coverage=1 00:10:37.274 --rc genhtml_function_coverage=1 00:10:37.274 --rc genhtml_legend=1 00:10:37.274 --rc geninfo_all_blocks=1 00:10:37.274 --rc geninfo_unexecuted_blocks=1 00:10:37.274 00:10:37.274 ' 00:10:37.274 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:37.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.274 --rc genhtml_branch_coverage=1 00:10:37.274 --rc genhtml_function_coverage=1 00:10:37.274 --rc genhtml_legend=1 00:10:37.274 --rc geninfo_all_blocks=1 00:10:37.274 --rc geninfo_unexecuted_blocks=1 00:10:37.274 00:10:37.274 ' 00:10:37.274 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:37.274 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:10:37.274 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:37.274 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:37.274 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:37.274 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:37.274 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:37.274 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:37.274 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:37.274 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:37.274 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:37.274 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:37.274 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:10:37.274 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:10:37.274 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:37.274 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:37.274 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:37.274 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:37.274 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:37.274 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:37.274 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:37.274 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:37.274 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:37.275 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.275 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.275 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.275 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:37.275 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.275 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:37.275 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:37.275 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:37.275 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:37.275 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:37.275 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:37.275 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:37.275 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:37.275 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:37.275 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:37.275 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:37.275 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:37.275 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:37.275 18:21:05 
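The '[: : integer expression expected' message above comes from the traced test '[' '' -eq 1 ']': bash's [ cannot compare an empty string numerically, so it prints the warning, returns non-zero, and the harness simply continues. A minimal reproduction and the usual guard, for reference:

  var=""                       # empty in this run
  [ "$var" -eq 1 ]             # -> [: : integer expression expected
  [ "${var:-0}" -eq 1 ]        # defaulting to 0 keeps the comparison quiet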
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:37.275 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:37.275 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:37.275 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:37.275 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:37.275 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:37.275 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:37.275 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:37.275 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:37.275 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:37.534 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:37.534 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:37.534 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:37.534 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:40.826 18:21:08 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:10:40.826 Found 0000:84:00.0 (0x8086 - 0x159b) 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:10:40.826 Found 0000:84:00.1 (0x8086 - 0x159b) 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:40.826 18:21:08 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:10:40.826 Found net devices under 0000:84:00.0: cvl_0_0 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:10:40.826 Found net devices under 0000:84:00.1: cvl_0_1 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:40.826 18:21:08 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:40.826 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:40.827 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:40.827 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:40.827 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:40.827 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:40.827 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:40.827 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:40.827 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:40.827 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:40.827 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:40.827 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:40.827 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:40.827 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:40.827 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:10:40.827 00:10:40.827 --- 10.0.0.2 ping statistics --- 00:10:40.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.827 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:10:40.827 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:40.827 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:40.827 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:10:40.827 00:10:40.827 --- 10.0.0.1 ping statistics --- 00:10:40.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.827 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:10:40.827 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:40.827 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:10:40.827 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:40.827 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:40.827 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:40.827 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:40.827 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:40.827 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:40.827 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:40.827 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:40.827 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:40.827 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:40.827 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.827 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=1122838 00:10:40.827 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:40.827 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 1122838 00:10:40.827 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 1122838 ']' 00:10:40.827 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.827 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:40.827 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:40.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:40.827 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:40.827 18:21:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.827 [2024-10-08 18:21:09.053238] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
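Both test setups open the NVMe/TCP port through the ipts wrapper, which tags every rule it inserts with an SPDK_NVMF comment; the iptr wrapper seen in the nvmf_nmic teardown earlier removes exactly those rules by filtering the comment back out of a saved ruleset. Roughly:

  # insert: the rule carries a greppable SPDK_NVMF tag
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  # cleanup: drop only the tagged rules, leave the rest of the ruleset untouched
  iptables-save | grep -v SPDK_NVMF | iptables-restore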
00:10:40.827 [2024-10-08 18:21:09.053425] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:40.827 [2024-10-08 18:21:09.202096] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:41.086 [2024-10-08 18:21:09.399127] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:41.086 [2024-10-08 18:21:09.399233] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:41.086 [2024-10-08 18:21:09.399270] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:41.086 [2024-10-08 18:21:09.399301] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:41.086 [2024-10-08 18:21:09.399328] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:41.086 [2024-10-08 18:21:09.403065] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:10:41.086 [2024-10-08 18:21:09.403167] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:10:41.086 [2024-10-08 18:21:09.403250] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:10:41.086 [2024-10-08 18:21:09.403254] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.086 18:21:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:41.086 18:21:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:10:41.086 18:21:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:41.086 18:21:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:41.086 18:21:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.086 18:21:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:41.086 18:21:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:41.650 [2024-10-08 18:21:10.065036] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:41.650 18:21:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:41.908 18:21:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:41.908 18:21:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:42.473 18:21:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:42.473 18:21:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:43.038 18:21:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:43.038 18:21:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:43.295 18:21:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:43.295 18:21:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:43.552 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:44.118 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:44.118 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:44.684 18:21:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:44.684 18:21:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:45.249 18:21:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:45.249 18:21:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:45.506 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:46.070 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:46.070 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:46.635 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:46.635 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:47.199 18:21:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:47.457 [2024-10-08 18:21:15.740270] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:47.457 18:21:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:48.022 18:21:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:48.587 18:21:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:49.152 18:21:17 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:49.152 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:10:49.152 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:49.152 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:10:49.152 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:10:49.152 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:10:51.738 18:21:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:51.738 18:21:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:51.738 18:21:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:51.738 18:21:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:10:51.738 18:21:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:51.738 18:21:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:10:51.738 18:21:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:51.738 [global] 00:10:51.738 thread=1 00:10:51.738 invalidate=1 00:10:51.738 rw=write 00:10:51.738 time_based=1 00:10:51.738 runtime=1 00:10:51.738 ioengine=libaio 00:10:51.738 direct=1 00:10:51.738 bs=4096 00:10:51.738 iodepth=1 00:10:51.738 norandommap=0 00:10:51.738 numjobs=1 00:10:51.738 00:10:51.738 verify_dump=1 00:10:51.738 verify_backlog=512 00:10:51.738 verify_state_save=0 00:10:51.738 do_verify=1 00:10:51.738 verify=crc32c-intel 00:10:51.738 [job0] 00:10:51.738 filename=/dev/nvme0n1 00:10:51.738 [job1] 00:10:51.738 filename=/dev/nvme0n2 00:10:51.738 [job2] 00:10:51.738 filename=/dev/nvme0n3 00:10:51.738 [job3] 00:10:51.738 filename=/dev/nvme0n4 00:10:51.738 Could not set queue depth (nvme0n1) 00:10:51.738 Could not set queue depth (nvme0n2) 00:10:51.738 Could not set queue depth (nvme0n3) 00:10:51.738 Could not set queue depth (nvme0n4) 00:10:51.738 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:51.738 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:51.738 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:51.738 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:51.738 fio-3.35 00:10:51.738 Starting 4 threads 00:10:52.671 00:10:52.671 job0: (groupid=0, jobs=1): err= 0: pid=1124363: Tue Oct 8 18:21:21 2024 00:10:52.671 read: IOPS=37, BW=152KiB/s (155kB/s)(152KiB/1002msec) 00:10:52.671 slat (nsec): min=5856, max=39070, avg=15067.03, stdev=7000.69 00:10:52.671 clat (usec): min=209, max=41316, avg=23646.95, stdev=20186.63 00:10:52.671 lat (usec): min=218, max=41323, avg=23662.02, stdev=20189.43 00:10:52.671 clat percentiles (usec): 00:10:52.671 | 1.00th=[ 210], 5.00th=[ 219], 10.00th=[ 235], 20.00th=[ 330], 
00:10:52.671 | 30.00th=[ 379], 40.00th=[ 529], 50.00th=[40633], 60.00th=[41157], 00:10:52.671 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:52.671 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:52.671 | 99.99th=[41157] 00:10:52.671 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:10:52.671 slat (nsec): min=6597, max=29114, avg=8442.37, stdev=2800.91 00:10:52.671 clat (usec): min=142, max=2037, avg=185.40, stdev=89.61 00:10:52.671 lat (usec): min=151, max=2049, avg=193.85, stdev=90.05 00:10:52.671 clat percentiles (usec): 00:10:52.671 | 1.00th=[ 147], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 155], 00:10:52.671 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 169], 00:10:52.671 | 70.00th=[ 180], 80.00th=[ 219], 90.00th=[ 243], 95.00th=[ 258], 00:10:52.671 | 99.00th=[ 281], 99.50th=[ 289], 99.90th=[ 2040], 99.95th=[ 2040], 00:10:52.671 | 99.99th=[ 2040] 00:10:52.671 bw ( KiB/s): min= 4096, max= 4096, per=34.17%, avg=4096.00, stdev= 0.00, samples=1 00:10:52.671 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:52.671 lat (usec) : 250=88.55%, 500=6.91%, 750=0.36% 00:10:52.671 lat (msec) : 4=0.18%, 50=4.00% 00:10:52.671 cpu : usr=0.10%, sys=0.50%, ctx=551, majf=0, minf=1 00:10:52.671 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:52.671 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.671 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.671 issued rwts: total=38,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:52.671 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:52.671 job1: (groupid=0, jobs=1): err= 0: pid=1124364: Tue Oct 8 18:21:21 2024 00:10:52.671 read: IOPS=1307, BW=5231KiB/s (5356kB/s)(5236KiB/1001msec) 00:10:52.671 slat (nsec): min=7122, max=52958, avg=10989.01, stdev=6236.30 00:10:52.671 clat (usec): min=182, max=41131, avg=513.66, stdev=3369.43 00:10:52.671 lat (usec): min=191, max=41144, avg=524.65, stdev=3370.36 00:10:52.671 clat percentiles (usec): 00:10:52.671 | 1.00th=[ 190], 5.00th=[ 198], 10.00th=[ 204], 20.00th=[ 210], 00:10:52.671 | 30.00th=[ 219], 40.00th=[ 225], 50.00th=[ 231], 60.00th=[ 237], 00:10:52.671 | 70.00th=[ 247], 80.00th=[ 258], 90.00th=[ 265], 95.00th=[ 277], 00:10:52.671 | 99.00th=[ 318], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:52.671 | 99.99th=[41157] 00:10:52.671 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:10:52.671 slat (nsec): min=9299, max=42538, avg=13199.62, stdev=4915.65 00:10:52.671 clat (usec): min=127, max=1460, avg=183.49, stdev=63.36 00:10:52.671 lat (usec): min=137, max=1471, avg=196.69, stdev=63.38 00:10:52.671 clat percentiles (usec): 00:10:52.671 | 1.00th=[ 135], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 145], 00:10:52.671 | 30.00th=[ 151], 40.00th=[ 159], 50.00th=[ 167], 60.00th=[ 182], 00:10:52.671 | 70.00th=[ 200], 80.00th=[ 215], 90.00th=[ 237], 95.00th=[ 245], 00:10:52.671 | 99.00th=[ 388], 99.50th=[ 445], 99.90th=[ 979], 99.95th=[ 1467], 00:10:52.671 | 99.99th=[ 1467] 00:10:52.671 bw ( KiB/s): min= 8192, max= 8192, per=68.33%, avg=8192.00, stdev= 0.00, samples=1 00:10:52.671 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:52.671 lat (usec) : 250=85.38%, 500=14.17%, 750=0.04%, 1000=0.07% 00:10:52.671 lat (msec) : 2=0.04%, 50=0.32% 00:10:52.671 cpu : usr=2.00%, sys=3.30%, ctx=2848, majf=0, minf=1 00:10:52.671 IO depths : 1=100.0%, 2=0.0%, 
4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:52.671 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.671 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.671 issued rwts: total=1309,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:52.671 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:52.671 job2: (groupid=0, jobs=1): err= 0: pid=1124365: Tue Oct 8 18:21:21 2024 00:10:52.671 read: IOPS=25, BW=102KiB/s (104kB/s)(104KiB/1024msec) 00:10:52.671 slat (nsec): min=7049, max=41625, avg=16631.19, stdev=7431.48 00:10:52.671 clat (usec): min=403, max=41109, avg=34717.08, stdev=14913.49 00:10:52.671 lat (usec): min=411, max=41134, avg=34733.72, stdev=14917.52 00:10:52.671 clat percentiles (usec): 00:10:52.671 | 1.00th=[ 404], 5.00th=[ 408], 10.00th=[ 441], 20.00th=[40633], 00:10:52.671 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:52.671 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:52.671 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:52.671 | 99.99th=[41157] 00:10:52.671 write: IOPS=500, BW=2000KiB/s (2048kB/s)(2048KiB/1024msec); 0 zone resets 00:10:52.671 slat (nsec): min=7669, max=51569, avg=9090.62, stdev=2667.04 00:10:52.671 clat (usec): min=161, max=576, avg=225.03, stdev=33.09 00:10:52.671 lat (usec): min=170, max=592, avg=234.12, stdev=33.55 00:10:52.671 clat percentiles (usec): 00:10:52.671 | 1.00th=[ 169], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 208], 00:10:52.671 | 30.00th=[ 221], 40.00th=[ 225], 50.00th=[ 227], 60.00th=[ 231], 00:10:52.671 | 70.00th=[ 235], 80.00th=[ 239], 90.00th=[ 247], 95.00th=[ 253], 00:10:52.671 | 99.00th=[ 359], 99.50th=[ 412], 99.90th=[ 578], 99.95th=[ 578], 00:10:52.671 | 99.99th=[ 578] 00:10:52.671 bw ( KiB/s): min= 4096, max= 4096, per=34.17%, avg=4096.00, stdev= 0.00, samples=1 00:10:52.671 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:52.671 lat (usec) : 250=88.66%, 500=7.06%, 750=0.19% 00:10:52.671 lat (msec) : 50=4.09% 00:10:52.671 cpu : usr=0.20%, sys=0.49%, ctx=538, majf=0, minf=1 00:10:52.671 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:52.671 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.671 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.671 issued rwts: total=26,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:52.671 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:52.671 job3: (groupid=0, jobs=1): err= 0: pid=1124366: Tue Oct 8 18:21:21 2024 00:10:52.671 read: IOPS=23, BW=93.7KiB/s (95.9kB/s)(96.0KiB/1025msec) 00:10:52.671 slat (nsec): min=7313, max=34790, avg=15954.29, stdev=4809.47 00:10:52.671 clat (usec): min=344, max=41056, avg=37594.82, stdev=11471.71 00:10:52.671 lat (usec): min=352, max=41066, avg=37610.77, stdev=11472.90 00:10:52.671 clat percentiles (usec): 00:10:52.671 | 1.00th=[ 347], 5.00th=[ 355], 10.00th=[41157], 20.00th=[41157], 00:10:52.671 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:52.671 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:52.671 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:52.671 | 99.99th=[41157] 00:10:52.671 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone resets 00:10:52.671 slat (usec): min=8, max=1107, avg=12.20, stdev=48.55 00:10:52.671 clat (usec): min=152, max=437, avg=224.32, stdev=25.75 
00:10:52.671 lat (usec): min=160, max=1313, avg=236.52, stdev=54.31 00:10:52.671 clat percentiles (usec): 00:10:52.671 | 1.00th=[ 167], 5.00th=[ 180], 10.00th=[ 200], 20.00th=[ 210], 00:10:52.672 | 30.00th=[ 219], 40.00th=[ 223], 50.00th=[ 227], 60.00th=[ 231], 00:10:52.672 | 70.00th=[ 233], 80.00th=[ 237], 90.00th=[ 243], 95.00th=[ 251], 00:10:52.672 | 99.00th=[ 297], 99.50th=[ 412], 99.90th=[ 437], 99.95th=[ 437], 00:10:52.672 | 99.99th=[ 437] 00:10:52.672 bw ( KiB/s): min= 4096, max= 4096, per=34.17%, avg=4096.00, stdev= 0.00, samples=1 00:10:52.672 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:52.672 lat (usec) : 250=90.49%, 500=5.41% 00:10:52.672 lat (msec) : 50=4.10% 00:10:52.672 cpu : usr=0.39%, sys=0.29%, ctx=539, majf=0, minf=1 00:10:52.672 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:52.672 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.672 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.672 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:52.672 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:52.672 00:10:52.672 Run status group 0 (all jobs): 00:10:52.672 READ: bw=5452KiB/s (5583kB/s), 93.7KiB/s-5231KiB/s (95.9kB/s-5356kB/s), io=5588KiB (5722kB), run=1001-1025msec 00:10:52.672 WRITE: bw=11.7MiB/s (12.3MB/s), 1998KiB/s-6138KiB/s (2046kB/s-6285kB/s), io=12.0MiB (12.6MB), run=1001-1025msec 00:10:52.672 00:10:52.672 Disk stats (read/write): 00:10:52.672 nvme0n1: ios=56/512, merge=0/0, ticks=1564/92, in_queue=1656, util=84.67% 00:10:52.672 nvme0n2: ios=1065/1024, merge=0/0, ticks=1117/197, in_queue=1314, util=88.69% 00:10:52.672 nvme0n3: ios=78/512, merge=0/0, ticks=777/112, in_queue=889, util=94.22% 00:10:52.672 nvme0n4: ios=93/512, merge=0/0, ticks=1287/116, in_queue=1403, util=94.16% 00:10:52.672 18:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:52.672 [global] 00:10:52.672 thread=1 00:10:52.672 invalidate=1 00:10:52.672 rw=randwrite 00:10:52.672 time_based=1 00:10:52.672 runtime=1 00:10:52.672 ioengine=libaio 00:10:52.672 direct=1 00:10:52.672 bs=4096 00:10:52.672 iodepth=1 00:10:52.672 norandommap=0 00:10:52.672 numjobs=1 00:10:52.672 00:10:52.672 verify_dump=1 00:10:52.672 verify_backlog=512 00:10:52.672 verify_state_save=0 00:10:52.672 do_verify=1 00:10:52.672 verify=crc32c-intel 00:10:52.672 [job0] 00:10:52.672 filename=/dev/nvme0n1 00:10:52.672 [job1] 00:10:52.672 filename=/dev/nvme0n2 00:10:52.672 [job2] 00:10:52.672 filename=/dev/nvme0n3 00:10:52.672 [job3] 00:10:52.672 filename=/dev/nvme0n4 00:10:52.672 Could not set queue depth (nvme0n1) 00:10:52.672 Could not set queue depth (nvme0n2) 00:10:52.672 Could not set queue depth (nvme0n3) 00:10:52.672 Could not set queue depth (nvme0n4) 00:10:52.930 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:52.930 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:52.930 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:52.930 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:52.930 fio-3.35 00:10:52.930 Starting 4 threads 00:10:54.304 00:10:54.304 job0: (groupid=0, jobs=1): err= 0: 
pid=1124591: Tue Oct 8 18:21:22 2024 00:10:54.304 read: IOPS=20, BW=82.5KiB/s (84.5kB/s)(84.0KiB/1018msec) 00:10:54.304 slat (nsec): min=10100, max=21506, avg=16240.71, stdev=2203.55 00:10:54.304 clat (usec): min=40826, max=41082, avg=40969.90, stdev=78.56 00:10:54.304 lat (usec): min=40836, max=41103, avg=40986.15, stdev=79.15 00:10:54.304 clat percentiles (usec): 00:10:54.304 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:10:54.304 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:54.304 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:54.304 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:54.304 | 99.99th=[41157] 00:10:54.304 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:10:54.304 slat (nsec): min=6746, max=48432, avg=12602.36, stdev=5499.15 00:10:54.304 clat (usec): min=143, max=702, avg=291.15, stdev=100.20 00:10:54.304 lat (usec): min=153, max=720, avg=303.76, stdev=102.26 00:10:54.304 clat percentiles (usec): 00:10:54.304 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 161], 20.00th=[ 174], 00:10:54.304 | 30.00th=[ 208], 40.00th=[ 262], 50.00th=[ 306], 60.00th=[ 334], 00:10:54.304 | 70.00th=[ 367], 80.00th=[ 383], 90.00th=[ 412], 95.00th=[ 437], 00:10:54.304 | 99.00th=[ 490], 99.50th=[ 553], 99.90th=[ 701], 99.95th=[ 701], 00:10:54.304 | 99.99th=[ 701] 00:10:54.304 bw ( KiB/s): min= 4096, max= 4096, per=19.61%, avg=4096.00, stdev= 0.00, samples=1 00:10:54.304 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:54.304 lat (usec) : 250=37.90%, 500=57.22%, 750=0.94% 00:10:54.304 lat (msec) : 50=3.94% 00:10:54.304 cpu : usr=0.29%, sys=0.59%, ctx=534, majf=0, minf=1 00:10:54.304 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:54.304 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.304 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.304 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:54.304 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:54.304 job1: (groupid=0, jobs=1): err= 0: pid=1124592: Tue Oct 8 18:21:22 2024 00:10:54.304 read: IOPS=1677, BW=6709KiB/s (6870kB/s)(6924KiB/1032msec) 00:10:54.304 slat (nsec): min=6861, max=29289, avg=8946.21, stdev=2485.93 00:10:54.304 clat (usec): min=168, max=41283, avg=306.28, stdev=1391.77 00:10:54.304 lat (usec): min=177, max=41296, avg=315.22, stdev=1391.97 00:10:54.304 clat percentiles (usec): 00:10:54.304 | 1.00th=[ 196], 5.00th=[ 204], 10.00th=[ 212], 20.00th=[ 221], 00:10:54.304 | 30.00th=[ 229], 40.00th=[ 237], 50.00th=[ 245], 60.00th=[ 251], 00:10:54.304 | 70.00th=[ 262], 80.00th=[ 285], 90.00th=[ 322], 95.00th=[ 375], 00:10:54.304 | 99.00th=[ 515], 99.50th=[ 537], 99.90th=[41157], 99.95th=[41157], 00:10:54.304 | 99.99th=[41157] 00:10:54.304 write: IOPS=1984, BW=7938KiB/s (8128kB/s)(8192KiB/1032msec); 0 zone resets 00:10:54.304 slat (nsec): min=8780, max=71081, avg=12002.01, stdev=4264.32 00:10:54.304 clat (usec): min=125, max=1955, avg=220.08, stdev=81.87 00:10:54.304 lat (usec): min=136, max=1966, avg=232.08, stdev=82.93 00:10:54.304 clat percentiles (usec): 00:10:54.304 | 1.00th=[ 139], 5.00th=[ 149], 10.00th=[ 159], 20.00th=[ 167], 00:10:54.304 | 30.00th=[ 178], 40.00th=[ 186], 50.00th=[ 194], 60.00th=[ 204], 00:10:54.304 | 70.00th=[ 221], 80.00th=[ 277], 90.00th=[ 338], 95.00th=[ 383], 00:10:54.304 | 99.00th=[ 424], 99.50th=[ 453], 99.90th=[ 766], 99.95th=[ 
848], 00:10:54.304 | 99.99th=[ 1958] 00:10:54.304 bw ( KiB/s): min= 8192, max= 8192, per=39.22%, avg=8192.00, stdev= 0.00, samples=2 00:10:54.304 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:10:54.304 lat (usec) : 250=68.30%, 500=31.01%, 750=0.56%, 1000=0.05% 00:10:54.304 lat (msec) : 2=0.03%, 50=0.05% 00:10:54.304 cpu : usr=2.23%, sys=5.63%, ctx=3780, majf=0, minf=1 00:10:54.304 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:54.304 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.304 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.305 issued rwts: total=1731,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:54.305 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:54.305 job2: (groupid=0, jobs=1): err= 0: pid=1124593: Tue Oct 8 18:21:22 2024 00:10:54.305 read: IOPS=1564, BW=6258KiB/s (6409kB/s)(6296KiB/1006msec) 00:10:54.305 slat (nsec): min=5265, max=35976, avg=9757.73, stdev=4734.18 00:10:54.305 clat (usec): min=183, max=41646, avg=356.93, stdev=2056.51 00:10:54.305 lat (usec): min=188, max=41662, avg=366.68, stdev=2056.75 00:10:54.305 clat percentiles (usec): 00:10:54.305 | 1.00th=[ 190], 5.00th=[ 196], 10.00th=[ 202], 20.00th=[ 210], 00:10:54.305 | 30.00th=[ 217], 40.00th=[ 225], 50.00th=[ 233], 60.00th=[ 243], 00:10:54.305 | 70.00th=[ 262], 80.00th=[ 285], 90.00th=[ 318], 95.00th=[ 412], 00:10:54.305 | 99.00th=[ 529], 99.50th=[ 545], 99.90th=[41157], 99.95th=[41681], 00:10:54.305 | 99.99th=[41681] 00:10:54.305 write: IOPS=2035, BW=8143KiB/s (8339kB/s)(8192KiB/1006msec); 0 zone resets 00:10:54.305 slat (nsec): min=6647, max=44650, avg=10179.68, stdev=4310.42 00:10:54.305 clat (usec): min=136, max=405, avg=194.19, stdev=41.77 00:10:54.305 lat (usec): min=144, max=423, avg=204.37, stdev=43.20 00:10:54.305 clat percentiles (usec): 00:10:54.305 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 163], 00:10:54.305 | 30.00th=[ 169], 40.00th=[ 176], 50.00th=[ 184], 60.00th=[ 192], 00:10:54.305 | 70.00th=[ 200], 80.00th=[ 217], 90.00th=[ 249], 95.00th=[ 293], 00:10:54.305 | 99.00th=[ 330], 99.50th=[ 351], 99.90th=[ 388], 99.95th=[ 396], 00:10:54.305 | 99.99th=[ 408] 00:10:54.305 bw ( KiB/s): min= 6184, max=10200, per=39.22%, avg=8192.00, stdev=2839.74, samples=2 00:10:54.305 iops : min= 1546, max= 2550, avg=2048.00, stdev=709.94, samples=2 00:10:54.305 lat (usec) : 250=79.68%, 500=19.33%, 750=0.88% 00:10:54.305 lat (msec) : 50=0.11% 00:10:54.305 cpu : usr=1.69%, sys=3.88%, ctx=3622, majf=0, minf=2 00:10:54.305 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:54.305 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.305 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.305 issued rwts: total=1574,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:54.305 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:54.305 job3: (groupid=0, jobs=1): err= 0: pid=1124594: Tue Oct 8 18:21:22 2024 00:10:54.305 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:54.305 slat (nsec): min=7576, max=48255, avg=12618.19, stdev=4283.71 00:10:54.305 clat (usec): min=203, max=41247, avg=1472.50, stdev=6778.90 00:10:54.305 lat (usec): min=211, max=41263, avg=1485.11, stdev=6779.41 00:10:54.305 clat percentiles (usec): 00:10:54.305 | 1.00th=[ 215], 5.00th=[ 225], 10.00th=[ 235], 20.00th=[ 245], 00:10:54.305 | 30.00th=[ 258], 40.00th=[ 281], 50.00th=[ 297], 60.00th=[ 314], 
00:10:54.305 | 70.00th=[ 326], 80.00th=[ 355], 90.00th=[ 375], 95.00th=[ 404], 00:10:54.305 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:54.305 | 99.99th=[41157] 00:10:54.305 write: IOPS=780, BW=3121KiB/s (3196kB/s)(3124KiB/1001msec); 0 zone resets 00:10:54.305 slat (nsec): min=10288, max=53177, avg=15540.41, stdev=5703.18 00:10:54.305 clat (usec): min=145, max=759, avg=284.65, stdev=105.73 00:10:54.305 lat (usec): min=157, max=773, avg=300.19, stdev=106.56 00:10:54.305 clat percentiles (usec): 00:10:54.305 | 1.00th=[ 149], 5.00th=[ 157], 10.00th=[ 167], 20.00th=[ 190], 00:10:54.305 | 30.00th=[ 219], 40.00th=[ 239], 50.00th=[ 253], 60.00th=[ 281], 00:10:54.305 | 70.00th=[ 322], 80.00th=[ 396], 90.00th=[ 441], 95.00th=[ 474], 00:10:54.305 | 99.00th=[ 562], 99.50th=[ 603], 99.90th=[ 758], 99.95th=[ 758], 00:10:54.305 | 99.99th=[ 758] 00:10:54.305 bw ( KiB/s): min= 4096, max= 4096, per=19.61%, avg=4096.00, stdev= 0.00, samples=1 00:10:54.305 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:54.305 lat (usec) : 250=39.37%, 500=57.54%, 750=1.86%, 1000=0.08% 00:10:54.305 lat (msec) : 50=1.16% 00:10:54.305 cpu : usr=0.70%, sys=2.10%, ctx=1294, majf=0, minf=1 00:10:54.305 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:54.305 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.305 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.305 issued rwts: total=512,781,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:54.305 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:54.305 00:10:54.305 Run status group 0 (all jobs): 00:10:54.305 READ: bw=14.5MiB/s (15.2MB/s), 82.5KiB/s-6709KiB/s (84.5kB/s-6870kB/s), io=15.0MiB (15.7MB), run=1001-1032msec 00:10:54.305 WRITE: bw=20.4MiB/s (21.4MB/s), 2012KiB/s-8143KiB/s (2060kB/s-8339kB/s), io=21.1MiB (22.1MB), run=1001-1032msec 00:10:54.305 00:10:54.305 Disk stats (read/write): 00:10:54.305 nvme0n1: ios=69/512, merge=0/0, ticks=1661/148, in_queue=1809, util=97.90% 00:10:54.305 nvme0n2: ios=1567/1862, merge=0/0, ticks=636/411, in_queue=1047, util=96.65% 00:10:54.305 nvme0n3: ios=1566/2048, merge=0/0, ticks=384/397, in_queue=781, util=88.91% 00:10:54.305 nvme0n4: ios=281/512, merge=0/0, ticks=849/166, in_queue=1015, util=96.52% 00:10:54.305 18:21:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:54.305 [global] 00:10:54.305 thread=1 00:10:54.305 invalidate=1 00:10:54.305 rw=write 00:10:54.305 time_based=1 00:10:54.305 runtime=1 00:10:54.305 ioengine=libaio 00:10:54.305 direct=1 00:10:54.305 bs=4096 00:10:54.305 iodepth=128 00:10:54.305 norandommap=0 00:10:54.305 numjobs=1 00:10:54.305 00:10:54.305 verify_dump=1 00:10:54.305 verify_backlog=512 00:10:54.305 verify_state_save=0 00:10:54.305 do_verify=1 00:10:54.305 verify=crc32c-intel 00:10:54.305 [job0] 00:10:54.305 filename=/dev/nvme0n1 00:10:54.305 [job1] 00:10:54.305 filename=/dev/nvme0n2 00:10:54.305 [job2] 00:10:54.305 filename=/dev/nvme0n3 00:10:54.305 [job3] 00:10:54.305 filename=/dev/nvme0n4 00:10:54.305 Could not set queue depth (nvme0n1) 00:10:54.305 Could not set queue depth (nvme0n2) 00:10:54.305 Could not set queue depth (nvme0n3) 00:10:54.305 Could not set queue depth (nvme0n4) 00:10:54.563 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:54.563 job1: (g=0): 
rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:54.563 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:54.563 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:54.563 fio-3.35 00:10:54.563 Starting 4 threads 00:10:55.973 00:10:55.973 job0: (groupid=0, jobs=1): err= 0: pid=1124944: Tue Oct 8 18:21:24 2024 00:10:55.973 read: IOPS=3410, BW=13.3MiB/s (14.0MB/s)(13.9MiB/1047msec) 00:10:55.973 slat (usec): min=3, max=11231, avg=130.23, stdev=754.00 00:10:55.973 clat (usec): min=8146, max=56010, avg=17678.86, stdev=8465.86 00:10:55.973 lat (usec): min=8160, max=61635, avg=17809.09, stdev=8513.78 00:10:55.973 clat percentiles (usec): 00:10:55.973 | 1.00th=[ 9110], 5.00th=[10945], 10.00th=[11338], 20.00th=[11731], 00:10:55.973 | 30.00th=[12125], 40.00th=[13829], 50.00th=[16319], 60.00th=[17171], 00:10:55.973 | 70.00th=[19006], 80.00th=[21103], 90.00th=[26084], 95.00th=[30016], 00:10:55.973 | 99.00th=[55837], 99.50th=[55837], 99.90th=[55837], 99.95th=[55837], 00:10:55.973 | 99.99th=[55837] 00:10:55.973 write: IOPS=3423, BW=13.4MiB/s (14.0MB/s)(14.0MiB/1047msec); 0 zone resets 00:10:55.973 slat (usec): min=4, max=6021, avg=143.11, stdev=552.93 00:10:55.973 clat (usec): min=4563, max=48020, avg=19285.96, stdev=9309.68 00:10:55.973 lat (usec): min=4570, max=48037, avg=19429.07, stdev=9366.16 00:10:55.973 clat percentiles (usec): 00:10:55.973 | 1.00th=[ 4686], 5.00th=[10945], 10.00th=[11207], 20.00th=[11600], 00:10:55.973 | 30.00th=[11863], 40.00th=[12518], 50.00th=[18220], 60.00th=[20055], 00:10:55.973 | 70.00th=[21890], 80.00th=[25035], 90.00th=[33162], 95.00th=[40633], 00:10:55.973 | 99.00th=[46924], 99.50th=[47973], 99.90th=[47973], 99.95th=[47973], 00:10:55.973 | 99.99th=[47973] 00:10:55.973 bw ( KiB/s): min=13360, max=15312, per=22.63%, avg=14336.00, stdev=1380.27, samples=2 00:10:55.973 iops : min= 3340, max= 3828, avg=3584.00, stdev=345.07, samples=2 00:10:55.973 lat (msec) : 10=3.26%, 20=65.34%, 50=30.54%, 100=0.87% 00:10:55.973 cpu : usr=2.96%, sys=4.68%, ctx=458, majf=0, minf=1 00:10:55.973 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:10:55.973 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:55.973 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:55.973 issued rwts: total=3571,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:55.973 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:55.973 job1: (groupid=0, jobs=1): err= 0: pid=1124945: Tue Oct 8 18:21:24 2024 00:10:55.973 read: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec) 00:10:55.973 slat (usec): min=2, max=8371, avg=126.79, stdev=748.03 00:10:55.973 clat (usec): min=7927, max=47696, avg=15436.61, stdev=3286.34 00:10:55.973 lat (usec): min=7935, max=54306, avg=15563.40, stdev=3353.94 00:10:55.973 clat percentiles (usec): 00:10:55.973 | 1.00th=[ 8848], 5.00th=[11207], 10.00th=[11994], 20.00th=[13042], 00:10:55.973 | 30.00th=[13566], 40.00th=[14222], 50.00th=[14746], 60.00th=[15401], 00:10:55.973 | 70.00th=[16712], 80.00th=[17695], 90.00th=[19792], 95.00th=[21890], 00:10:55.973 | 99.00th=[23987], 99.50th=[25297], 99.90th=[26346], 99.95th=[47449], 00:10:55.973 | 99.99th=[47449] 00:10:55.973 write: IOPS=3792, BW=14.8MiB/s (15.5MB/s)(14.9MiB/1006msec); 0 zone resets 00:10:55.973 slat (usec): min=4, max=8581, avg=133.99, stdev=487.79 00:10:55.973 clat (usec): 
min=5198, max=35667, avg=18747.75, stdev=6004.96 00:10:55.973 lat (usec): min=6342, max=35680, avg=18881.75, stdev=6041.12 00:10:55.973 clat percentiles (usec): 00:10:55.973 | 1.00th=[ 7111], 5.00th=[11076], 10.00th=[11994], 20.00th=[12518], 00:10:55.973 | 30.00th=[12911], 40.00th=[16057], 50.00th=[19792], 60.00th=[20841], 00:10:55.973 | 70.00th=[22414], 80.00th=[24773], 90.00th=[26084], 95.00th=[28967], 00:10:55.973 | 99.00th=[31851], 99.50th=[32375], 99.90th=[35914], 99.95th=[35914], 00:10:55.973 | 99.99th=[35914] 00:10:55.973 bw ( KiB/s): min=13120, max=16384, per=23.29%, avg=14752.00, stdev=2308.00, samples=2 00:10:55.973 iops : min= 3280, max= 4096, avg=3688.00, stdev=577.00, samples=2 00:10:55.973 lat (msec) : 10=3.07%, 20=68.79%, 50=28.14% 00:10:55.973 cpu : usr=4.08%, sys=5.97%, ctx=508, majf=0, minf=1 00:10:55.973 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:10:55.973 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:55.973 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:55.973 issued rwts: total=3584,3815,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:55.973 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:55.973 job2: (groupid=0, jobs=1): err= 0: pid=1124946: Tue Oct 8 18:21:24 2024 00:10:55.973 read: IOPS=4164, BW=16.3MiB/s (17.1MB/s)(16.4MiB/1011msec) 00:10:55.973 slat (usec): min=3, max=12601, avg=113.74, stdev=786.32 00:10:55.973 clat (usec): min=3869, max=27799, avg=14121.83, stdev=3869.88 00:10:55.973 lat (usec): min=3877, max=27821, avg=14235.57, stdev=3911.89 00:10:55.973 clat percentiles (usec): 00:10:55.973 | 1.00th=[ 5342], 5.00th=[ 8029], 10.00th=[11469], 20.00th=[11994], 00:10:55.973 | 30.00th=[12256], 40.00th=[12518], 50.00th=[12911], 60.00th=[13829], 00:10:55.973 | 70.00th=[14615], 80.00th=[17433], 90.00th=[19530], 95.00th=[22152], 00:10:55.973 | 99.00th=[25560], 99.50th=[26346], 99.90th=[27657], 99.95th=[27657], 00:10:55.973 | 99.99th=[27919] 00:10:55.973 write: IOPS=4557, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1011msec); 0 zone resets 00:10:55.973 slat (usec): min=4, max=11240, avg=99.61, stdev=491.51 00:10:55.973 clat (usec): min=1344, max=59824, avg=14940.63, stdev=9303.60 00:10:55.973 lat (usec): min=1387, max=59832, avg=15040.24, stdev=9348.09 00:10:55.973 clat percentiles (usec): 00:10:55.973 | 1.00th=[ 3720], 5.00th=[ 5735], 10.00th=[ 8717], 20.00th=[11469], 00:10:55.973 | 30.00th=[12518], 40.00th=[12911], 50.00th=[13042], 60.00th=[13173], 00:10:55.973 | 70.00th=[13435], 80.00th=[14222], 90.00th=[23200], 95.00th=[38536], 00:10:55.973 | 99.00th=[54789], 99.50th=[57410], 99.90th=[60031], 99.95th=[60031], 00:10:55.973 | 99.99th=[60031] 00:10:55.973 bw ( KiB/s): min=16384, max=19856, per=28.60%, avg=18120.00, stdev=2455.07, samples=2 00:10:55.973 iops : min= 4096, max= 4964, avg=4530.00, stdev=613.77, samples=2 00:10:55.973 lat (msec) : 2=0.10%, 4=0.92%, 10=10.22%, 20=78.46%, 50=9.25% 00:10:55.973 lat (msec) : 100=1.04% 00:10:55.973 cpu : usr=5.15%, sys=6.24%, ctx=573, majf=0, minf=1 00:10:55.973 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:55.973 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:55.973 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:55.973 issued rwts: total=4210,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:55.973 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:55.973 job3: (groupid=0, jobs=1): err= 0: pid=1124947: Tue Oct 8 18:21:24 2024 
00:10:55.973 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:10:55.973 slat (usec): min=2, max=21015, avg=115.36, stdev=764.56 00:10:55.973 clat (usec): min=7301, max=68358, avg=15724.75, stdev=7246.55 00:10:55.973 lat (usec): min=7306, max=68362, avg=15840.11, stdev=7296.17 00:10:55.973 clat percentiles (usec): 00:10:55.973 | 1.00th=[ 8094], 5.00th=[10159], 10.00th=[11207], 20.00th=[12256], 00:10:55.973 | 30.00th=[12780], 40.00th=[13304], 50.00th=[13829], 60.00th=[14222], 00:10:55.973 | 70.00th=[14615], 80.00th=[15270], 90.00th=[26084], 95.00th=[35390], 00:10:55.973 | 99.00th=[38536], 99.50th=[43779], 99.90th=[68682], 99.95th=[68682], 00:10:55.973 | 99.99th=[68682] 00:10:55.973 write: IOPS=4555, BW=17.8MiB/s (18.7MB/s)(17.9MiB/1004msec); 0 zone resets 00:10:55.973 slat (usec): min=3, max=15150, avg=109.14, stdev=665.24 00:10:55.973 clat (usec): min=3255, max=37838, avg=13064.58, stdev=2841.37 00:10:55.973 lat (usec): min=3260, max=37844, avg=13173.72, stdev=2866.09 00:10:55.973 clat percentiles (usec): 00:10:55.973 | 1.00th=[ 6980], 5.00th=[ 8979], 10.00th=[10552], 20.00th=[12125], 00:10:55.973 | 30.00th=[12518], 40.00th=[12780], 50.00th=[12911], 60.00th=[13304], 00:10:55.973 | 70.00th=[13435], 80.00th=[14222], 90.00th=[14746], 95.00th=[15270], 00:10:55.973 | 99.00th=[26084], 99.50th=[30278], 99.90th=[37487], 99.95th=[38011], 00:10:55.973 | 99.99th=[38011] 00:10:55.973 bw ( KiB/s): min=16408, max=19168, per=28.08%, avg=17788.00, stdev=1951.61, samples=2 00:10:55.973 iops : min= 4102, max= 4792, avg=4447.00, stdev=487.90, samples=2 00:10:55.973 lat (msec) : 4=0.42%, 10=5.39%, 20=87.10%, 50=6.97%, 100=0.13% 00:10:55.973 cpu : usr=2.39%, sys=4.69%, ctx=309, majf=0, minf=1 00:10:55.973 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:55.973 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:55.973 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:55.973 issued rwts: total=4096,4574,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:55.973 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:55.973 00:10:55.973 Run status group 0 (all jobs): 00:10:55.973 READ: bw=57.7MiB/s (60.5MB/s), 13.3MiB/s-16.3MiB/s (14.0MB/s-17.1MB/s), io=60.4MiB (63.3MB), run=1004-1047msec 00:10:55.973 WRITE: bw=61.9MiB/s (64.9MB/s), 13.4MiB/s-17.8MiB/s (14.0MB/s-18.7MB/s), io=64.8MiB (67.9MB), run=1004-1047msec 00:10:55.973 00:10:55.973 Disk stats (read/write): 00:10:55.973 nvme0n1: ios=2586/2911, merge=0/0, ticks=15952/19589, in_queue=35541, util=96.49% 00:10:55.973 nvme0n2: ios=3108/3072, merge=0/0, ticks=22959/28591, in_queue=51550, util=99.80% 00:10:55.973 nvme0n3: ios=3640/3903, merge=0/0, ticks=49142/55114, in_queue=104256, util=90.17% 00:10:55.973 nvme0n4: ios=3662/4096, merge=0/0, ticks=18480/16712, in_queue=35192, util=97.26% 00:10:55.973 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:55.973 [global] 00:10:55.973 thread=1 00:10:55.973 invalidate=1 00:10:55.973 rw=randwrite 00:10:55.973 time_based=1 00:10:55.973 runtime=1 00:10:55.973 ioengine=libaio 00:10:55.973 direct=1 00:10:55.973 bs=4096 00:10:55.973 iodepth=128 00:10:55.973 norandommap=0 00:10:55.973 numjobs=1 00:10:55.973 00:10:55.973 verify_dump=1 00:10:55.973 verify_backlog=512 00:10:55.973 verify_state_save=0 00:10:55.973 do_verify=1 00:10:55.973 verify=crc32c-intel 00:10:55.973 [job0] 00:10:55.973 
filename=/dev/nvme0n1 00:10:55.973 [job1] 00:10:55.973 filename=/dev/nvme0n2 00:10:55.973 [job2] 00:10:55.973 filename=/dev/nvme0n3 00:10:55.973 [job3] 00:10:55.973 filename=/dev/nvme0n4 00:10:55.973 Could not set queue depth (nvme0n1) 00:10:55.973 Could not set queue depth (nvme0n2) 00:10:55.973 Could not set queue depth (nvme0n3) 00:10:55.973 Could not set queue depth (nvme0n4) 00:10:55.973 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:55.974 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:55.974 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:55.974 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:55.974 fio-3.35 00:10:55.974 Starting 4 threads 00:10:57.348 00:10:57.348 job0: (groupid=0, jobs=1): err= 0: pid=1125176: Tue Oct 8 18:21:25 2024 00:10:57.348 read: IOPS=2518, BW=9.84MiB/s (10.3MB/s)(10.2MiB/1042msec) 00:10:57.348 slat (usec): min=2, max=20411, avg=153.65, stdev=1072.48 00:10:57.348 clat (usec): min=7287, max=50907, avg=19011.46, stdev=8359.21 00:10:57.348 lat (usec): min=7291, max=50912, avg=19165.11, stdev=8422.98 00:10:57.348 clat percentiles (usec): 00:10:57.348 | 1.00th=[ 7308], 5.00th=[10159], 10.00th=[11469], 20.00th=[12125], 00:10:57.348 | 30.00th=[13960], 40.00th=[14484], 50.00th=[16188], 60.00th=[18220], 00:10:57.348 | 70.00th=[21890], 80.00th=[25035], 90.00th=[29754], 95.00th=[34341], 00:10:57.348 | 99.00th=[50594], 99.50th=[50594], 99.90th=[51119], 99.95th=[51119], 00:10:57.348 | 99.99th=[51119] 00:10:57.348 write: IOPS=2948, BW=11.5MiB/s (12.1MB/s)(12.0MiB/1042msec); 0 zone resets 00:10:57.348 slat (usec): min=3, max=16277, avg=188.55, stdev=924.42 00:10:57.348 clat (usec): min=7071, max=83918, avg=26659.26, stdev=12316.93 00:10:57.348 lat (usec): min=7087, max=83925, avg=26847.81, stdev=12389.88 00:10:57.348 clat percentiles (usec): 00:10:57.348 | 1.00th=[10945], 5.00th=[12518], 10.00th=[15795], 20.00th=[19530], 00:10:57.348 | 30.00th=[20055], 40.00th=[21890], 50.00th=[23200], 60.00th=[24511], 00:10:57.348 | 70.00th=[28181], 80.00th=[31327], 90.00th=[39584], 95.00th=[55837], 00:10:57.348 | 99.00th=[73925], 99.50th=[80217], 99.90th=[84411], 99.95th=[84411], 00:10:57.348 | 99.99th=[84411] 00:10:57.348 bw ( KiB/s): min= 9144, max=14928, per=21.73%, avg=12036.00, stdev=4089.91, samples=2 00:10:57.348 iops : min= 2286, max= 3732, avg=3009.00, stdev=1022.48, samples=2 00:10:57.348 lat (msec) : 10=1.88%, 20=41.77%, 50=51.33%, 100=5.02% 00:10:57.348 cpu : usr=1.63%, sys=3.46%, ctx=355, majf=0, minf=1 00:10:57.348 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:10:57.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.348 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:57.348 issued rwts: total=2624,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:57.348 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:57.348 job1: (groupid=0, jobs=1): err= 0: pid=1125177: Tue Oct 8 18:21:25 2024 00:10:57.348 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:10:57.348 slat (usec): min=2, max=23951, avg=101.99, stdev=782.73 00:10:57.348 clat (usec): min=4347, max=49184, avg=13393.34, stdev=5872.22 00:10:57.348 lat (usec): min=4356, max=49191, avg=13495.33, stdev=5939.32 00:10:57.348 clat percentiles 
(usec): 00:10:57.348 | 1.00th=[ 4424], 5.00th=[ 7832], 10.00th=[ 8717], 20.00th=[10028], 00:10:57.348 | 30.00th=[10290], 40.00th=[10683], 50.00th=[11469], 60.00th=[12387], 00:10:57.348 | 70.00th=[13960], 80.00th=[16057], 90.00th=[20841], 95.00th=[24773], 00:10:57.348 | 99.00th=[30278], 99.50th=[49021], 99.90th=[49021], 99.95th=[49021], 00:10:57.348 | 99.99th=[49021] 00:10:57.348 write: IOPS=4700, BW=18.4MiB/s (19.3MB/s)(18.4MiB/1003msec); 0 zone resets 00:10:57.348 slat (usec): min=3, max=12767, avg=91.82, stdev=615.47 00:10:57.348 clat (usec): min=241, max=62934, avg=13862.91, stdev=10398.37 00:10:57.348 lat (usec): min=506, max=62943, avg=13954.73, stdev=10453.77 00:10:57.348 clat percentiles (usec): 00:10:57.348 | 1.00th=[ 996], 5.00th=[ 3294], 10.00th=[ 5735], 20.00th=[ 8717], 00:10:57.348 | 30.00th=[ 9503], 40.00th=[10159], 50.00th=[10814], 60.00th=[11076], 00:10:57.348 | 70.00th=[11863], 80.00th=[14091], 90.00th=[30016], 95.00th=[35914], 00:10:57.348 | 99.00th=[55837], 99.50th=[62653], 99.90th=[63177], 99.95th=[63177], 00:10:57.348 | 99.99th=[63177] 00:10:57.348 bw ( KiB/s): min=16432, max=20480, per=33.32%, avg=18456.00, stdev=2862.37, samples=2 00:10:57.348 iops : min= 4108, max= 5120, avg=4614.00, stdev=715.59, samples=2 00:10:57.348 lat (usec) : 250=0.01%, 500=0.02%, 750=0.16%, 1000=0.33% 00:10:57.348 lat (msec) : 2=0.61%, 4=1.93%, 10=25.26%, 20=56.96%, 50=14.06% 00:10:57.348 lat (msec) : 100=0.65% 00:10:57.348 cpu : usr=3.19%, sys=6.69%, ctx=429, majf=0, minf=1 00:10:57.348 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:57.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.348 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:57.348 issued rwts: total=4608,4715,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:57.348 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:57.348 job2: (groupid=0, jobs=1): err= 0: pid=1125178: Tue Oct 8 18:21:25 2024 00:10:57.348 read: IOPS=3276, BW=12.8MiB/s (13.4MB/s)(13.3MiB/1043msec) 00:10:57.348 slat (usec): min=2, max=18197, avg=155.97, stdev=929.43 00:10:57.348 clat (usec): min=6452, max=71994, avg=20888.76, stdev=14179.68 00:10:57.348 lat (usec): min=6456, max=72002, avg=21044.73, stdev=14229.47 00:10:57.348 clat percentiles (usec): 00:10:57.348 | 1.00th=[ 7242], 5.00th=[ 9241], 10.00th=[10159], 20.00th=[10945], 00:10:57.348 | 30.00th=[11994], 40.00th=[12387], 50.00th=[15926], 60.00th=[20055], 00:10:57.348 | 70.00th=[22676], 80.00th=[24249], 90.00th=[39584], 95.00th=[58459], 00:10:57.348 | 99.00th=[68682], 99.50th=[71828], 99.90th=[71828], 99.95th=[71828], 00:10:57.348 | 99.99th=[71828] 00:10:57.348 write: IOPS=3436, BW=13.4MiB/s (14.1MB/s)(14.0MiB/1043msec); 0 zone resets 00:10:57.348 slat (usec): min=4, max=7933, avg=123.76, stdev=675.89 00:10:57.348 clat (usec): min=6272, max=41331, avg=16877.45, stdev=7194.45 00:10:57.348 lat (usec): min=6279, max=41350, avg=17001.21, stdev=7224.49 00:10:57.348 clat percentiles (usec): 00:10:57.348 | 1.00th=[ 8094], 5.00th=[10552], 10.00th=[10945], 20.00th=[11731], 00:10:57.348 | 30.00th=[12125], 40.00th=[12387], 50.00th=[13042], 60.00th=[15139], 00:10:57.348 | 70.00th=[19006], 80.00th=[22676], 90.00th=[28967], 95.00th=[32113], 00:10:57.348 | 99.00th=[37487], 99.50th=[38011], 99.90th=[41157], 99.95th=[41157], 00:10:57.348 | 99.99th=[41157] 00:10:57.348 bw ( KiB/s): min=12288, max=16384, per=25.88%, avg=14336.00, stdev=2896.31, samples=2 00:10:57.348 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, 
samples=2 00:10:57.348 lat (msec) : 10=5.06%, 20=62.19%, 50=29.60%, 100=3.16% 00:10:57.348 cpu : usr=2.30%, sys=5.09%, ctx=376, majf=0, minf=1 00:10:57.348 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:10:57.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.348 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:57.348 issued rwts: total=3417,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:57.348 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:57.348 job3: (groupid=0, jobs=1): err= 0: pid=1125180: Tue Oct 8 18:21:25 2024 00:10:57.348 read: IOPS=3039, BW=11.9MiB/s (12.5MB/s)(11.9MiB/1004msec) 00:10:57.348 slat (usec): min=2, max=21505, avg=157.01, stdev=1077.39 00:10:57.348 clat (usec): min=1300, max=57154, avg=19718.62, stdev=10390.69 00:10:57.348 lat (usec): min=4358, max=58791, avg=19875.63, stdev=10484.14 00:10:57.348 clat percentiles (usec): 00:10:57.348 | 1.00th=[ 4686], 5.00th=[10814], 10.00th=[11600], 20.00th=[12649], 00:10:57.348 | 30.00th=[13173], 40.00th=[13698], 50.00th=[14484], 60.00th=[17433], 00:10:57.348 | 70.00th=[21103], 80.00th=[26870], 90.00th=[38536], 95.00th=[42730], 00:10:57.348 | 99.00th=[48497], 99.50th=[49546], 99.90th=[56886], 99.95th=[56886], 00:10:57.348 | 99.99th=[57410] 00:10:57.348 write: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec); 0 zone resets 00:10:57.348 slat (usec): min=4, max=18629, avg=163.65, stdev=1043.41 00:10:57.348 clat (usec): min=6713, max=79025, avg=21827.79, stdev=12904.36 00:10:57.348 lat (usec): min=7470, max=79032, avg=21991.44, stdev=12984.04 00:10:57.348 clat percentiles (usec): 00:10:57.348 | 1.00th=[ 8717], 5.00th=[11207], 10.00th=[11600], 20.00th=[12649], 00:10:57.348 | 30.00th=[13042], 40.00th=[13829], 50.00th=[16712], 60.00th=[21103], 00:10:57.348 | 70.00th=[24773], 80.00th=[28443], 90.00th=[36963], 95.00th=[54264], 00:10:57.348 | 99.00th=[65274], 99.50th=[73925], 99.90th=[79168], 99.95th=[79168], 00:10:57.348 | 99.99th=[79168] 00:10:57.348 bw ( KiB/s): min= 8192, max=16384, per=22.18%, avg=12288.00, stdev=5792.62, samples=2 00:10:57.348 iops : min= 2048, max= 4096, avg=3072.00, stdev=1448.15, samples=2 00:10:57.348 lat (msec) : 2=0.02%, 10=2.45%, 20=60.86%, 50=33.38%, 100=3.30% 00:10:57.348 cpu : usr=2.29%, sys=3.99%, ctx=223, majf=0, minf=1 00:10:57.348 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:10:57.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.349 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:57.349 issued rwts: total=3052,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:57.349 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:57.349 00:10:57.349 Run status group 0 (all jobs): 00:10:57.349 READ: bw=51.3MiB/s (53.8MB/s), 9.84MiB/s-17.9MiB/s (10.3MB/s-18.8MB/s), io=53.5MiB (56.1MB), run=1003-1043msec 00:10:57.349 WRITE: bw=54.1MiB/s (56.7MB/s), 11.5MiB/s-18.4MiB/s (12.1MB/s-19.3MB/s), io=56.4MiB (59.2MB), run=1003-1043msec 00:10:57.349 00:10:57.349 Disk stats (read/write): 00:10:57.349 nvme0n1: ios=2666/3072, merge=0/0, ticks=24591/31823, in_queue=56414, util=96.87% 00:10:57.349 nvme0n2: ios=3490/3584, merge=0/0, ticks=34815/32812, in_queue=67627, util=96.81% 00:10:57.349 nvme0n3: ios=3473/3584, merge=0/0, ticks=22230/19807, in_queue=42037, util=99.91% 00:10:57.349 nvme0n4: ios=2069/2048, merge=0/0, ticks=19286/22871, in_queue=42157, util=100.00% 00:10:57.349 18:21:25 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:57.349 18:21:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1125323 00:10:57.349 18:21:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:57.349 18:21:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:57.349 [global] 00:10:57.349 thread=1 00:10:57.349 invalidate=1 00:10:57.349 rw=read 00:10:57.349 time_based=1 00:10:57.349 runtime=10 00:10:57.349 ioengine=libaio 00:10:57.349 direct=1 00:10:57.349 bs=4096 00:10:57.349 iodepth=1 00:10:57.349 norandommap=1 00:10:57.349 numjobs=1 00:10:57.349 00:10:57.349 [job0] 00:10:57.349 filename=/dev/nvme0n1 00:10:57.349 [job1] 00:10:57.349 filename=/dev/nvme0n2 00:10:57.349 [job2] 00:10:57.349 filename=/dev/nvme0n3 00:10:57.349 [job3] 00:10:57.349 filename=/dev/nvme0n4 00:10:57.349 Could not set queue depth (nvme0n1) 00:10:57.349 Could not set queue depth (nvme0n2) 00:10:57.349 Could not set queue depth (nvme0n3) 00:10:57.349 Could not set queue depth (nvme0n4) 00:10:57.606 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:57.606 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:57.606 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:57.606 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:57.606 fio-3.35 00:10:57.606 Starting 4 threads 00:11:00.886 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:00.886 18:21:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:00.886 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=21024768, buflen=4096 00:11:00.886 fio: pid=1125414, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:01.144 18:21:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:01.144 18:21:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:01.144 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=13004800, buflen=4096 00:11:01.144 fio: pid=1125413, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:01.710 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=671744, buflen=4096 00:11:01.710 fio: pid=1125411, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:01.710 18:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:01.710 18:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:01.968 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=4591616, buflen=4096 00:11:01.968 fio: pid=1125412, err=5/file:io_u.c:1889, func=io_u 
error, error=Input/output error 00:11:01.968 18:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:01.968 18:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:01.968 00:11:01.968 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1125411: Tue Oct 8 18:21:30 2024 00:11:01.968 read: IOPS=42, BW=170KiB/s (174kB/s)(656KiB/3853msec) 00:11:01.968 slat (usec): min=7, max=14879, avg=109.08, stdev=1156.94 00:11:01.968 clat (usec): min=252, max=44951, avg=23229.99, stdev=20253.28 00:11:01.968 lat (usec): min=262, max=56006, avg=23339.52, stdev=20369.50 00:11:01.968 clat percentiles (usec): 00:11:01.968 | 1.00th=[ 255], 5.00th=[ 265], 10.00th=[ 273], 20.00th=[ 420], 00:11:01.968 | 30.00th=[ 498], 40.00th=[ 529], 50.00th=[41157], 60.00th=[41157], 00:11:01.968 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:01.968 | 99.00th=[42730], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:11:01.968 | 99.99th=[44827] 00:11:01.968 bw ( KiB/s): min= 96, max= 432, per=1.85%, avg=170.14, stdev=132.56, samples=7 00:11:01.968 iops : min= 24, max= 108, avg=42.43, stdev=33.21, samples=7 00:11:01.968 lat (usec) : 500=30.30%, 750=13.33% 00:11:01.968 lat (msec) : 50=55.76% 00:11:01.968 cpu : usr=0.10%, sys=0.05%, ctx=170, majf=0, minf=1 00:11:01.968 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:01.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.968 complete : 0=0.6%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.968 issued rwts: total=165,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:01.968 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:01.968 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=1125412: Tue Oct 8 18:21:30 2024 00:11:01.968 read: IOPS=267, BW=1071KiB/s (1097kB/s)(4484KiB/4187msec) 00:11:01.968 slat (usec): min=6, max=7325, avg=19.20, stdev=246.48 00:11:01.968 clat (usec): min=180, max=41502, avg=3713.31, stdev=11272.86 00:11:01.968 lat (usec): min=188, max=45015, avg=3726.00, stdev=11287.33 00:11:01.968 clat percentiles (usec): 00:11:01.968 | 1.00th=[ 255], 5.00th=[ 269], 10.00th=[ 273], 20.00th=[ 277], 00:11:01.968 | 30.00th=[ 285], 40.00th=[ 289], 50.00th=[ 297], 60.00th=[ 302], 00:11:01.968 | 70.00th=[ 306], 80.00th=[ 318], 90.00th=[ 453], 95.00th=[41157], 00:11:01.968 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:11:01.968 | 99.99th=[41681] 00:11:01.968 bw ( KiB/s): min= 96, max= 8024, per=12.18%, avg=1116.00, stdev=2791.62, samples=8 00:11:01.968 iops : min= 24, max= 2006, avg=279.00, stdev=697.90, samples=8 00:11:01.968 lat (usec) : 250=0.89%, 500=90.02%, 750=0.53% 00:11:01.968 lat (msec) : 10=0.09%, 50=8.38% 00:11:01.968 cpu : usr=0.22%, sys=0.48%, ctx=1126, majf=0, minf=2 00:11:01.968 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:01.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.968 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.968 issued rwts: total=1122,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:01.968 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:01.968 job2: (groupid=0, jobs=1): err=95 
(file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1125413: Tue Oct 8 18:21:30 2024 00:11:01.968 read: IOPS=958, BW=3832KiB/s (3924kB/s)(12.4MiB/3314msec) 00:11:01.968 slat (usec): min=7, max=11659, avg=14.83, stdev=240.98 00:11:01.968 clat (usec): min=190, max=41244, avg=1017.38, stdev=5449.09 00:11:01.968 lat (usec): min=198, max=41260, avg=1032.21, stdev=5455.44 00:11:01.968 clat percentiles (usec): 00:11:01.968 | 1.00th=[ 202], 5.00th=[ 217], 10.00th=[ 229], 20.00th=[ 245], 00:11:01.968 | 30.00th=[ 255], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 281], 00:11:01.968 | 70.00th=[ 285], 80.00th=[ 297], 90.00th=[ 314], 95.00th=[ 375], 00:11:01.968 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:01.968 | 99.99th=[41157] 00:11:01.968 bw ( KiB/s): min= 96, max=13437, per=46.04%, avg=4219.50, stdev=5567.18, samples=6 00:11:01.968 iops : min= 24, max= 3359, avg=1054.83, stdev=1391.71, samples=6 00:11:01.968 lat (usec) : 250=24.94%, 500=72.48%, 750=0.72% 00:11:01.968 lat (msec) : 50=1.83% 00:11:01.968 cpu : usr=0.24%, sys=1.06%, ctx=3179, majf=0, minf=2 00:11:01.968 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:01.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.968 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.968 issued rwts: total=3176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:01.968 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:01.968 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1125414: Tue Oct 8 18:21:30 2024 00:11:01.968 read: IOPS=1752, BW=7010KiB/s (7178kB/s)(20.1MiB/2929msec) 00:11:01.968 slat (nsec): min=4932, max=63545, avg=9104.07, stdev=4568.39 00:11:01.968 clat (usec): min=183, max=42000, avg=555.01, stdev=3583.53 00:11:01.968 lat (usec): min=189, max=42016, avg=564.11, stdev=3584.32 00:11:01.968 clat percentiles (usec): 00:11:01.968 | 1.00th=[ 188], 5.00th=[ 192], 10.00th=[ 196], 20.00th=[ 204], 00:11:01.968 | 30.00th=[ 210], 40.00th=[ 215], 50.00th=[ 221], 60.00th=[ 231], 00:11:01.968 | 70.00th=[ 241], 80.00th=[ 273], 90.00th=[ 306], 95.00th=[ 322], 00:11:01.968 | 99.00th=[ 510], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:01.968 | 99.99th=[42206] 00:11:01.968 bw ( KiB/s): min= 96, max=16632, per=65.34%, avg=5988.60, stdev=8122.26, samples=5 00:11:01.968 iops : min= 24, max= 4158, avg=1497.00, stdev=2030.70, samples=5 00:11:01.968 lat (usec) : 250=72.30%, 500=26.61%, 750=0.29% 00:11:01.968 lat (msec) : 50=0.78% 00:11:01.968 cpu : usr=0.41%, sys=1.98%, ctx=5135, majf=0, minf=2 00:11:01.968 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:01.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.968 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.968 issued rwts: total=5134,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:01.968 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:01.968 00:11:01.968 Run status group 0 (all jobs): 00:11:01.968 READ: bw=9165KiB/s (9385kB/s), 170KiB/s-7010KiB/s (174kB/s-7178kB/s), io=37.5MiB (39.3MB), run=2929-4187msec 00:11:01.968 00:11:01.968 Disk stats (read/write): 00:11:01.968 nvme0n1: ios=204/0, merge=0/0, ticks=4852/0, in_queue=4852, util=98.92% 00:11:01.968 nvme0n2: ios=1154/0, merge=0/0, ticks=4873/0, in_queue=4873, util=99.27% 00:11:01.968 nvme0n3: ios=3215/0, merge=0/0, ticks=3658/0, in_queue=3658, 
util=98.50% 00:11:01.968 nvme0n4: ios=5165/0, merge=0/0, ticks=3420/0, in_queue=3420, util=99.08% 00:11:02.227 18:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:02.227 18:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:02.791 18:21:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:02.791 18:21:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:03.359 18:21:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:03.359 18:21:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:03.927 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:03.927 18:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:04.865 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:04.865 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1125323 00:11:04.865 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:04.865 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:04.865 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:04.865 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:04.865 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:11:04.865 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:04.865 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:04.865 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:04.865 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:04.865 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:11:04.865 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:04.865 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:04.865 nvmf hotplug test: fio failed as expected 00:11:04.865 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:05.435 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:05.435 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:05.435 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:05.435 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:05.435 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:05.435 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:05.435 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:05.435 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:05.435 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:11:05.435 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:05.435 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:05.435 rmmod nvme_tcp 00:11:05.435 rmmod nvme_fabrics 00:11:05.435 rmmod nvme_keyring 00:11:05.435 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:05.435 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:05.435 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:11:05.435 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 1122838 ']' 00:11:05.435 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 1122838 00:11:05.435 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 1122838 ']' 00:11:05.435 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 1122838 00:11:05.435 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:11:05.435 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:05.435 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1122838 00:11:05.435 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:05.435 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:05.435 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1122838' 00:11:05.435 killing process with pid 1122838 00:11:05.435 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 1122838 00:11:05.435 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 1122838 00:11:06.005 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:06.005 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:06.006 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:06.006 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:11:06.006 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-save 00:11:06.006 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # 
grep -v SPDK_NVMF 00:11:06.006 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore 00:11:06.006 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:06.006 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:06.006 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:06.006 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:06.006 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:07.917 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:07.917 00:11:07.917 real 0m30.819s 00:11:07.917 user 1m51.122s 00:11:07.917 sys 0m7.899s 00:11:07.917 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:07.917 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.917 ************************************ 00:11:07.917 END TEST nvmf_fio_target 00:11:07.917 ************************************ 00:11:07.917 18:21:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:07.917 18:21:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:07.917 18:21:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:07.917 18:21:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:07.917 ************************************ 00:11:07.917 START TEST nvmf_bdevio 00:11:07.917 ************************************ 00:11:07.917 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:08.176 * Looking for test storage... 
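Before the bdevio stage gets going, note that the nvmf_fio_target teardown traced above boils down to a short command sequence. A minimal sketch, assembled from the commands in the trace rather than copied from any one script, and assuming rpc.py can still reach the target on its default /var/tmp/spdk.sock:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # default RPC socket assumed
    for malloc_bdev in Malloc3 Malloc4 Malloc5 Malloc6; do
        $RPC bdev_malloc_delete "$malloc_bdev"          # drop the extra backing bdevs
    done
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1       # initiator side: drop the TCP association
    $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    rm -f ./local-job0-0-verify.state ./local-job1-1-verify.state ./local-job2-2-verify.state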
00:11:08.176 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:08.176 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:08.176 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:11:08.176 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:08.176 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:08.176 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:08.176 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:08.176 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:08.176 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:08.176 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:08.176 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:08.176 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:08.176 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:08.176 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:08.176 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:08.176 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:08.176 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:08.176 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:08.176 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:08.176 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:08.176 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:08.176 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:08.176 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:08.176 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:08.176 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:08.176 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:08.176 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:08.176 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:08.176 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:08.176 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:08.176 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:08.176 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:08.176 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:08.176 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:08.176 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:08.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.176 --rc genhtml_branch_coverage=1 00:11:08.176 --rc genhtml_function_coverage=1 00:11:08.176 --rc genhtml_legend=1 00:11:08.176 --rc geninfo_all_blocks=1 00:11:08.176 --rc geninfo_unexecuted_blocks=1 00:11:08.176 00:11:08.176 ' 00:11:08.176 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:08.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.176 --rc genhtml_branch_coverage=1 00:11:08.176 --rc genhtml_function_coverage=1 00:11:08.176 --rc genhtml_legend=1 00:11:08.176 --rc geninfo_all_blocks=1 00:11:08.176 --rc geninfo_unexecuted_blocks=1 00:11:08.176 00:11:08.176 ' 00:11:08.177 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:08.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.177 --rc genhtml_branch_coverage=1 00:11:08.177 --rc genhtml_function_coverage=1 00:11:08.177 --rc genhtml_legend=1 00:11:08.177 --rc geninfo_all_blocks=1 00:11:08.177 --rc geninfo_unexecuted_blocks=1 00:11:08.177 00:11:08.177 ' 00:11:08.177 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:08.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.177 --rc genhtml_branch_coverage=1 00:11:08.177 --rc genhtml_function_coverage=1 00:11:08.177 --rc genhtml_legend=1 00:11:08.177 --rc geninfo_all_blocks=1 00:11:08.177 --rc geninfo_unexecuted_blocks=1 00:11:08.177 00:11:08.177 ' 00:11:08.177 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:08.177 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:08.177 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:08.177 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:08.177 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:08.177 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:08.177 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:08.177 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:08.177 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:08.177 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:08.177 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:08.177 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:08.177 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:08.177 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:11:08.177 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:08.177 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:08.177 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:08.177 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:08.177 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:08.177 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:08.177 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:08.177 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:08.177 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:08.177 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.177 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.177 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.177 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:08.177 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.177 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:08.177 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:08.177 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:08.177 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:08.177 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:08.177 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:08.177 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:08.177 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:08.177 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:08.177 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:08.177 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:08.177 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:08.177 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:08.177 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:11:08.177 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:08.177 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:08.177 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:08.177 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:08.177 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:08.177 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:08.177 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:08.177 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:08.177 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:08.177 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:08.177 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:11:08.177 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:11.468 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:11.468 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:11:11.468 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:11.468 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:11.468 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:11.468 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:11.468 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:11.468 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:11:11.468 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:11.468 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:11:11.468 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:11:11.468 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:11:11.468 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:11:11.468 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:11:11.468 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:11:11.469 Found 0000:84:00.0 (0x8086 - 0x159b) 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:11:11.469 Found 0000:84:00.1 (0x8086 - 0x159b) 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:11.469 18:21:39 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:11:11.469 Found net devices under 0000:84:00.0: cvl_0_0 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:11:11.469 Found net devices under 0000:84:00.1: cvl_0_1 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:11.469 
18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:11.469 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:11.469 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:11:11.469 00:11:11.469 --- 10.0.0.2 ping statistics --- 00:11:11.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:11.469 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:11.469 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:11.469 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.057 ms 00:11:11.469 00:11:11.469 --- 10.0.0.1 ping statistics --- 00:11:11.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:11.469 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=1128442 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 1128442 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 1128442 ']' 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:11.469 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:11.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:11.470 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:11.470 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:11.470 [2024-10-08 18:21:39.622734] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
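The two pings above are the sanity check for the namespace topology that nvmf_tcp_init just built: the target port lives in its own network namespace while the initiator port stays in the default one. Collapsed into a standalone sketch, using the same commands the trace shows and assuming the two e810 ports already carry the cvl_0_0/cvl_0_1 names:

    ip netns add cvl_0_0_ns_spdk                        # target-side namespace (assumes cvl_0_0/cvl_0_1 netdevs already exist)
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator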
00:11:11.470 [2024-10-08 18:21:39.622827] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:11.470 [2024-10-08 18:21:39.733414] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:11.470 [2024-10-08 18:21:39.863883] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:11.470 [2024-10-08 18:21:39.863945] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:11.470 [2024-10-08 18:21:39.863963] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:11.470 [2024-10-08 18:21:39.863977] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:11.470 [2024-10-08 18:21:39.863989] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:11.470 [2024-10-08 18:21:39.865952] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:11:11.470 [2024-10-08 18:21:39.866008] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:11:11.470 [2024-10-08 18:21:39.866033] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:11:11.470 [2024-10-08 18:21:39.866038] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:11:11.729 18:21:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:11.729 18:21:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:11:11.730 18:21:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:11.730 18:21:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:11.730 18:21:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:11.730 18:21:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:11.730 18:21:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:11.730 18:21:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.730 18:21:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:11.730 [2024-10-08 18:21:40.049796] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:11.730 18:21:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.730 18:21:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:11.730 18:21:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.730 18:21:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:11.730 Malloc0 00:11:11.730 18:21:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.730 18:21:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:11.730 18:21:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.730 18:21:40 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:11.730 18:21:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.730 18:21:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:11.730 18:21:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.730 18:21:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:11.730 18:21:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.730 18:21:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:11.730 18:21:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.730 18:21:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:11.730 [2024-10-08 18:21:40.104945] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:11.730 18:21:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.730 18:21:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:11.730 18:21:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:11.730 18:21:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:11:11.730 18:21:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:11:11.730 18:21:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:11:11.730 18:21:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:11:11.730 { 00:11:11.730 "params": { 00:11:11.730 "name": "Nvme$subsystem", 00:11:11.730 "trtype": "$TEST_TRANSPORT", 00:11:11.730 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:11.730 "adrfam": "ipv4", 00:11:11.730 "trsvcid": "$NVMF_PORT", 00:11:11.730 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:11.730 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:11.730 "hdgst": ${hdgst:-false}, 00:11:11.730 "ddgst": ${ddgst:-false} 00:11:11.730 }, 00:11:11.730 "method": "bdev_nvme_attach_controller" 00:11:11.730 } 00:11:11.730 EOF 00:11:11.730 )") 00:11:11.730 18:21:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:11:11.730 18:21:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 00:11:11.730 18:21:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:11:11.730 18:21:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:11:11.730 "params": { 00:11:11.730 "name": "Nvme1", 00:11:11.730 "trtype": "tcp", 00:11:11.730 "traddr": "10.0.0.2", 00:11:11.730 "adrfam": "ipv4", 00:11:11.730 "trsvcid": "4420", 00:11:11.730 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:11.730 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:11.730 "hdgst": false, 00:11:11.730 "ddgst": false 00:11:11.730 }, 00:11:11.730 "method": "bdev_nvme_attach_controller" 00:11:11.730 }' 00:11:11.730 [2024-10-08 18:21:40.157414] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
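The rpc_cmd calls above stand up the whole target for this stage. rpc_cmd forwards its arguments to scripts/rpc.py, so the same configuration can be expressed directly; a sketch assuming the nvmf_tgt started by nvmfappstart is listening on its default RPC socket:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py    # /var/tmp/spdk.sock assumed
    $RPC nvmf_create_transport -t tcp -o -u 8192                            # flags as in the rpc_cmd trace above
    $RPC bdev_malloc_create 64 512 -b Malloc0                               # 64 MiB ram bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The JSON printed just above is what gen_nvmf_target_json hands bdevio over /dev/fd/62: a single bdev_nvme_attach_controller entry that creates Nvme1 against 10.0.0.2:4420 with header and data digests disabled.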
00:11:11.730 [2024-10-08 18:21:40.157497] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1128481 ] 00:11:11.730 [2024-10-08 18:21:40.225624] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:11.990 [2024-10-08 18:21:40.345035] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:11:11.990 [2024-10-08 18:21:40.345083] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:11:11.990 [2024-10-08 18:21:40.345087] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.248 I/O targets: 00:11:12.248 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:12.248 00:11:12.248 00:11:12.248 CUnit - A unit testing framework for C - Version 2.1-3 00:11:12.249 http://cunit.sourceforge.net/ 00:11:12.249 00:11:12.249 00:11:12.249 Suite: bdevio tests on: Nvme1n1 00:11:12.249 Test: blockdev write read block ...passed 00:11:12.249 Test: blockdev write zeroes read block ...passed 00:11:12.249 Test: blockdev write zeroes read no split ...passed 00:11:12.249 Test: blockdev write zeroes read split ...passed 00:11:12.249 Test: blockdev write zeroes read split partial ...passed 00:11:12.249 Test: blockdev reset ...[2024-10-08 18:21:40.762659] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:12.249 [2024-10-08 18:21:40.762766] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x612f40 (9): Bad file descriptor 00:11:12.508 [2024-10-08 18:21:40.817119] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:11:12.508 passed 00:11:12.508 Test: blockdev write read 8 blocks ...passed 00:11:12.508 Test: blockdev write read size > 128k ...passed 00:11:12.508 Test: blockdev write read invalid size ...passed 00:11:12.508 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:12.508 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:12.508 Test: blockdev write read max offset ...passed 00:11:12.508 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:12.508 Test: blockdev writev readv 8 blocks ...passed 00:11:12.508 Test: blockdev writev readv 30 x 1block ...passed 00:11:12.508 Test: blockdev writev readv block ...passed 00:11:12.766 Test: blockdev writev readv size > 128k ...passed 00:11:12.766 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:12.766 Test: blockdev comparev and writev ...[2024-10-08 18:21:41.075640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:12.766 [2024-10-08 18:21:41.075683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:12.766 [2024-10-08 18:21:41.075709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:12.766 [2024-10-08 18:21:41.075727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:12.766 [2024-10-08 18:21:41.076204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:12.766 [2024-10-08 18:21:41.076231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:12.766 [2024-10-08 18:21:41.076254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:12.766 [2024-10-08 18:21:41.076270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:12.766 [2024-10-08 18:21:41.076716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:12.766 [2024-10-08 18:21:41.076742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:12.766 [2024-10-08 18:21:41.076765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:12.766 [2024-10-08 18:21:41.076782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:12.766 [2024-10-08 18:21:41.077235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:12.766 [2024-10-08 18:21:41.077260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:12.766 [2024-10-08 18:21:41.077290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:12.766 [2024-10-08 18:21:41.077308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:12.766 passed 00:11:12.766 Test: blockdev nvme passthru rw ...passed 00:11:12.766 Test: blockdev nvme passthru vendor specific ...[2024-10-08 18:21:41.159150] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:12.766 [2024-10-08 18:21:41.159179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:12.766 [2024-10-08 18:21:41.159454] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:12.766 [2024-10-08 18:21:41.159477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:12.766 [2024-10-08 18:21:41.159772] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:12.766 [2024-10-08 18:21:41.159794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:12.766 [2024-10-08 18:21:41.159976] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:12.766 [2024-10-08 18:21:41.159998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:12.766 passed 00:11:12.766 Test: blockdev nvme admin passthru ...passed 00:11:12.766 Test: blockdev copy ...passed 00:11:12.766 00:11:12.766 Run Summary: Type Total Ran Passed Failed Inactive 00:11:12.766 suites 1 1 n/a 0 0 00:11:12.766 tests 23 23 23 0 0 00:11:12.766 asserts 152 152 152 0 n/a 00:11:12.766 00:11:12.766 Elapsed time = 1.138 seconds 00:11:13.024 18:21:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:13.024 18:21:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.024 18:21:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:13.024 18:21:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.024 18:21:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:13.024 18:21:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:13.024 18:21:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:13.024 18:21:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:13.024 18:21:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:13.024 18:21:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:13.024 18:21:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:13.024 18:21:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:13.024 rmmod nvme_tcp 00:11:13.024 rmmod nvme_fabrics 00:11:13.024 rmmod nvme_keyring 00:11:13.024 18:21:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:13.024 18:21:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:13.024 18:21:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
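With all 23 bdevio tests passed and the full 152 asserts accounted for in the run summary above, the stage tears itself down. For reference, the stage can also be replayed on its own; a hedged sketch, assuming the same checkout path and an environment with the e810 ports already prepared:

    # assumes this checkout path and prepared e810 ports; same invocation run_test wrapped earlier
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp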
00:11:13.024 18:21:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@515 -- # '[' -n 1128442 ']' 00:11:13.024 18:21:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 1128442 00:11:13.024 18:21:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 1128442 ']' 00:11:13.024 18:21:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 1128442 00:11:13.024 18:21:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:11:13.024 18:21:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:13.024 18:21:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1128442 00:11:13.024 18:21:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:11:13.024 18:21:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:11:13.024 18:21:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1128442' 00:11:13.024 killing process with pid 1128442 00:11:13.024 18:21:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 1128442 00:11:13.025 18:21:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 1128442 00:11:13.591 18:21:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:13.591 18:21:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:13.592 18:21:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:13.592 18:21:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:13.592 18:21:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:11:13.592 18:21:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:13.592 18:21:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:11:13.592 18:21:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:13.592 18:21:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:13.592 18:21:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:13.592 18:21:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:13.592 18:21:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:15.498 18:21:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:15.498 00:11:15.498 real 0m7.497s 00:11:15.498 user 0m11.271s 00:11:15.498 sys 0m2.924s 00:11:15.498 18:21:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:15.498 18:21:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:15.498 ************************************ 00:11:15.498 END TEST nvmf_bdevio 00:11:15.498 ************************************ 00:11:15.498 18:21:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:15.498 00:11:15.498 real 4m46.724s 00:11:15.498 user 12m4.458s 00:11:15.498 sys 1m25.491s 
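One detail of the nvmftestfini teardown traced above is worth calling out: firewall cleanup is driven purely by rule comments. Every rule the harness inserts carries an 'SPDK_NVMF:...' comment (see the ipts call before the pings earlier), so iptr can drop exactly those rules without tracking them; condensed from the trace:

    iptables-save | grep -v SPDK_NVMF | iptables-restore    # strip only the SPDK_NVMF-tagged rules, keep the rest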
00:11:15.498 18:21:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:15.498 18:21:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:15.498 ************************************ 00:11:15.498 END TEST nvmf_target_core 00:11:15.498 ************************************ 00:11:15.498 18:21:43 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:15.498 18:21:43 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:15.498 18:21:43 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:15.498 18:21:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:15.498 ************************************ 00:11:15.498 START TEST nvmf_target_extra 00:11:15.498 ************************************ 00:11:15.498 18:21:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:15.757 * Looking for test storage... 00:11:15.757 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:15.757 18:21:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:15.757 18:21:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lcov --version 00:11:15.757 18:21:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:15.757 18:21:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:15.757 18:21:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:15.757 18:21:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:15.757 18:21:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:15.757 18:21:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:15.757 18:21:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:15.757 18:21:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:15.757 18:21:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:15.757 18:21:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:15.757 18:21:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:15.758 18:21:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:15.758 18:21:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:15.758 18:21:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:11:15.758 18:21:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:15.758 18:21:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:15.758 18:21:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:15.758 18:21:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:16.018 18:21:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:16.018 18:21:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:16.018 18:21:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:16.018 18:21:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:16.018 18:21:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:16.018 18:21:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:16.018 18:21:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:16.018 18:21:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:16.018 18:21:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:16.018 18:21:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:16.018 18:21:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:16.018 18:21:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:16.018 18:21:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:16.018 18:21:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:16.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.018 --rc genhtml_branch_coverage=1 00:11:16.018 --rc genhtml_function_coverage=1 00:11:16.018 --rc genhtml_legend=1 00:11:16.018 --rc geninfo_all_blocks=1 00:11:16.018 --rc geninfo_unexecuted_blocks=1 00:11:16.018 00:11:16.018 ' 00:11:16.018 18:21:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:16.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.018 --rc genhtml_branch_coverage=1 00:11:16.018 --rc genhtml_function_coverage=1 00:11:16.018 --rc genhtml_legend=1 00:11:16.018 --rc geninfo_all_blocks=1 00:11:16.018 --rc geninfo_unexecuted_blocks=1 00:11:16.018 00:11:16.018 ' 00:11:16.018 18:21:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:16.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.018 --rc genhtml_branch_coverage=1 00:11:16.018 --rc genhtml_function_coverage=1 00:11:16.018 --rc genhtml_legend=1 00:11:16.018 --rc geninfo_all_blocks=1 00:11:16.018 --rc geninfo_unexecuted_blocks=1 00:11:16.018 00:11:16.018 ' 00:11:16.018 18:21:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:16.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.018 --rc genhtml_branch_coverage=1 00:11:16.018 --rc genhtml_function_coverage=1 00:11:16.018 --rc genhtml_legend=1 00:11:16.018 --rc geninfo_all_blocks=1 00:11:16.018 --rc geninfo_unexecuted_blocks=1 00:11:16.018 00:11:16.018 ' 00:11:16.018 18:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:16.018 18:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:16.018 18:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:16.018 18:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:16.018 18:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
00:11:16.018 18:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:16.018 18:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:16.018 18:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:16.018 18:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:16.018 18:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:16.018 18:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:16.018 18:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:16.018 18:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:16.018 18:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:11:16.018 18:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:16.018 18:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:16.018 18:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:16.019 18:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:16.019 18:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:16.019 18:21:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:16.019 18:21:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:16.019 18:21:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:16.019 18:21:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:16.019 18:21:44 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.019 18:21:44 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.019 18:21:44 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.019 18:21:44 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:16.019 18:21:44 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.019 18:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:16.019 18:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:16.019 18:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:16.019 18:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:16.019 18:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:16.019 18:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:16.019 18:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:16.019 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:16.019 18:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:16.019 18:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:16.019 18:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:16.019 18:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:16.019 18:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:16.019 18:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:16.019 18:21:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:16.019 18:21:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:16.019 18:21:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:16.019 18:21:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:16.019 ************************************ 00:11:16.019 START TEST nvmf_example 00:11:16.019 ************************************ 00:11:16.019 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:16.019 * Looking for test storage... 
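A note on the "[: : integer expression expected" message captured above: it comes from line 33 of test/nvmf/common.sh, where an option that was never set ends up as an empty string inside a numeric test ('[' '' -eq 1 ']'), which test(1) rejects because -eq needs integer operands. The run continues, so it is noise rather than a failure. A minimal reproducer of the same message (illustrative variable name, same construct as the trace):

  opt=""                            # option never set, so the variable is empty
  if [ "$opt" -eq 1 ]; then         # -> "[: : integer expression expected" on stderr, condition is false
      echo "option enabled"
  fi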
00:11:16.019 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:16.019 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:16.019 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lcov --version 00:11:16.019 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:16.279 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:16.279 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:16.279 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:16.279 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:16.279 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:11:16.279 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:11:16.279 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:11:16.279 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:11:16.279 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:11:16.279 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:11:16.279 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:11:16.279 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:16.279 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:11:16.279 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:11:16.279 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:16.279 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:16.279 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:11:16.279 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:11:16.279 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:16.279 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:11:16.279 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:11:16.279 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:11:16.279 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:11:16.279 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:16.279 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:11:16.279 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:11:16.279 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:16.279 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:16.279 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:11:16.279 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:16.279 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:16.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.279 --rc genhtml_branch_coverage=1 00:11:16.279 --rc genhtml_function_coverage=1 00:11:16.279 --rc genhtml_legend=1 00:11:16.279 --rc geninfo_all_blocks=1 00:11:16.279 --rc geninfo_unexecuted_blocks=1 00:11:16.279 00:11:16.279 ' 00:11:16.279 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:16.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.279 --rc genhtml_branch_coverage=1 00:11:16.279 --rc genhtml_function_coverage=1 00:11:16.279 --rc genhtml_legend=1 00:11:16.280 --rc geninfo_all_blocks=1 00:11:16.280 --rc geninfo_unexecuted_blocks=1 00:11:16.280 00:11:16.280 ' 00:11:16.280 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:16.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.280 --rc genhtml_branch_coverage=1 00:11:16.280 --rc genhtml_function_coverage=1 00:11:16.280 --rc genhtml_legend=1 00:11:16.280 --rc geninfo_all_blocks=1 00:11:16.280 --rc geninfo_unexecuted_blocks=1 00:11:16.280 00:11:16.280 ' 00:11:16.280 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:16.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.280 --rc genhtml_branch_coverage=1 00:11:16.280 --rc genhtml_function_coverage=1 00:11:16.280 --rc genhtml_legend=1 00:11:16.280 --rc geninfo_all_blocks=1 00:11:16.280 --rc geninfo_unexecuted_blocks=1 00:11:16.280 00:11:16.280 ' 00:11:16.280 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:16.280 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:16.280 18:21:44 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:16.280 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:16.280 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:16.280 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:16.280 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:16.280 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:16.280 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:16.280 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:16.280 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:16.280 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:16.280 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:16.280 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:11:16.280 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:16.280 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:16.280 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:16.280 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:16.280 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:16.280 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:11:16.280 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:16.280 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:16.280 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:16.280 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.280 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.280 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.280 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:16.280 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.280 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:11:16.280 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:16.280 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:16.280 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:16.280 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:16.280 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:16.280 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:16.280 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:16.280 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:16.280 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:16.280 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:16.280 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:16.280 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:16.280 18:21:44 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:16.280 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:16.280 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:16.280 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:16.280 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:16.280 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:16.280 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:16.280 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:16.280 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:16.280 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:16.280 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:16.280 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:16.280 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:16.280 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:16.280 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:16.280 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:16.280 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:16.280 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:16.280 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:16.280 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:11:16.280 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:19.572 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:19.572 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:19.572 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:19.572 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:19.572 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:19.572 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:19.572 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:19.572 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:19.572 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:19.572 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:11:19.572 18:21:47 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:19.572 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:19.572 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:19.572 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:11:19.572 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:19.572 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:19.572 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:19.572 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:19.572 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:19.572 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:19.572 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:19.572 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:19.572 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:19.572 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:19.572 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:19.572 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:19.572 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:19.572 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:19.572 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:19.572 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:19.572 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:19.572 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:19.572 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:11:19.573 Found 0000:84:00.0 (0x8086 - 0x159b) 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:11:19.573 Found 0000:84:00.1 (0x8086 - 0x159b) 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:11:19.573 Found net devices under 0000:84:00.0: cvl_0_0 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:11:19.573 Found net devices under 0000:84:00.1: cvl_0_1 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:19.573 18:21:47 
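The discovery loop traced above resolves each detected e810 port (0000:84:00.0 and 0000:84:00.1, device id 0x159b) to its kernel interface by globbing sysfs, which is how cvl_0_0 and cvl_0_1 are found. Condensed to the essential lookup used by the trace:

  # map a PCI function to its net device name via sysfs (same glob as the trace)
  for pci in 0000:84:00.0 0000:84:00.1; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)       # e.g. .../net/cvl_0_0
      pci_net_devs=("${pci_net_devs[@]##*/}")                # strip the path, keep the interface name
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
  done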
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # is_hw=yes 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:19.573 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:19.573 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.339 ms 00:11:19.573 00:11:19.573 --- 10.0.0.2 ping statistics --- 00:11:19.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:19.573 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:19.573 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:19.573 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:11:19.573 00:11:19.573 --- 10.0.0.1 ping statistics --- 00:11:19.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:19.573 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # return 0 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1130879 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1130879 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 1130879 ']' 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:19.573 18:21:47 
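For orientation, the network plumbing traced above turns the two physical e810 ports into an initiator/target pair on a single host: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), connectivity is verified in both directions, and TCP port 4420 is opened. The same setup, condensed from the commands in the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port into its own namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address (root namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # allow NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                   # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target namespace -> root namespace
  modprobe nvme-tcp                                                    # kernel initiator for later connect tests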
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:19.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:19.573 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:20.144 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:20.144 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:11:20.144 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:20.144 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:20.144 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:20.144 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:20.144 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.144 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:20.144 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.144 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:20.144 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.144 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:20.144 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.144 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:20.144 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:20.144 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.144 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:20.144 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.144 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:20.144 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:20.144 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.144 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:20.144 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.144 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:20.144 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # 
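The rpc_cmd calls traced above are the entire provisioning step for this example: a TCP transport, a RAM-backed bdev, and a subsystem exposing that bdev on 10.0.0.2:4420. rpc_cmd is the harness wrapper for sending JSON-RPCs to the running target; assuming you issue the same calls by hand with scripts/rpc.py against that target, the sequence is:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192            # TCP transport (flags as used by the harness)
  scripts/rpc.py bdev_malloc_create 64 512                          # 64 MiB malloc bdev, 512 B blocks -> Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420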
xtrace_disable 00:11:20.144 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:20.144 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.144 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:20.144 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:32.355 Initializing NVMe Controllers 00:11:32.355 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:32.355 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:32.355 Initialization complete. Launching workers. 00:11:32.355 ======================================================== 00:11:32.355 Latency(us) 00:11:32.355 Device Information : IOPS MiB/s Average min max 00:11:32.355 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14824.79 57.91 4317.73 850.00 47893.01 00:11:32.355 ======================================================== 00:11:32.355 Total : 14824.79 57.91 4317.73 850.00 47893.01 00:11:32.355 00:11:32.355 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:32.355 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:32.355 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:32.355 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:32.355 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:32.355 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:32.355 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:32.355 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:32.355 rmmod nvme_tcp 00:11:32.355 rmmod nvme_fabrics 00:11:32.355 rmmod nvme_keyring 00:11:32.355 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:32.355 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:11:32.355 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:32.355 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@515 -- # '[' -n 1130879 ']' 00:11:32.355 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # killprocess 1130879 00:11:32.355 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 1130879 ']' 00:11:32.355 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 1130879 00:11:32.355 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:11:32.355 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:32.355 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1130879 00:11:32.355 18:21:58 
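The ~14.8K IOPS / 57.9 MiB/s figure in the table above comes from a ten-second spdk_nvme_perf run against the subsystem that was just created. The same invocation, with the flags spelled out:

  # -q 64: queue depth   -o 4096: 4 KiB I/Os   -w randrw -M 30: random mix, 30% reads   -t 10: run time in seconds
  ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'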
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:11:32.355 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:11:32.355 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1130879' 00:11:32.355 killing process with pid 1130879 00:11:32.355 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 1130879 00:11:32.355 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 1130879 00:11:32.355 nvmf threads initialize successfully 00:11:32.355 bdev subsystem init successfully 00:11:32.355 created a nvmf target service 00:11:32.355 create targets's poll groups done 00:11:32.355 all subsystems of target started 00:11:32.355 nvmf target is running 00:11:32.355 all subsystems of target stopped 00:11:32.355 destroy targets's poll groups done 00:11:32.355 destroyed the nvmf target service 00:11:32.355 bdev subsystem finish successfully 00:11:32.355 nvmf threads destroy successfully 00:11:32.355 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:32.355 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:32.355 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:32.355 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:11:32.355 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # iptables-save 00:11:32.355 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:32.355 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # iptables-restore 00:11:32.355 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:32.355 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:32.355 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:32.355 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:32.355 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:32.924 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:32.924 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:32.924 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:32.924 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:32.924 00:11:32.924 real 0m16.958s 00:11:32.924 user 0m43.774s 00:11:32.924 sys 0m4.488s 00:11:32.924 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:32.924 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:32.924 ************************************ 00:11:32.924 END TEST nvmf_example 00:11:32.924 ************************************ 00:11:32.924 18:22:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:32.924 18:22:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:32.924 18:22:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:32.924 18:22:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:32.924 ************************************ 00:11:32.924 START TEST nvmf_filesystem 00:11:32.924 ************************************ 00:11:32.924 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:32.924 * Looking for test storage... 00:11:32.924 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:32.924 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:32.924 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:11:32.924 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:33.187 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:33.187 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:33.187 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:33.187 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:33.187 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:33.187 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:33.187 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:33.187 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:33.187 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:33.187 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:33.187 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:33.187 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:33.187 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:33.187 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:33.187 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:33.187 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:33.187 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:33.187 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:33.187 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:33.187 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:33.187 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:33.187 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:33.187 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:33.187 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:33.187 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:33.187 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:33.187 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:33.187 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:33.187 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:33.187 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:33.187 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:33.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.187 --rc genhtml_branch_coverage=1 00:11:33.187 --rc genhtml_function_coverage=1 00:11:33.187 --rc genhtml_legend=1 00:11:33.187 --rc geninfo_all_blocks=1 00:11:33.187 --rc geninfo_unexecuted_blocks=1 00:11:33.187 00:11:33.187 ' 00:11:33.187 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:33.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.187 --rc genhtml_branch_coverage=1 00:11:33.187 --rc genhtml_function_coverage=1 00:11:33.187 --rc genhtml_legend=1 00:11:33.187 --rc geninfo_all_blocks=1 00:11:33.187 --rc geninfo_unexecuted_blocks=1 00:11:33.187 00:11:33.187 ' 00:11:33.187 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:33.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.187 --rc genhtml_branch_coverage=1 00:11:33.187 --rc genhtml_function_coverage=1 00:11:33.187 --rc genhtml_legend=1 00:11:33.187 --rc geninfo_all_blocks=1 00:11:33.187 --rc geninfo_unexecuted_blocks=1 00:11:33.187 00:11:33.187 ' 00:11:33.187 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:33.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.187 --rc genhtml_branch_coverage=1 00:11:33.187 --rc genhtml_function_coverage=1 00:11:33.187 --rc genhtml_legend=1 00:11:33.187 --rc geninfo_all_blocks=1 00:11:33.187 --rc geninfo_unexecuted_blocks=1 00:11:33.187 00:11:33.187 ' 00:11:33.187 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:33.187 18:22:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:33.187 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:33.187 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:33.187 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:33.187 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:33.187 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:33.187 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:33.187 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:33.187 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:33.187 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:33.187 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:33.187 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:33.187 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:33.187 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:33.187 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:33.187 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:33.187 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:33.187 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:33.187 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:33.187 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:33.187 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:33.187 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:33.187 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:33.187 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:33.187 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:11:33.187 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:33.187 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:33.187 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:11:33.188 18:22:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR= 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # 
CONFIG_RDMA=y 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=y 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR= 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_FC=n 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_TESTS=y 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_URING=n 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:33.188 #define SPDK_CONFIG_H 00:11:33.188 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:33.188 #define SPDK_CONFIG_APPS 1 00:11:33.188 #define SPDK_CONFIG_ARCH native 00:11:33.188 #undef SPDK_CONFIG_ASAN 00:11:33.188 #undef SPDK_CONFIG_AVAHI 00:11:33.188 #undef SPDK_CONFIG_CET 00:11:33.188 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:33.188 #define SPDK_CONFIG_COVERAGE 1 00:11:33.188 #define SPDK_CONFIG_CROSS_PREFIX 00:11:33.188 #undef SPDK_CONFIG_CRYPTO 00:11:33.188 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:33.188 #undef SPDK_CONFIG_CUSTOMOCF 00:11:33.188 #undef SPDK_CONFIG_DAOS 00:11:33.188 #define SPDK_CONFIG_DAOS_DIR 00:11:33.188 #define SPDK_CONFIG_DEBUG 1 00:11:33.188 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:33.188 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:33.188 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:33.188 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:33.188 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:33.188 #undef SPDK_CONFIG_DPDK_UADK 00:11:33.188 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:33.188 #define SPDK_CONFIG_EXAMPLES 1 00:11:33.188 #undef SPDK_CONFIG_FC 00:11:33.188 #define SPDK_CONFIG_FC_PATH 00:11:33.188 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:33.188 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:33.188 #define SPDK_CONFIG_FSDEV 1 00:11:33.188 #undef SPDK_CONFIG_FUSE 00:11:33.188 #undef SPDK_CONFIG_FUZZER 00:11:33.188 #define SPDK_CONFIG_FUZZER_LIB 00:11:33.188 #undef SPDK_CONFIG_GOLANG 00:11:33.188 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:33.188 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:33.188 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:33.188 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:33.188 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:33.188 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:33.188 #undef SPDK_CONFIG_HAVE_LZ4 00:11:33.188 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:33.188 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:33.188 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:33.188 #define SPDK_CONFIG_IDXD 1 00:11:33.188 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:33.188 #undef SPDK_CONFIG_IPSEC_MB 00:11:33.188 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:33.188 #define SPDK_CONFIG_ISAL 1 00:11:33.188 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:33.188 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:33.188 #define SPDK_CONFIG_LIBDIR 00:11:33.188 #undef SPDK_CONFIG_LTO 00:11:33.188 #define SPDK_CONFIG_MAX_LCORES 128 00:11:33.188 #define SPDK_CONFIG_NVME_CUSE 1 00:11:33.188 #undef SPDK_CONFIG_OCF 00:11:33.188 #define SPDK_CONFIG_OCF_PATH 00:11:33.188 #define SPDK_CONFIG_OPENSSL_PATH 00:11:33.188 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:33.188 #define SPDK_CONFIG_PGO_DIR 00:11:33.188 #undef SPDK_CONFIG_PGO_USE 00:11:33.188 #define SPDK_CONFIG_PREFIX /usr/local 00:11:33.188 #undef SPDK_CONFIG_RAID5F 00:11:33.188 #undef SPDK_CONFIG_RBD 00:11:33.188 #define SPDK_CONFIG_RDMA 1 00:11:33.188 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:33.188 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:33.188 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:33.188 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:33.188 #define SPDK_CONFIG_SHARED 1 00:11:33.188 #undef SPDK_CONFIG_SMA 00:11:33.188 #define SPDK_CONFIG_TESTS 1 00:11:33.188 #undef SPDK_CONFIG_TSAN 00:11:33.188 #define SPDK_CONFIG_UBLK 1 00:11:33.188 #define SPDK_CONFIG_UBSAN 1 00:11:33.188 #undef SPDK_CONFIG_UNIT_TESTS 00:11:33.188 #undef SPDK_CONFIG_URING 00:11:33.188 #define 
SPDK_CONFIG_URING_PATH 00:11:33.188 #undef SPDK_CONFIG_URING_ZNS 00:11:33.188 #undef SPDK_CONFIG_USDT 00:11:33.188 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:33.188 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:33.188 #define SPDK_CONFIG_VFIO_USER 1 00:11:33.188 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:33.188 #define SPDK_CONFIG_VHOST 1 00:11:33.188 #define SPDK_CONFIG_VIRTIO 1 00:11:33.188 #undef SPDK_CONFIG_VTUNE 00:11:33.188 #define SPDK_CONFIG_VTUNE_DIR 00:11:33.188 #define SPDK_CONFIG_WERROR 1 00:11:33.188 #define SPDK_CONFIG_WPDK_DIR 00:11:33.188 #undef SPDK_CONFIG_XNVME 00:11:33.188 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.188 18:22:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:33.188 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- 
# export SPDK_TEST_IOAT 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:33.189 
18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:33.189 18:22:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
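The very long LD_LIBRARY_PATH value above is just the SPDK, DPDK and libvfio-user build directories appended once per time this environment file is sourced in the same shell (the same triple of directories repeats, with a leading ':' left over from the initially empty value). A minimal sketch of that append, with the directories copied from the SPDK_LIB_DIR/DPDK_LIB_DIR/VFIO_LIB_DIR exports above; the exact line inside autotest_common.sh is assumed, not quoted:

    # Append the freshly built SPDK/DPDK/libvfio-user libraries to the loader path.
    spdk_root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$spdk_root/build/lib:$spdk_root/dpdk/build/lib:$spdk_root/build/libvfio-user/usr/local/lib"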
00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
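The long run of '-- # : 0' / '-- # export SPDK_TEST_...' pairs above is bash xtrace of a default-then-export idiom: each test flag keeps whatever autorun-spdk.conf injected (1, tcp, e810, ...) and otherwise falls back to a default, usually 0. A minimal sketch with two flags taken from the trace; the exact spelling inside autotest_common.sh is assumed:

    : "${SPDK_TEST_FTL:=0}"      # not set by autorun-spdk.conf, so xtrace shows ': 0'
    export SPDK_TEST_FTL
    : "${SPDK_TEST_NVMF:=0}"     # already set to 1 by the conf file, so xtrace shows ': 1'
    export SPDK_TEST_NVMF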
00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j48 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 1132576 ]] 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 1132576 00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 
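The set_test_storage call at the end of the record above is what produces the mount/size bookkeeping traced below: it picks a directory for test data, parses df -T into per-mount associative arrays, and accepts a candidate only if it has the requested free space and would not push its filesystem past the 95% guard. A condensed, assumed reconstruction, not the verbatim autotest_common.sh code (names follow the xtrace; with the numbers below, avail=39229308928 >= 2214592512 and new_size = 2214592512 + 5847769088 = 8062361600, about 17% of the 45077078016-byte filesystem, so the target directory is accepted and exported as SPDK_TEST_STORAGE):

    #!/usr/bin/env bash
    # Condensed sketch of the storage probe traced below.
    requested_size=2214592512                       # ~2 GiB of test data plus slack, as in the trace
    target_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
    mkdir -p "$target_dir"

    # df -T -B1 reports: source, fstype, size, used, available, use%, mount point (in bytes).
    read -r _src fs size used avail _pct mount < <(df -T -B1 "$target_dir" | tail -1)

    if (( avail >= requested_size )); then
        new_size=$((requested_size + used))         # projected usage if the test fills its quota
        if (( new_size * 100 / size <= 95 )); then  # same 95% guard as in the trace below
            export SPDK_TEST_STORAGE=$target_dir
        fi
    fi
    echo "using ${SPDK_TEST_STORAGE:-a fallback under /tmp} on $fs ($mount)"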
00:11:33.189 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:11:33.190 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:11:33.190 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:11:33.190 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:11:33.190 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:11:33.190 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:11:33.190 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:11:33.190 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.IvaIu2 00:11:33.190 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:33.190 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:11:33.190 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:11:33.190 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.IvaIu2/tests/target /tmp/spdk.IvaIu2 00:11:33.190 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:11:33.190 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:33.190 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:11:33.190 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:11:33.190 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:11:33.190 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:11:33.190 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:11:33.190 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:11:33.190 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:11:33.190 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:33.190 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:11:33.190 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:11:33.190 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=660762624 00:11:33.190 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:11:33.190 18:22:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=4623667200 00:11:33.190 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:33.190 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:11:33.190 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:11:33.190 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=39229308928 00:11:33.190 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=45077078016 00:11:33.190 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5847769088 00:11:33.190 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:33.190 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:33.190 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:33.190 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=22528507904 00:11:33.190 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=22538539008 00:11:33.190 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=10031104 00:11:33.190 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:33.190 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:33.190 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:33.190 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=8992956416 00:11:33.190 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=9015418880 00:11:33.190 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=22462464 00:11:33.190 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:33.190 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:33.190 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:33.190 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=22538072064 00:11:33.190 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=22538539008 00:11:33.190 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=466944 00:11:33.190 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:33.190 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:33.190 18:22:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:33.190 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=4507693056 00:11:33.190 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=4507705344 00:11:33.190 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:11:33.190 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:33.190 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:11:33.190 * Looking for test storage... 00:11:33.190 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:11:33.190 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:11:33.451 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:33.451 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:33.451 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:11:33.451 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=39229308928 00:11:33.451 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:11:33.451 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:11:33.451 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:11:33.451 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:11:33.451 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:11:33.451 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=8062361600 00:11:33.451 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:33.451 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:33.451 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:33.451 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:33.451 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:33.451 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:11:33.451 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1668 -- # set -o errtrace 00:11:33.451 18:22:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:11:33.451 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:33.451 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1672 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:33.451 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1673 -- # true 00:11:33.451 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1675 -- # xtrace_fd 00:11:33.451 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:33.451 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:33.451 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:33.451 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:33.451 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:33.451 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:33.451 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:33.451 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:33.451 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:33.451 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:11:33.451 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:33.451 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:33.451 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:33.451 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:33.451 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:33.451 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:33.451 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:33.451 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:33.451 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:33.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.452 --rc genhtml_branch_coverage=1 00:11:33.452 --rc genhtml_function_coverage=1 00:11:33.452 --rc genhtml_legend=1 00:11:33.452 --rc geninfo_all_blocks=1 00:11:33.452 --rc geninfo_unexecuted_blocks=1 00:11:33.452 00:11:33.452 ' 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:33.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.452 --rc genhtml_branch_coverage=1 00:11:33.452 --rc genhtml_function_coverage=1 00:11:33.452 --rc genhtml_legend=1 00:11:33.452 --rc geninfo_all_blocks=1 00:11:33.452 --rc geninfo_unexecuted_blocks=1 00:11:33.452 00:11:33.452 ' 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:33.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.452 --rc genhtml_branch_coverage=1 00:11:33.452 --rc genhtml_function_coverage=1 00:11:33.452 --rc genhtml_legend=1 00:11:33.452 --rc geninfo_all_blocks=1 00:11:33.452 --rc geninfo_unexecuted_blocks=1 00:11:33.452 00:11:33.452 ' 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:33.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.452 --rc genhtml_branch_coverage=1 00:11:33.452 --rc genhtml_function_coverage=1 00:11:33.452 --rc genhtml_legend=1 00:11:33.452 --rc geninfo_all_blocks=1 00:11:33.452 --rc geninfo_unexecuted_blocks=1 00:11:33.452 00:11:33.452 ' 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:33.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:33.452 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:36.843 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:36.843 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:36.843 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:36.843 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:36.843 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:36.843 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:36.843 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:36.843 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:36.843 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:36.843 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:36.843 
18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:36.843 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:36.843 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:36.843 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:36.843 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:36.843 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:36.843 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:36.843 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:36.843 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:36.843 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:36.843 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:36.843 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:36.843 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:36.843 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:36.843 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:36.843 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:36.843 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:36.843 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:36.843 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:36.843 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:36.843 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:36.843 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:36.843 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:36.843 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:36.843 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:11:36.843 Found 0000:84:00.0 (0x8086 - 0x159b) 00:11:36.843 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:36.843 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:36.843 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:36.843 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:11:36.843 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:36.843 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:36.844 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:11:36.844 Found 0000:84:00.1 (0x8086 - 0x159b) 00:11:36.844 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:36.844 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:36.844 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:36.844 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:36.844 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:36.844 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:36.844 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:36.844 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:36.844 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:36.844 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:36.844 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:36.844 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:36.844 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:36.844 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:36.844 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:36.844 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:11:36.844 Found net devices under 0000:84:00.0: cvl_0_0 00:11:36.844 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:36.844 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:36.844 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:36.844 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:36.844 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:36.844 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:36.844 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:36.844 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:36.844 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:11:36.844 Found net devices under 
0000:84:00.1: cvl_0_1 00:11:36.844 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:36.844 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:36.844 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # is_hw=yes 00:11:36.844 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:36.844 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:36.844 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:36.844 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:36.844 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:36.844 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:36.844 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:36.844 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:36.844 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:36.844 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:36.844 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:36.844 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:36.844 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:36.844 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:36.844 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:36.844 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:36.844 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:36.844 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:36.844 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:36.844 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:36.844 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:36.844 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:36.844 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:36.844 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:36.844 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:36.844 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:36.844 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:36.844 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.342 ms 00:11:36.844 00:11:36.844 --- 10.0.0.2 ping statistics --- 00:11:36.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:36.844 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:11:36.844 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:36.844 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:36.844 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:11:36.844 00:11:36.844 --- 10.0.0.1 ping statistics --- 00:11:36.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:36.844 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:11:36.844 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:36.844 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # return 0 00:11:36.844 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:36.844 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:36.844 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:36.844 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:36.844 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:36.844 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:36.844 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:36.844 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:36.844 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:36.844 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:36.844 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:36.844 ************************************ 00:11:36.844 START TEST nvmf_filesystem_no_in_capsule 00:11:36.844 ************************************ 00:11:36.844 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:11:36.844 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:36.844 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:36.844 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:36.844 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:36.844 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 
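For readers skimming the xtrace, the nvmf_tcp_init sequence above reduces to the shell steps below: the target-side port is moved into a private network namespace while the initiator-side port stays in the default namespace, giving a 10.0.0.0/24 path for NVMe/TCP on port 4420. Interface names (cvl_0_0, cvl_0_1), addresses, and the namespace name are the ones used on this particular node, not fixed constants.

NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                          # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side (default netns)
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side (inside netns)
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # the run also tags this rule with an SPDK_NVMF comment
ping -c 1 10.0.0.2                                       # initiator -> target reachability
ip netns exec "$NS" ping -c 1 10.0.0.1                   # target -> initiator reachability
modprobe nvme-tcp                                        # kernel NVMe/TCP initiator driver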
00:11:36.844 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=1134361 00:11:36.844 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:36.844 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 1134361 00:11:36.844 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 1134361 ']' 00:11:36.844 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:36.844 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:36.844 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:36.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:36.844 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:36.844 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:36.844 [2024-10-08 18:22:05.345438] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:11:36.844 [2024-10-08 18:22:05.345618] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:37.105 [2024-10-08 18:22:05.514088] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:37.364 [2024-10-08 18:22:05.739505] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:37.364 [2024-10-08 18:22:05.739622] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:37.364 [2024-10-08 18:22:05.739674] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:37.364 [2024-10-08 18:22:05.739708] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:37.364 [2024-10-08 18:22:05.739738] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:37.364 [2024-10-08 18:22:05.743397] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:11:37.364 [2024-10-08 18:22:05.743500] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:11:37.364 [2024-10-08 18:22:05.743591] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:11:37.364 [2024-10-08 18:22:05.743594] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:37.364 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:37.364 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:37.364 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:37.364 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:37.364 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.622 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:37.622 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:37.622 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:37.622 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.622 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.622 [2024-10-08 18:22:05.919157] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:37.622 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.622 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:37.622 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.622 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.622 Malloc1 00:11:37.622 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.622 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:37.622 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.622 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.622 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.622 18:22:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:37.622 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.622 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.622 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.622 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:37.622 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.622 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.622 [2024-10-08 18:22:06.099093] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:37.622 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.622 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:37.622 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:37.622 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:37.622 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:37.622 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:37.622 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:37.622 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.622 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.622 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.622 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:37.622 { 00:11:37.622 "name": "Malloc1", 00:11:37.622 "aliases": [ 00:11:37.622 "a2905d89-7b59-46bb-8e63-b033ea40b4fb" 00:11:37.622 ], 00:11:37.622 "product_name": "Malloc disk", 00:11:37.622 "block_size": 512, 00:11:37.623 "num_blocks": 1048576, 00:11:37.623 "uuid": "a2905d89-7b59-46bb-8e63-b033ea40b4fb", 00:11:37.623 "assigned_rate_limits": { 00:11:37.623 "rw_ios_per_sec": 0, 00:11:37.623 "rw_mbytes_per_sec": 0, 00:11:37.623 "r_mbytes_per_sec": 0, 00:11:37.623 "w_mbytes_per_sec": 0 00:11:37.623 }, 00:11:37.623 "claimed": true, 00:11:37.623 "claim_type": "exclusive_write", 00:11:37.623 "zoned": false, 00:11:37.623 "supported_io_types": { 00:11:37.623 "read": 
true, 00:11:37.623 "write": true, 00:11:37.623 "unmap": true, 00:11:37.623 "flush": true, 00:11:37.623 "reset": true, 00:11:37.623 "nvme_admin": false, 00:11:37.623 "nvme_io": false, 00:11:37.623 "nvme_io_md": false, 00:11:37.623 "write_zeroes": true, 00:11:37.623 "zcopy": true, 00:11:37.623 "get_zone_info": false, 00:11:37.623 "zone_management": false, 00:11:37.623 "zone_append": false, 00:11:37.623 "compare": false, 00:11:37.623 "compare_and_write": false, 00:11:37.623 "abort": true, 00:11:37.623 "seek_hole": false, 00:11:37.623 "seek_data": false, 00:11:37.623 "copy": true, 00:11:37.623 "nvme_iov_md": false 00:11:37.623 }, 00:11:37.623 "memory_domains": [ 00:11:37.623 { 00:11:37.623 "dma_device_id": "system", 00:11:37.623 "dma_device_type": 1 00:11:37.623 }, 00:11:37.623 { 00:11:37.623 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.623 "dma_device_type": 2 00:11:37.623 } 00:11:37.623 ], 00:11:37.623 "driver_specific": {} 00:11:37.623 } 00:11:37.623 ]' 00:11:37.623 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:37.882 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:37.882 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:37.882 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:37.882 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:37.882 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:37.882 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:37.882 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:38.452 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:38.452 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:38.452 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:38.452 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:38.452 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:40.357 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:40.357 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:40.357 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:11:40.357 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:40.357 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:40.357 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:40.357 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:40.357 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:40.358 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:40.358 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:40.358 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:40.358 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:40.358 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:40.358 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:40.358 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:40.358 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:40.358 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:40.616 18:22:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:41.184 18:22:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:42.564 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:42.564 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:42.564 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:42.564 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:42.564 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:42.564 ************************************ 00:11:42.564 START TEST filesystem_ext4 00:11:42.564 ************************************ 00:11:42.564 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 
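Condensed from the preceding trace, the target bring-up and host attach amount to the sequence below. rpc_cmd is the autotest helper that forwards to SPDK's RPC client over /var/tmp/spdk.sock, and NVME_HOSTNQN/NVME_HOSTID are the values nvmf/common.sh generated earlier in this log (nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-... / cd6acfbe-...); this is a sketch assuming that sourced environment, not a standalone script.

# Target side: start nvmf_tgt inside the namespace, then provision a 512 MiB
# malloc bdev behind an NVMe-oF subsystem listening on 10.0.0.2:4420.
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!                                               # 1134361 in this run; waitforlisten polls the RPC socket

rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0     # -c 0: the "no in-capsule data" variant
rpc_cmd bdev_malloc_create 512 512 -b Malloc1            # 512 MiB bdev, 512-byte blocks
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Host side: connect with the kernel initiator, locate the new block device by
# its serial, and carve a single GPT partition covering the whole namespace.
nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')   # nvme0n1 here
parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe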
00:11:42.564 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:42.564 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:42.564 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:42.564 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:42.564 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:42.564 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:42.564 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:42.564 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:11:42.564 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:11:42.564 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:42.564 mke2fs 1.47.0 (5-Feb-2023) 00:11:42.564 Discarding device blocks: 0/522240 done 00:11:42.564 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:42.564 Filesystem UUID: 87578ac3-dc89-402d-ba85-e1df32a71a07 00:11:42.564 Superblock backups stored on blocks: 00:11:42.564 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:42.564 00:11:42.564 Allocating group tables: 0/64 done 00:11:42.564 Writing inode tables: 0/64 done 00:11:42.564 Creating journal (8192 blocks): done 00:11:42.564 Writing superblocks and filesystem accounting information: 0/64 done 00:11:42.564 00:11:42.564 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:42.564 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:47.884 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:47.884 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:47.884 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:47.884 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:47.884 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:47.884 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:47.884 
18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1134361 00:11:47.884 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:47.884 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:47.884 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:47.884 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:47.884 00:11:47.884 real 0m5.607s 00:11:47.884 user 0m0.012s 00:11:47.884 sys 0m0.072s 00:11:47.884 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:47.884 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:47.884 ************************************ 00:11:47.884 END TEST filesystem_ext4 00:11:47.884 ************************************ 00:11:47.884 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:47.884 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:47.884 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:47.884 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:47.885 ************************************ 00:11:47.885 START TEST filesystem_btrfs 00:11:47.885 ************************************ 00:11:47.885 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:47.885 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:47.885 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:47.885 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:47.885 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:11:47.885 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:47.885 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:11:47.885 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:11:47.885 18:22:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:11:47.885 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:11:47.885 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:48.144 btrfs-progs v6.8.1 00:11:48.144 See https://btrfs.readthedocs.io for more information. 00:11:48.144 00:11:48.144 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:48.144 NOTE: several default settings have changed in version 5.15, please make sure 00:11:48.144 this does not affect your deployments: 00:11:48.144 - DUP for metadata (-m dup) 00:11:48.144 - enabled no-holes (-O no-holes) 00:11:48.144 - enabled free-space-tree (-R free-space-tree) 00:11:48.144 00:11:48.144 Label: (null) 00:11:48.144 UUID: c926ac15-f919-4063-8caf-318fa0e557f9 00:11:48.144 Node size: 16384 00:11:48.144 Sector size: 4096 (CPU page size: 4096) 00:11:48.144 Filesystem size: 510.00MiB 00:11:48.144 Block group profiles: 00:11:48.144 Data: single 8.00MiB 00:11:48.144 Metadata: DUP 32.00MiB 00:11:48.144 System: DUP 8.00MiB 00:11:48.144 SSD detected: yes 00:11:48.144 Zoned device: no 00:11:48.144 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:48.144 Checksum: crc32c 00:11:48.144 Number of devices: 1 00:11:48.144 Devices: 00:11:48.144 ID SIZE PATH 00:11:48.144 1 510.00MiB /dev/nvme0n1p1 00:11:48.144 00:11:48.144 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:11:48.144 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:48.403 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:48.403 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:48.403 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:48.403 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:48.403 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:48.403 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:48.663 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1134361 00:11:48.663 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:48.663 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:48.663 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:48.663 
18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:48.663 00:11:48.663 real 0m0.625s 00:11:48.663 user 0m0.019s 00:11:48.663 sys 0m0.101s 00:11:48.663 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:48.663 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:48.663 ************************************ 00:11:48.663 END TEST filesystem_btrfs 00:11:48.663 ************************************ 00:11:48.663 18:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:48.663 18:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:48.663 18:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:48.663 18:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:48.663 ************************************ 00:11:48.663 START TEST filesystem_xfs 00:11:48.663 ************************************ 00:11:48.663 18:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:11:48.663 18:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:48.663 18:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:48.663 18:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:48.663 18:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:11:48.663 18:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:48.663 18:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:11:48.663 18:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:11:48.663 18:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:11:48.663 18:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:11:48.663 18:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:48.663 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:48.663 = sectsz=512 attr=2, projid32bit=1 00:11:48.663 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:48.663 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:48.663 data 
= bsize=4096 blocks=130560, imaxpct=25 00:11:48.663 = sunit=0 swidth=0 blks 00:11:48.663 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:48.663 log =internal log bsize=4096 blocks=16384, version=2 00:11:48.663 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:48.663 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:49.599 Discarding blocks...Done. 00:11:49.599 18:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:11:49.599 18:22:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:51.504 18:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:51.504 18:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:51.504 18:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:51.504 18:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:51.504 18:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:51.504 18:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:51.504 18:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1134361 00:11:51.504 18:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:51.504 18:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:51.504 18:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:51.504 18:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:51.504 00:11:51.504 real 0m2.797s 00:11:51.504 user 0m0.018s 00:11:51.504 sys 0m0.061s 00:11:51.504 18:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:51.504 18:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:51.504 ************************************ 00:11:51.504 END TEST filesystem_xfs 00:11:51.504 ************************************ 00:11:51.504 18:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:51.764 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:51.764 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:51.764 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.764 18:22:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:51.764 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:11:51.764 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:51.764 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:51.764 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:51.764 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:51.765 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:11:51.765 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:51.765 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.765 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:51.765 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.765 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:51.765 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1134361 00:11:51.765 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 1134361 ']' 00:11:51.765 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 1134361 00:11:51.765 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:11:51.765 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:51.765 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1134361 00:11:52.024 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:52.024 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:52.024 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1134361' 00:11:52.024 killing process with pid 1134361 00:11:52.024 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 1134361 00:11:52.024 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@974 -- # wait 1134361 00:11:52.595 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:52.595 00:11:52.595 real 0m15.729s 00:11:52.595 user 0m59.515s 00:11:52.595 sys 0m2.311s 00:11:52.595 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:52.595 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:52.595 ************************************ 00:11:52.595 END TEST nvmf_filesystem_no_in_capsule 00:11:52.595 ************************************ 00:11:52.595 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:52.595 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:52.595 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:52.595 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:52.595 ************************************ 00:11:52.595 START TEST nvmf_filesystem_in_capsule 00:11:52.595 ************************************ 00:11:52.595 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:11:52.595 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:52.595 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:52.595 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:52.595 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:52.595 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:52.595 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=1136378 00:11:52.595 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 1136378 00:11:52.595 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:52.595 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 1136378 ']' 00:11:52.595 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:52.595 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:52.595 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:52.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
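
The entries that follow stand up the in-capsule variant of the NVMe/TCP target over JSON-RPC: a TCP transport with 4096-byte in-capsule data, a 512 MiB Malloc bdev, and subsystem nqn.2016-06.io.spdk:cnode1 with a listener on 10.0.0.2:4420. A condensed sketch of that RPC sequence, with the flags copied from the rpc_cmd calls in the trace; using scripts/rpc.py directly against the default /var/tmp/spdk.sock is an assumption, since the test drives these through its rpc_cmd wrapper inside the cvl_0_0_ns_spdk namespace:

    # Sketch only: flag values are taken verbatim from the trace below.
    rpc=./scripts/rpc.py

    # TCP transport with 4096-byte in-capsule data (the in_capsule variant).
    $rpc nvmf_create_transport -t tcp -o -u 8192 -c 4096

    # Backing bdev: 512 MiB of 512-byte blocks (1048576 blocks, matching the
    # bdev_get_bdevs dump in the trace).
    $rpc bdev_malloc_create 512 512 -b Malloc1

    # Subsystem with the serial the host side greps for, its namespace, and a
    # TCP listener on the target address/port seen in the log.
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
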
00:11:52.595 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:52.595 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:52.595 [2024-10-08 18:22:21.119811] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:11:52.595 [2024-10-08 18:22:21.119995] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:52.856 [2024-10-08 18:22:21.257042] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:53.114 [2024-10-08 18:22:21.482493] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:53.114 [2024-10-08 18:22:21.482614] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:53.114 [2024-10-08 18:22:21.482665] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:53.114 [2024-10-08 18:22:21.482700] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:53.114 [2024-10-08 18:22:21.482727] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:53.114 [2024-10-08 18:22:21.486452] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:11:53.114 [2024-10-08 18:22:21.486553] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:11:53.115 [2024-10-08 18:22:21.486644] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:11:53.115 [2024-10-08 18:22:21.486648] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.115 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:53.115 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:53.115 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:53.115 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:53.115 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.373 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:53.373 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:53.373 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:53.373 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.373 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.373 [2024-10-08 18:22:21.668466] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:53.373 18:22:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.373 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:53.373 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.373 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.373 Malloc1 00:11:53.373 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.373 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:53.373 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.373 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.373 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.373 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:53.373 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.373 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.373 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.373 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:53.373 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.373 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.373 [2024-10-08 18:22:21.842188] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:53.373 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.373 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:53.373 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:53.373 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:53.373 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:53.373 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:53.373 18:22:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:53.373 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.373 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.373 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.373 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:53.373 { 00:11:53.373 "name": "Malloc1", 00:11:53.373 "aliases": [ 00:11:53.373 "106068e8-2bfd-40fe-9c5c-47ea3b35cb60" 00:11:53.373 ], 00:11:53.373 "product_name": "Malloc disk", 00:11:53.373 "block_size": 512, 00:11:53.373 "num_blocks": 1048576, 00:11:53.373 "uuid": "106068e8-2bfd-40fe-9c5c-47ea3b35cb60", 00:11:53.373 "assigned_rate_limits": { 00:11:53.373 "rw_ios_per_sec": 0, 00:11:53.373 "rw_mbytes_per_sec": 0, 00:11:53.373 "r_mbytes_per_sec": 0, 00:11:53.373 "w_mbytes_per_sec": 0 00:11:53.373 }, 00:11:53.373 "claimed": true, 00:11:53.373 "claim_type": "exclusive_write", 00:11:53.373 "zoned": false, 00:11:53.373 "supported_io_types": { 00:11:53.373 "read": true, 00:11:53.373 "write": true, 00:11:53.373 "unmap": true, 00:11:53.373 "flush": true, 00:11:53.373 "reset": true, 00:11:53.373 "nvme_admin": false, 00:11:53.373 "nvme_io": false, 00:11:53.373 "nvme_io_md": false, 00:11:53.373 "write_zeroes": true, 00:11:53.373 "zcopy": true, 00:11:53.373 "get_zone_info": false, 00:11:53.373 "zone_management": false, 00:11:53.373 "zone_append": false, 00:11:53.373 "compare": false, 00:11:53.373 "compare_and_write": false, 00:11:53.373 "abort": true, 00:11:53.373 "seek_hole": false, 00:11:53.373 "seek_data": false, 00:11:53.373 "copy": true, 00:11:53.373 "nvme_iov_md": false 00:11:53.373 }, 00:11:53.373 "memory_domains": [ 00:11:53.373 { 00:11:53.373 "dma_device_id": "system", 00:11:53.373 "dma_device_type": 1 00:11:53.373 }, 00:11:53.373 { 00:11:53.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.373 "dma_device_type": 2 00:11:53.373 } 00:11:53.373 ], 00:11:53.373 "driver_specific": {} 00:11:53.373 } 00:11:53.373 ]' 00:11:53.373 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:53.373 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:53.373 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:53.632 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:53.632 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:53.632 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:53.632 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:53.632 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:54.202 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:54.202 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:54.202 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:54.202 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:54.202 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:56.112 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:56.112 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:56.112 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:56.112 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:56.112 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:56.112 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:56.112 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:56.112 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:56.373 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:56.373 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:56.373 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:56.373 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:56.373 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:56.373 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:56.373 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:56.373 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:56.373 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:56.373 18:22:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:57.308 18:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:58.244 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:58.244 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:58.244 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:58.244 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:58.244 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:58.244 ************************************ 00:11:58.244 START TEST filesystem_in_capsule_ext4 00:11:58.244 ************************************ 00:11:58.244 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:58.244 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:58.244 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:58.244 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:58.244 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:58.244 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:58.244 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:58.244 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:58.244 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:11:58.244 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:11:58.244 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:58.244 mke2fs 1.47.0 (5-Feb-2023) 00:11:58.244 Discarding device blocks: 0/522240 done 00:11:58.244 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:58.244 Filesystem UUID: 8e595582-2f17-4d96-b4aa-3dc8aa9c8d82 00:11:58.244 Superblock backups stored on blocks: 00:11:58.244 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:58.244 00:11:58.244 Allocating group tables: 0/64 done 00:11:58.244 Writing inode tables: 
0/64 done 00:11:58.504 Creating journal (8192 blocks): done 00:11:59.959 Writing superblocks and filesystem accounting information: 0/64 8/64 done 00:11:59.959 00:11:59.959 18:22:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:59.959 18:22:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:06.520 18:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:06.520 18:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:06.520 18:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:06.520 18:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:06.520 18:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:06.520 18:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:06.520 18:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1136378 00:12:06.521 18:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:06.521 18:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:06.521 18:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:06.521 18:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:06.521 00:12:06.521 real 0m7.749s 00:12:06.521 user 0m0.015s 00:12:06.521 sys 0m0.066s 00:12:06.521 18:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:06.521 18:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:06.521 ************************************ 00:12:06.521 END TEST filesystem_in_capsule_ext4 00:12:06.521 ************************************ 00:12:06.521 18:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:06.521 18:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:06.521 18:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:06.521 18:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:06.521 
************************************ 00:12:06.521 START TEST filesystem_in_capsule_btrfs 00:12:06.521 ************************************ 00:12:06.521 18:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:06.521 18:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:06.521 18:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:06.521 18:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:06.521 18:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:12:06.521 18:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:06.521 18:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:12:06.521 18:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:12:06.521 18:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:12:06.521 18:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:12:06.521 18:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:06.521 btrfs-progs v6.8.1 00:12:06.521 See https://btrfs.readthedocs.io for more information. 00:12:06.521 00:12:06.521 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:12:06.521 NOTE: several default settings have changed in version 5.15, please make sure 00:12:06.521 this does not affect your deployments: 00:12:06.521 - DUP for metadata (-m dup) 00:12:06.521 - enabled no-holes (-O no-holes) 00:12:06.521 - enabled free-space-tree (-R free-space-tree) 00:12:06.521 00:12:06.521 Label: (null) 00:12:06.521 UUID: e846ccab-fca9-415d-837d-3187c462e301 00:12:06.521 Node size: 16384 00:12:06.521 Sector size: 4096 (CPU page size: 4096) 00:12:06.521 Filesystem size: 510.00MiB 00:12:06.521 Block group profiles: 00:12:06.521 Data: single 8.00MiB 00:12:06.521 Metadata: DUP 32.00MiB 00:12:06.521 System: DUP 8.00MiB 00:12:06.521 SSD detected: yes 00:12:06.521 Zoned device: no 00:12:06.521 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:06.521 Checksum: crc32c 00:12:06.521 Number of devices: 1 00:12:06.521 Devices: 00:12:06.521 ID SIZE PATH 00:12:06.521 1 510.00MiB /dev/nvme0n1p1 00:12:06.521 00:12:06.521 18:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:12:06.521 18:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:06.521 18:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:06.521 18:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:06.521 18:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:06.521 18:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:06.521 18:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:06.521 18:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:06.521 18:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1136378 00:12:06.521 18:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:06.521 18:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:06.521 18:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:06.521 18:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:06.521 00:12:06.521 real 0m0.600s 00:12:06.521 user 0m0.017s 00:12:06.521 sys 0m0.117s 00:12:06.521 18:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:06.521 18:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:12:06.521 ************************************ 00:12:06.521 END TEST filesystem_in_capsule_btrfs 00:12:06.521 ************************************ 00:12:06.521 18:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:06.521 18:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:06.521 18:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:06.521 18:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:06.521 ************************************ 00:12:06.521 START TEST filesystem_in_capsule_xfs 00:12:06.521 ************************************ 00:12:06.521 18:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:12:06.521 18:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:06.521 18:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:06.521 18:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:06.521 18:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:12:06.521 18:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:06.521 18:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:12:06.521 18:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:12:06.521 18:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:12:06.521 18:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:12:06.521 18:22:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:06.780 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:06.780 = sectsz=512 attr=2, projid32bit=1 00:12:06.780 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:06.780 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:06.780 data = bsize=4096 blocks=130560, imaxpct=25 00:12:06.780 = sunit=0 swidth=0 blks 00:12:06.780 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:06.780 log =internal log bsize=4096 blocks=16384, version=2 00:12:06.780 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:06.780 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:07.715 Discarding blocks...Done. 
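
Every filesystem subtest in this log repeats the same pattern: pick the mkfs force flag by filesystem type, build the filesystem on /dev/nvme0n1p1, then mount it and run a touch/sync/rm/umount smoke test over the NVMe/TCP path while checking the target process is still alive. A compressed sketch of that pattern, reconstructed from the repeated xtrace (the real make_filesystem in common/autotest_common.sh also retries mkfs and tracks a loop counter, omitted here):

    # Compressed reconstruction of the repeated pattern in this trace; not the
    # actual helpers from common/autotest_common.sh or target/filesystem.sh.
    make_filesystem() {
        local fstype=$1 dev_name=$2 force
        # ext4 takes -F to overwrite an existing filesystem; btrfs and xfs take -f.
        if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
        "mkfs.$fstype" "$force" "$dev_name"
    }

    filesystem_smoke_test() {
        local fstype=$1 part=${2:-/dev/nvme0n1p1}
        make_filesystem "$fstype" "$part"
        mount "$part" /mnt/device
        touch /mnt/device/aaa   # write through the NVMe/TCP-backed namespace
        sync
        rm /mnt/device/aaa
        sync
        umount /mnt/device
    }

    # The log runs this for ext4, btrfs and xfs in turn, e.g.:
    # filesystem_smoke_test xfs /dev/nvme0n1p1
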
00:12:07.715 18:22:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:12:07.715 18:22:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:10.246 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:10.246 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:10.246 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:10.246 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:10.246 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:10.246 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:10.246 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1136378 00:12:10.246 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:10.246 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:10.246 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:10.246 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:10.246 00:12:10.246 real 0m3.423s 00:12:10.246 user 0m0.022s 00:12:10.246 sys 0m0.050s 00:12:10.246 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:10.246 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:10.246 ************************************ 00:12:10.246 END TEST filesystem_in_capsule_xfs 00:12:10.246 ************************************ 00:12:10.246 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:10.246 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:10.246 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:10.246 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.246 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:10.246 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1219 -- # local i=0 00:12:10.246 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:10.246 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:10.246 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:10.246 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:10.246 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:12:10.246 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:10.246 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.246 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:10.246 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.246 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:10.246 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1136378 00:12:10.246 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 1136378 ']' 00:12:10.246 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 1136378 00:12:10.246 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:12:10.246 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:10.246 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1136378 00:12:10.505 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:10.505 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:10.505 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1136378' 00:12:10.505 killing process with pid 1136378 00:12:10.505 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 1136378 00:12:10.505 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 1136378 00:12:11.071 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:11.071 00:12:11.071 real 0m18.377s 00:12:11.071 user 1m10.136s 00:12:11.071 sys 0m2.321s 00:12:11.071 18:22:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:11.071 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:11.071 ************************************ 00:12:11.071 END TEST nvmf_filesystem_in_capsule 00:12:11.071 ************************************ 00:12:11.071 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:11.071 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:11.071 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:12:11.071 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:11.071 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:12:11.071 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:11.071 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:11.071 rmmod nvme_tcp 00:12:11.071 rmmod nvme_fabrics 00:12:11.071 rmmod nvme_keyring 00:12:11.071 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:11.071 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:12:11.071 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:12:11.071 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:12:11.071 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:11.071 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:11.071 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:11.071 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:12:11.071 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # iptables-save 00:12:11.071 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:11.071 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # iptables-restore 00:12:11.071 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:11.071 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:11.071 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:11.071 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:11.071 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:13.608 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:13.608 00:12:13.608 real 0m40.195s 00:12:13.608 user 2m11.135s 00:12:13.608 sys 0m7.266s 00:12:13.608 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:13.608 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:13.608 
************************************ 00:12:13.608 END TEST nvmf_filesystem 00:12:13.608 ************************************ 00:12:13.608 18:22:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:13.608 18:22:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:13.608 18:22:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:13.608 18:22:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:13.608 ************************************ 00:12:13.608 START TEST nvmf_target_discovery 00:12:13.608 ************************************ 00:12:13.608 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:13.608 * Looking for test storage... 00:12:13.608 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:13.608 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:13.608 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:12:13.608 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:13.608 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:13.608 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:13.608 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:13.608 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:13.608 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:13.608 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:13.608 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:13.608 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:13.608 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:13.608 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:13.608 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:13.608 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:13.608 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:13.608 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:13.608 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:13.608 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:13.608 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:13.608 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:13.608 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:13.608 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:13.608 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:13.608 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:13.608 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:13.608 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:13.608 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:13.608 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:13.608 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:13.608 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:13.608 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:13.608 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:13.608 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:13.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.608 --rc genhtml_branch_coverage=1 00:12:13.608 --rc genhtml_function_coverage=1 00:12:13.608 --rc genhtml_legend=1 00:12:13.608 --rc geninfo_all_blocks=1 00:12:13.608 --rc geninfo_unexecuted_blocks=1 00:12:13.608 00:12:13.608 ' 00:12:13.608 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:13.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.608 --rc genhtml_branch_coverage=1 00:12:13.608 --rc genhtml_function_coverage=1 00:12:13.608 --rc genhtml_legend=1 00:12:13.608 --rc geninfo_all_blocks=1 00:12:13.608 --rc geninfo_unexecuted_blocks=1 00:12:13.608 00:12:13.608 ' 00:12:13.608 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:13.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.608 --rc genhtml_branch_coverage=1 00:12:13.608 --rc genhtml_function_coverage=1 00:12:13.608 --rc genhtml_legend=1 00:12:13.608 --rc geninfo_all_blocks=1 00:12:13.608 --rc geninfo_unexecuted_blocks=1 00:12:13.608 00:12:13.608 ' 00:12:13.608 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:13.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.608 --rc genhtml_branch_coverage=1 00:12:13.608 --rc genhtml_function_coverage=1 00:12:13.608 --rc genhtml_legend=1 00:12:13.608 --rc geninfo_all_blocks=1 00:12:13.608 --rc geninfo_unexecuted_blocks=1 00:12:13.608 00:12:13.608 ' 00:12:13.608 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:13.608 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:13.608 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:13.608 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:13.608 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:13.608 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:13.608 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:13.608 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:13.608 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:13.608 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:13.608 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:13.608 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:13.608 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:13.609 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:13.609 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:13.609 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:13.609 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:13.609 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:13.609 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:13.609 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:13.609 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:13.609 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:13.609 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:13.609 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.609 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.609 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.609 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:13.609 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.609 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:13.609 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:13.609 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:13.609 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:13.609 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:13.609 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:13.609 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:13.609 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:13.609 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:13.609 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:13.609 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:13.609 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:12:13.609 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:13.609 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:13.609 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:13.609 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:13.609 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:13.609 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:13.609 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:13.609 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:13.609 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:13.609 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:13.609 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:13.609 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:13.609 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:13.609 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:13.609 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:12:13.609 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.899 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:16.899 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:16.899 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:16.899 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:16.899 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:16.899 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:16.899 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:16.899 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:16.899 18:22:44 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:16.899 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:16.899 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:16.899 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:16.899 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:16.899 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:16.899 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:16.899 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:16.899 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:16.899 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:16.899 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:16.899 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:16.899 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:16.899 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:16.899 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:16.899 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:16.899 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:16.899 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:16.899 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:16.899 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:16.899 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:16.899 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:16.899 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:16.899 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:16.899 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:16.899 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:16.899 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:12:16.899 Found 0000:84:00.0 (0x8086 - 0x159b) 00:12:16.899 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:16.899 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:16.899 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:16.899 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:16.899 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:16.899 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:12:16.900 Found 0000:84:00.1 (0x8086 - 0x159b) 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:12:16.900 Found net devices under 0000:84:00.0: cvl_0_0 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 
00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:12:16.900 Found net devices under 0000:84:00.1: cvl_0_1 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # is_hw=yes 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:16.900 18:22:44 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:16.900 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:16.900 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.297 ms 00:12:16.900 00:12:16.900 --- 10.0.0.2 ping statistics --- 00:12:16.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:16.900 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:16.900 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:16.900 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:12:16.900 00:12:16.900 --- 10.0.0.1 ping statistics --- 00:12:16.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:16.900 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # return 0 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # nvmfpid=1140755 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # waitforlisten 1140755 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 1140755 ']' 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:16.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:16.900 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.900 [2024-10-08 18:22:45.015168] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:12:16.900 [2024-10-08 18:22:45.015254] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:16.900 [2024-10-08 18:22:45.127574] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:16.900 [2024-10-08 18:22:45.347808] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:16.900 [2024-10-08 18:22:45.347932] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:16.900 [2024-10-08 18:22:45.347969] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:16.900 [2024-10-08 18:22:45.347999] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:16.900 [2024-10-08 18:22:45.348027] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:16.900 [2024-10-08 18:22:45.351812] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:12:16.900 [2024-10-08 18:22:45.351913] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:12:16.900 [2024-10-08 18:22:45.352006] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:12:16.900 [2024-10-08 18:22:45.352009] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.196 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:17.197 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:12:17.197 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:17.197 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:17.197 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:17.197 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:17.197 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:17.197 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.197 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:17.197 [2024-10-08 18:22:45.634521] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:17.197 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.197 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:17.197 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:17.197 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:17.197 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.197 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:17.197 Null1 00:12:17.197 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.197 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:17.197 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.197 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:17.197 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.197 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:17.197 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.197 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:17.197 18:22:45 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.197 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:17.197 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.197 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:17.197 [2024-10-08 18:22:45.674856] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:17.197 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.197 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:17.197 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:17.197 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.197 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:17.197 Null2 00:12:17.197 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.197 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:17.197 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.197 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:17.197 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.197 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:17.197 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.197 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:17.197 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.197 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:17.197 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.197 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:17.197 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.197 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:17.197 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:17.197 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.197 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:12:17.480 Null3 00:12:17.480 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.480 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:17.480 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.480 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:17.480 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.480 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:17.480 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.480 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:17.480 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.480 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:17.480 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.480 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:17.480 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.480 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:17.480 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:17.480 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.480 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:17.480 Null4 00:12:17.480 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.480 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:17.480 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.480 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:17.480 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.480 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:17.480 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.480 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:17.480 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.480 18:22:45 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:17.480 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.480 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:17.480 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.480 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:17.480 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.480 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:17.480 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.480 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:17.480 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.480 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:17.480 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.480 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 4420 00:12:17.480 00:12:17.480 Discovery Log Number of Records 6, Generation counter 6 00:12:17.480 =====Discovery Log Entry 0====== 00:12:17.480 trtype: tcp 00:12:17.480 adrfam: ipv4 00:12:17.480 subtype: current discovery subsystem 00:12:17.480 treq: not required 00:12:17.480 portid: 0 00:12:17.480 trsvcid: 4420 00:12:17.480 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:17.480 traddr: 10.0.0.2 00:12:17.480 eflags: explicit discovery connections, duplicate discovery information 00:12:17.480 sectype: none 00:12:17.480 =====Discovery Log Entry 1====== 00:12:17.480 trtype: tcp 00:12:17.480 adrfam: ipv4 00:12:17.480 subtype: nvme subsystem 00:12:17.480 treq: not required 00:12:17.480 portid: 0 00:12:17.480 trsvcid: 4420 00:12:17.480 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:17.480 traddr: 10.0.0.2 00:12:17.480 eflags: none 00:12:17.480 sectype: none 00:12:17.481 =====Discovery Log Entry 2====== 00:12:17.481 trtype: tcp 00:12:17.481 adrfam: ipv4 00:12:17.481 subtype: nvme subsystem 00:12:17.481 treq: not required 00:12:17.481 portid: 0 00:12:17.481 trsvcid: 4420 00:12:17.481 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:17.481 traddr: 10.0.0.2 00:12:17.481 eflags: none 00:12:17.481 sectype: none 00:12:17.481 =====Discovery Log Entry 3====== 00:12:17.481 trtype: tcp 00:12:17.481 adrfam: ipv4 00:12:17.481 subtype: nvme subsystem 00:12:17.481 treq: not required 00:12:17.481 portid: 0 00:12:17.481 trsvcid: 4420 00:12:17.481 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:17.481 traddr: 10.0.0.2 00:12:17.481 eflags: none 00:12:17.481 sectype: none 00:12:17.481 =====Discovery Log Entry 4====== 00:12:17.481 trtype: tcp 00:12:17.481 adrfam: ipv4 00:12:17.481 subtype: nvme subsystem 
00:12:17.481 treq: not required 00:12:17.481 portid: 0 00:12:17.481 trsvcid: 4420 00:12:17.481 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:17.481 traddr: 10.0.0.2 00:12:17.481 eflags: none 00:12:17.481 sectype: none 00:12:17.481 =====Discovery Log Entry 5====== 00:12:17.481 trtype: tcp 00:12:17.481 adrfam: ipv4 00:12:17.481 subtype: discovery subsystem referral 00:12:17.481 treq: not required 00:12:17.481 portid: 0 00:12:17.481 trsvcid: 4430 00:12:17.481 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:17.481 traddr: 10.0.0.2 00:12:17.481 eflags: none 00:12:17.481 sectype: none 00:12:17.481 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:17.481 Perform nvmf subsystem discovery via RPC 00:12:17.481 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:17.481 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.481 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:17.481 [ 00:12:17.481 { 00:12:17.481 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:17.481 "subtype": "Discovery", 00:12:17.481 "listen_addresses": [ 00:12:17.481 { 00:12:17.481 "trtype": "TCP", 00:12:17.481 "adrfam": "IPv4", 00:12:17.481 "traddr": "10.0.0.2", 00:12:17.481 "trsvcid": "4420" 00:12:17.481 } 00:12:17.481 ], 00:12:17.481 "allow_any_host": true, 00:12:17.481 "hosts": [] 00:12:17.481 }, 00:12:17.481 { 00:12:17.481 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:17.481 "subtype": "NVMe", 00:12:17.481 "listen_addresses": [ 00:12:17.481 { 00:12:17.481 "trtype": "TCP", 00:12:17.481 "adrfam": "IPv4", 00:12:17.481 "traddr": "10.0.0.2", 00:12:17.481 "trsvcid": "4420" 00:12:17.481 } 00:12:17.481 ], 00:12:17.481 "allow_any_host": true, 00:12:17.481 "hosts": [], 00:12:17.481 "serial_number": "SPDK00000000000001", 00:12:17.481 "model_number": "SPDK bdev Controller", 00:12:17.481 "max_namespaces": 32, 00:12:17.481 "min_cntlid": 1, 00:12:17.481 "max_cntlid": 65519, 00:12:17.481 "namespaces": [ 00:12:17.481 { 00:12:17.481 "nsid": 1, 00:12:17.481 "bdev_name": "Null1", 00:12:17.481 "name": "Null1", 00:12:17.481 "nguid": "412BC09F6CDD47B0B713B4D97C370DA6", 00:12:17.481 "uuid": "412bc09f-6cdd-47b0-b713-b4d97c370da6" 00:12:17.481 } 00:12:17.481 ] 00:12:17.481 }, 00:12:17.481 { 00:12:17.481 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:17.481 "subtype": "NVMe", 00:12:17.481 "listen_addresses": [ 00:12:17.481 { 00:12:17.481 "trtype": "TCP", 00:12:17.481 "adrfam": "IPv4", 00:12:17.481 "traddr": "10.0.0.2", 00:12:17.481 "trsvcid": "4420" 00:12:17.481 } 00:12:17.481 ], 00:12:17.481 "allow_any_host": true, 00:12:17.481 "hosts": [], 00:12:17.481 "serial_number": "SPDK00000000000002", 00:12:17.481 "model_number": "SPDK bdev Controller", 00:12:17.481 "max_namespaces": 32, 00:12:17.481 "min_cntlid": 1, 00:12:17.481 "max_cntlid": 65519, 00:12:17.481 "namespaces": [ 00:12:17.481 { 00:12:17.481 "nsid": 1, 00:12:17.481 "bdev_name": "Null2", 00:12:17.481 "name": "Null2", 00:12:17.481 "nguid": "BB6B60B62CC74527B24D525840DEAEAC", 00:12:17.481 "uuid": "bb6b60b6-2cc7-4527-b24d-525840deaeac" 00:12:17.481 } 00:12:17.481 ] 00:12:17.481 }, 00:12:17.481 { 00:12:17.481 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:17.481 "subtype": "NVMe", 00:12:17.481 "listen_addresses": [ 00:12:17.481 { 00:12:17.481 "trtype": "TCP", 00:12:17.481 "adrfam": "IPv4", 00:12:17.481 "traddr": "10.0.0.2", 
00:12:17.481 "trsvcid": "4420" 00:12:17.481 } 00:12:17.481 ], 00:12:17.481 "allow_any_host": true, 00:12:17.481 "hosts": [], 00:12:17.481 "serial_number": "SPDK00000000000003", 00:12:17.481 "model_number": "SPDK bdev Controller", 00:12:17.481 "max_namespaces": 32, 00:12:17.481 "min_cntlid": 1, 00:12:17.481 "max_cntlid": 65519, 00:12:17.481 "namespaces": [ 00:12:17.481 { 00:12:17.481 "nsid": 1, 00:12:17.481 "bdev_name": "Null3", 00:12:17.481 "name": "Null3", 00:12:17.481 "nguid": "0ACD77992BE848699E56F84572C3C46F", 00:12:17.481 "uuid": "0acd7799-2be8-4869-9e56-f84572c3c46f" 00:12:17.481 } 00:12:17.481 ] 00:12:17.481 }, 00:12:17.481 { 00:12:17.481 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:17.481 "subtype": "NVMe", 00:12:17.481 "listen_addresses": [ 00:12:17.481 { 00:12:17.481 "trtype": "TCP", 00:12:17.481 "adrfam": "IPv4", 00:12:17.481 "traddr": "10.0.0.2", 00:12:17.481 "trsvcid": "4420" 00:12:17.481 } 00:12:17.481 ], 00:12:17.481 "allow_any_host": true, 00:12:17.481 "hosts": [], 00:12:17.481 "serial_number": "SPDK00000000000004", 00:12:17.481 "model_number": "SPDK bdev Controller", 00:12:17.481 "max_namespaces": 32, 00:12:17.481 "min_cntlid": 1, 00:12:17.481 "max_cntlid": 65519, 00:12:17.481 "namespaces": [ 00:12:17.481 { 00:12:17.481 "nsid": 1, 00:12:17.481 "bdev_name": "Null4", 00:12:17.481 "name": "Null4", 00:12:17.481 "nguid": "AD81BBE141F142EF956A712D297F3FEA", 00:12:17.481 "uuid": "ad81bbe1-41f1-42ef-956a-712d297f3fea" 00:12:17.481 } 00:12:17.481 ] 00:12:17.481 } 00:12:17.481 ] 00:12:17.481 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.481 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:17.481 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:17.481 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:17.481 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.481 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:17.481 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.481 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:17.481 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.481 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:17.481 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.481 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:17.481 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:17.481 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.481 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:17.481 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.481 18:22:46 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:17.481 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.481 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:17.740 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.740 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:17.740 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:17.740 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.740 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:17.740 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.740 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:17.740 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.740 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:17.740 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.740 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:17.740 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:17.740 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.740 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:17.740 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.740 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:17.740 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.740 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:17.740 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.740 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:17.740 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.740 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:17.740 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.740 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:17.740 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:17.740 18:22:46 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.740 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:17.740 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.740 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:17.740 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:17.740 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:17.741 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:17.741 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:17.741 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:17.741 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:17.741 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:17.741 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:17.741 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:17.741 rmmod nvme_tcp 00:12:17.741 rmmod nvme_fabrics 00:12:17.741 rmmod nvme_keyring 00:12:17.741 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:17.741 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:17.741 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:17.741 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@515 -- # '[' -n 1140755 ']' 00:12:17.741 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # killprocess 1140755 00:12:17.741 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 1140755 ']' 00:12:17.741 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 1140755 00:12:17.741 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:12:17.741 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:17.741 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1140755 00:12:17.741 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:17.741 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:17.741 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1140755' 00:12:17.741 killing process with pid 1140755 00:12:17.741 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 1140755 00:12:17.741 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 1140755 00:12:18.309 18:22:46 
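The teardown traced above boils down to a handful of rpc.py calls; condensed into a sketch (same assumptions as before), target/discovery.sh removes each subsystem, its backing null bdev, and the 4430 referral, then confirms no bdevs remain:

for i in $(seq 1 4); do
  ./scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"    # drop the subsystem first
  ./scripts/rpc.py bdev_null_delete "Null${i}"                              # then its null bdev
done
./scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430  # referral added during setup
./scripts/rpc.py bdev_get_bdevs | jq -r '.[].name'                          # expected to print nothing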
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:18.309 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:18.309 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:18.309 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:18.309 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # iptables-save 00:12:18.309 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:18.309 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:12:18.309 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:18.309 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:18.309 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:18.309 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:18.309 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:20.215 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:20.215 00:12:20.215 real 0m6.988s 00:12:20.215 user 0m5.914s 00:12:20.215 sys 0m2.732s 00:12:20.215 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:20.215 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.215 ************************************ 00:12:20.215 END TEST nvmf_target_discovery 00:12:20.215 ************************************ 00:12:20.215 18:22:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:20.215 18:22:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:20.215 18:22:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:20.215 18:22:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:20.215 ************************************ 00:12:20.215 START TEST nvmf_referrals 00:12:20.215 ************************************ 00:12:20.215 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:20.474 * Looking for test storage... 
00:12:20.474 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:20.474 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:20.474 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lcov --version 00:12:20.474 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:20.474 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:20.474 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:20.474 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:20.474 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:20.474 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:20.474 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:20.474 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:20.474 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:20.474 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:20.474 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:20.474 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:20.474 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:20.474 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:20.474 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:20.474 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:20.474 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:20.474 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:20.474 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:20.474 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:20.474 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:20.474 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:20.474 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:20.474 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:20.474 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:20.474 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:20.474 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:20.474 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:20.474 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:20.474 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:20.474 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:20.474 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:20.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.474 --rc genhtml_branch_coverage=1 00:12:20.474 --rc genhtml_function_coverage=1 00:12:20.474 --rc genhtml_legend=1 00:12:20.474 --rc geninfo_all_blocks=1 00:12:20.474 --rc geninfo_unexecuted_blocks=1 00:12:20.474 00:12:20.474 ' 00:12:20.474 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:20.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.474 --rc genhtml_branch_coverage=1 00:12:20.474 --rc genhtml_function_coverage=1 00:12:20.474 --rc genhtml_legend=1 00:12:20.474 --rc geninfo_all_blocks=1 00:12:20.474 --rc geninfo_unexecuted_blocks=1 00:12:20.474 00:12:20.474 ' 00:12:20.474 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:20.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.475 --rc genhtml_branch_coverage=1 00:12:20.475 --rc genhtml_function_coverage=1 00:12:20.475 --rc genhtml_legend=1 00:12:20.475 --rc geninfo_all_blocks=1 00:12:20.475 --rc geninfo_unexecuted_blocks=1 00:12:20.475 00:12:20.475 ' 00:12:20.475 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:20.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.475 --rc genhtml_branch_coverage=1 00:12:20.475 --rc genhtml_function_coverage=1 00:12:20.475 --rc genhtml_legend=1 00:12:20.475 --rc geninfo_all_blocks=1 00:12:20.475 --rc geninfo_unexecuted_blocks=1 00:12:20.475 00:12:20.475 ' 00:12:20.475 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:20.475 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:12:20.475 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:20.475 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:20.475 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:20.475 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:20.475 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:20.475 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:20.475 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:20.475 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:20.475 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:20.732 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:20.732 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:20.732 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:20.732 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:20.732 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:20.732 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:20.732 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:20.732 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:20.732 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:12:20.732 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:20.733 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:20.733 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:20.733 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.733 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.733 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.733 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:20.733 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.733 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:20.733 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:20.733 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:20.733 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:20.733 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:20.733 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:20.733 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:20.733 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:20.733 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:20.733 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:20.733 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:20.733 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:20.733 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:12:20.733 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:20.733 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:20.733 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:20.733 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:20.733 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:20.733 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:20.733 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:20.733 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:20.733 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:20.733 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:20.733 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:20.733 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:20.733 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:20.733 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:20.733 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:20.733 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:20.733 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:23.296 18:22:51 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:12:23.296 Found 0000:84:00.0 (0x8086 - 0x159b) 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:12:23.296 Found 0000:84:00.1 (0x8086 - 0x159b) 00:12:23.296 
18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:12:23.296 Found net devices under 0000:84:00.0: cvl_0_0 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:12:23.296 Found net devices under 0000:84:00.1: cvl_0_1 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # is_hw=yes 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:23.296 18:22:51 
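The "Found net devices under ..." lines come from a plain sysfs lookup that maps each detected E810 PCI function to its kernel netdev; a sketch of the same check done manually:

pci=0000:84:00.0
ls "/sys/bus/pci/devices/${pci}/net/"    # prints the bound interface name, cvl_0_0 on this host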
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:23.296 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:23.556 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:23.556 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:23.556 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:23.556 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:23.556 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:23.556 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.304 ms 00:12:23.556 00:12:23.556 --- 10.0.0.2 ping statistics --- 00:12:23.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:23.556 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:12:23.556 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:23.556 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:23.556 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:12:23.556 00:12:23.556 --- 10.0.0.1 ping statistics --- 00:12:23.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:23.556 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:12:23.556 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:23.556 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # return 0 00:12:23.556 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:23.556 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:23.556 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:23.556 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:23.556 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:23.556 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:23.556 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:23.556 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:23.557 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:23.557 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:23.557 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:23.557 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # nvmfpid=1143001 00:12:23.557 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:23.557 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # waitforlisten 1143001 00:12:23.557 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 1143001 ']' 00:12:23.557 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:23.557 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:23.557 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:23.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
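The nvmf_tcp_init trace above reduces to a short iproute2/iptables sequence: one E810 port is moved into a private namespace for the target, the peer port stays in the default namespace for the initiator, and a one-packet ping in each direction proves 10.0.0.1 <-> 10.0.0.2 connectivity. A condensed sketch of those commands (not a literal transcript of common.sh):

ip netns add cvl_0_0_ns_spdk                                   # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator port, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP back in on the initiator port
ping -c 1 10.0.0.2                                             # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator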
00:12:23.557 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:23.557 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:23.557 [2024-10-08 18:22:51.990756] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:12:23.557 [2024-10-08 18:22:51.990864] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:23.816 [2024-10-08 18:22:52.104567] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:23.816 [2024-10-08 18:22:52.314473] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:23.816 [2024-10-08 18:22:52.314597] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:23.816 [2024-10-08 18:22:52.314633] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:23.816 [2024-10-08 18:22:52.314682] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:23.816 [2024-10-08 18:22:52.314712] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:23.816 [2024-10-08 18:22:52.318803] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:12:23.816 [2024-10-08 18:22:52.318871] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:12:23.816 [2024-10-08 18:22:52.318969] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:12:23.816 [2024-10-08 18:22:52.318972] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.752 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:24.752 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:12:24.752 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:24.752 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:24.752 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:24.752 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:24.752 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:24.752 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.752 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:24.752 [2024-10-08 18:22:53.147426] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:24.752 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.752 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:24.752 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.752 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
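nvmfappstart then launches the target inside that namespace on four cores and waits for its RPC socket before configuring the TCP transport. A sketch of the equivalent manual steps (the socket-polling loop is a stand-in for the waitforlisten helper, not its actual implementation):

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # -m 0xF: reactors on cores 0-3
nvmfpid=$!
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done                          # wait for the RPC socket
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                     # same transport flags as the trace above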
00:12:24.752 [2024-10-08 18:22:53.159635] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:24.752 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.752 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:24.752 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.752 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:24.752 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.752 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:24.752 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.752 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:24.752 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.752 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:24.752 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.752 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:24.752 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.752 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:24.752 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:24.752 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.752 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:24.752 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.752 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:24.752 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:24.752 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:24.752 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:24.752 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:24.752 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.752 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:24.753 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:24.753 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.012 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:25.012 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:25.012 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:25.012 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:25.012 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:25.012 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:25.012 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:25.012 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:25.012 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:25.012 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:25.012 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:25.012 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.012 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.012 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.012 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:25.012 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.012 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.012 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.012 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:25.012 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.012 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.012 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.271 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:25.271 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:25.271 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.271 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.271 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.271 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:25.271 18:22:53 
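The referral round-trip exercised so far: listen for discovery on 10.0.0.2:8009, add three referrals pointing at 127.0.0.2-4:4430, confirm the count and addresses both through the RPC and through the discovery log page the initiator sees, then remove them again. A sketch of the same flow with plain rpc.py and nvme-cli calls (the test also passes --hostnqn/--hostid to nvme discover; omitted here):

./scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
  ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
done
./scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
# the initiator must report the same three addresses in its discovery log page
nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
  | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
  ./scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
done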
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:25.271 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:25.271 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:25.271 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:25.271 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:25.271 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:25.531 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:25.531 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:25.531 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:25.531 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.531 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.531 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.531 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:25.531 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.531 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.531 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.531 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:25.531 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:25.531 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:25.531 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.531 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:25.531 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.531 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:25.531 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.531 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:25.531 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:25.531 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:25.531 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:12:25.531 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:25.531 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:25.531 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:25.531 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:25.790 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:25.790 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:25.790 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:25.790 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:25.790 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:25.790 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:25.790 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:25.790 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:25.790 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:25.790 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:25.790 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:25.790 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:25.790 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:26.049 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:26.049 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:26.049 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.049 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.049 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.049 18:22:54 
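The checks just traced add referrals that carry an explicit subsystem NQN via -n: one for the discovery subsystem and one for nqn.2016-06.io.spdk:cnode1, both at 127.0.0.2:4430. The discovery log page then shows one "nvme subsystem" record and one "discovery subsystem referral" record, after which the cnode1 entry is removed (the re-check of the remaining entry follows below in the trace). A sketch of those calls:

./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery
./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
  | jq '.records[] | {subtype, subnqn, traddr}'      # one record per referral subtype
./scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1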
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:26.049 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:26.049 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:26.049 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:26.049 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:26.049 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.049 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.049 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.049 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:26.049 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:26.049 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:26.049 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:26.049 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:26.050 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:26.050 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:26.050 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:26.309 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:26.309 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:26.309 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:26.309 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:26.309 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:26.309 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:26.309 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:26.568 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:26.568 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:26.568 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:26.568 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:12:26.568 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:26.568 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:26.568 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:26.568 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:26.568 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.568 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.568 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.568 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:26.568 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:26.568 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.568 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.568 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.828 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:26.828 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:26.828 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:26.828 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:26.828 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:26.828 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:26.828 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:26.828 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:26.828 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:26.828 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:26.828 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:26.828 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:26.828 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:26.828 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
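Condensed, the referrals.sh flow traced above is: add two referrals over RPC (one to the generic discovery NQN and one to nqn.2016-06.io.spdk:cnode1, both pointing at 127.0.0.2:4430), confirm that the target-side nvmf_discovery_get_referrals output and the host-side discovery log page agree, then remove both referrals and confirm the list is empty again. A minimal standalone sketch of that round-trip, assuming a target with a discovery listener on 10.0.0.2:8009 and scripts/rpc.py reachable as rpc.py (rpc_cmd in the trace is a thin wrapper around it); the --hostnqn/--hostid arguments used in the log are omitted here for brevity:

    #!/usr/bin/env bash
    # Sketch of the referral add/verify/remove cycle traced above.
    rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery
    rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1

    # Target-side view of the referral list.
    rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

    # Host-side view via the discovery log page: referral entries are every
    # record that is not the discovery subsystem we are currently talking to.
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

    # Tear the referrals back down and check the list is empty again.
    rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
    rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery
    rpc.py nvmf_discovery_get_referrals | jq length   # expected: 0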
00:12:26.828 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:12:26.828 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:26.828 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:26.828 rmmod nvme_tcp 00:12:26.828 rmmod nvme_fabrics 00:12:26.828 rmmod nvme_keyring 00:12:27.088 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:27.088 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:27.088 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:27.088 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@515 -- # '[' -n 1143001 ']' 00:12:27.088 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # killprocess 1143001 00:12:27.088 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 1143001 ']' 00:12:27.088 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 1143001 00:12:27.088 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:12:27.088 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:27.088 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1143001 00:12:27.088 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:27.088 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:27.088 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1143001' 00:12:27.088 killing process with pid 1143001 00:12:27.088 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 1143001 00:12:27.088 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 1143001 00:12:27.346 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:27.346 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:27.346 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:27.346 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:27.346 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # iptables-save 00:12:27.346 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:27.346 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # iptables-restore 00:12:27.346 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:27.346 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:27.346 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:27.346 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:27.346 18:22:55 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:29.912 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:29.912 00:12:29.912 real 0m9.182s 00:12:29.912 user 0m15.809s 00:12:29.912 sys 0m3.190s 00:12:29.912 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:29.912 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:29.912 ************************************ 00:12:29.912 END TEST nvmf_referrals 00:12:29.912 ************************************ 00:12:29.912 18:22:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:29.912 18:22:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:29.912 18:22:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:29.912 18:22:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:29.912 ************************************ 00:12:29.912 START TEST nvmf_connect_disconnect 00:12:29.912 ************************************ 00:12:29.912 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:29.912 * Looking for test storage... 00:12:29.912 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:29.912 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:29.912 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lcov --version 00:12:29.912 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:29.912 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:29.912 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:29.912 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:29.912 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:29.912 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:29.912 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:29.912 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:29.912 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:29.912 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:29.912 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:29.912 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:29.912 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:29.912 18:22:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:12:29.912 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:29.912 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:29.912 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:29.912 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:29.912 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:29.912 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:29.912 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:29.912 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:29.912 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:29.912 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:29.913 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:29.913 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:29.913 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:29.913 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:29.913 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:29.913 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:29.913 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:29.913 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:29.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.913 --rc genhtml_branch_coverage=1 00:12:29.913 --rc genhtml_function_coverage=1 00:12:29.913 --rc genhtml_legend=1 00:12:29.913 --rc geninfo_all_blocks=1 00:12:29.913 --rc geninfo_unexecuted_blocks=1 00:12:29.913 00:12:29.913 ' 00:12:29.913 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:29.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.913 --rc genhtml_branch_coverage=1 00:12:29.913 --rc genhtml_function_coverage=1 00:12:29.913 --rc genhtml_legend=1 00:12:29.913 --rc geninfo_all_blocks=1 00:12:29.913 --rc geninfo_unexecuted_blocks=1 00:12:29.913 00:12:29.913 ' 00:12:29.913 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:29.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.913 --rc genhtml_branch_coverage=1 00:12:29.913 --rc genhtml_function_coverage=1 00:12:29.913 --rc genhtml_legend=1 00:12:29.913 --rc geninfo_all_blocks=1 00:12:29.913 --rc geninfo_unexecuted_blocks=1 00:12:29.913 00:12:29.913 ' 00:12:29.913 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:29.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.913 --rc genhtml_branch_coverage=1 00:12:29.913 --rc genhtml_function_coverage=1 00:12:29.913 --rc genhtml_legend=1 00:12:29.913 --rc geninfo_all_blocks=1 00:12:29.913 --rc geninfo_unexecuted_blocks=1 00:12:29.913 00:12:29.913 ' 00:12:29.913 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:29.913 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:29.913 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:29.913 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:29.913 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:29.913 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:29.913 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:29.913 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:29.913 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:29.913 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:29.913 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:29.913 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:29.913 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:29.913 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:29.913 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:29.913 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:29.913 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:29.913 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:29.913 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:29.913 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:29.913 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:29.913 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:29.913 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:29.913 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.913 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.913 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.913 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:29.913 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.913 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:29.913 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:29.913 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:29.913 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:29.914 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:29.914 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:29.914 18:22:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:29.914 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:29.914 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:29.914 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:29.914 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:29.914 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:29.914 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:29.914 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:29.914 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:29.914 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:29.914 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:29.914 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:29.914 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:29.914 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:29.914 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:29.914 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:29.914 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:29.914 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:29.914 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:29.914 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:33.203 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:33.203 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:33.203 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:33.203 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:33.203 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:33.203 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:33.203 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:33.203 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:33.203 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:33.203 
18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:33.203 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:33.203 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:33.203 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:33.203 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:33.203 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:33.203 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:33.203 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:33.203 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:33.203 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:33.203 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:33.203 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:33.203 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:33.203 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:33.203 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:33.203 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:33.203 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:33.203 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:33.203 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:33.203 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:33.203 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:33.203 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:33.203 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:33.203 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:33.203 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:33.203 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:12:33.203 Found 0000:84:00.0 (0x8086 - 0x159b) 00:12:33.203 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:33.203 
18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:33.203 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:33.203 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:33.203 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:12:33.204 Found 0000:84:00.1 (0x8086 - 0x159b) 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:12:33.204 Found net devices under 0000:84:00.0: cvl_0_0 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 
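Underneath the array bookkeeping, the gather_supported_nvmf_pci_devs section above is a sysfs walk: because SPDK_TEST_NVMF_NICS=e810, only the Intel E810 device IDs (vendor 0x8086 with device 0x159b or 0x1592) are kept, each matching PCI function is reported as "Found 0000:84:00.x", and its kernel interface is resolved through /sys/bus/pci/devices/<bdf>/net. A simplified re-implementation of that detection loop, not the script's actual code, with the device IDs seen in this run hard-coded:

    #!/usr/bin/env bash
    # Simplified sketch of the E810 detection traced above: find Intel (0x8086)
    # functions with the 0x159b/0x1592 device IDs and list their net interfaces.
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(cat "$pci/vendor")     # e.g. 0x8086
        device=$(cat "$pci/device")     # e.g. 0x159b
        [[ $vendor == 0x8086 ]] || continue
        [[ $device == 0x159b || $device == 0x1592 ]] || continue
        echo "Found ${pci##*/} ($vendor - $device)"
        for net in "$pci"/net/*; do
            [[ -e $net ]] && echo "  net device: ${net##*/}"   # e.g. cvl_0_0
        done
    done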
00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:12:33.204 Found net devices under 0000:84:00.1: cvl_0_1 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:33.204 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:33.204 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.299 ms 00:12:33.204 00:12:33.204 --- 10.0.0.2 ping statistics --- 00:12:33.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:33.204 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:33.204 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:33.204 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:12:33.204 00:12:33.204 --- 10.0.0.1 ping statistics --- 00:12:33.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:33.204 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # return 0 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # nvmfpid=1145584 00:12:33.204 18:23:01 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # waitforlisten 1145584 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 1145584 ']' 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:33.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:33.204 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:33.204 [2024-10-08 18:23:01.637500] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:12:33.204 [2024-10-08 18:23:01.637686] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:33.463 [2024-10-08 18:23:01.800341] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:33.721 [2024-10-08 18:23:02.020337] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:33.721 [2024-10-08 18:23:02.020448] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:33.721 [2024-10-08 18:23:02.020499] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:33.721 [2024-10-08 18:23:02.020532] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:33.721 [2024-10-08 18:23:02.020560] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
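nvmf_tcp_init and nvmfappstart, traced just above, build the usual two-port loopback topology for phy runs: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), an iptables rule opens the NVMe/TCP port on the initiator interface, both directions are ping-checked, and nvmf_tgt is then started inside the namespace. The same bring-up, condensed from the commands in the trace (interface names, the 0xF core mask and the relative nvmf_tgt path are specific to this run):

    # Target-side namespace and addressing, as run by nvmf_tcp_init above.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Let initiator traffic reach the listener that will be created on 4420.
    # (The harness additionally tags this rule with an SPDK_NVMF comment so the
    #  teardown's iptables-save | grep -v SPDK_NVMF | iptables-restore drops it.)
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # Sanity pings in both directions, then start the target in the namespace.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &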
00:12:33.721 [2024-10-08 18:23:02.024275] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:12:33.721 [2024-10-08 18:23:02.024380] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:12:33.721 [2024-10-08 18:23:02.024472] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:12:33.721 [2024-10-08 18:23:02.024476] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:34.288 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:34.288 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:12:34.288 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:34.288 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:34.288 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:34.288 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:34.288 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:34.288 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.288 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:34.288 [2024-10-08 18:23:02.700405] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:34.288 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.288 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:34.288 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.288 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:34.288 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.288 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:34.288 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:34.288 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.288 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:34.288 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.288 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:34.288 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.288 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:34.288 18:23:02 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.288 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:34.288 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.288 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:34.288 [2024-10-08 18:23:02.762594] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:34.288 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.288 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:12:34.288 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:12:34.288 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:37.573 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.112 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.400 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:45.940 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:48.478 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:48.478 18:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:48.478 18:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:48.478 18:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:48.478 18:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:12:48.478 18:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:48.478 18:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:12:48.478 18:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:48.478 18:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:48.478 rmmod nvme_tcp 00:12:48.478 rmmod nvme_fabrics 00:12:48.478 rmmod nvme_keyring 00:12:48.478 18:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:48.478 18:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:12:48.478 18:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:12:48.478 18:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@515 -- # '[' -n 1145584 ']' 00:12:48.478 18:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # killprocess 1145584 00:12:48.478 18:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 1145584 ']' 00:12:48.478 18:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 1145584 00:12:48.478 18:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 
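connect_disconnect.sh itself is short: create the TCP transport, back nqn.2016-06.io.spdk:cnode1 with a 64 MiB, 512-byte-block malloc bdev, listen on 10.0.0.2:4420, then connect and disconnect a host controller num_iterations=5 times, which is what produces the five "NQN:... disconnected 1 controller(s)" lines above. The loop body runs under set +x and is therefore not traced; the sketch below reconstructs the likely shape of one iteration from that output rather than quoting the script, using only the RPCs visible in the log plus standard nvme-cli connect/disconnect calls:

    # Subsystem setup, mirroring the RPCs in the trace above.
    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    rpc.py bdev_malloc_create 64 512       # 64 MiB bdev, 512 B blocks -> "Malloc0"
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Connect/disconnect loop; each disconnect prints the
    # "NQN:... disconnected 1 controller(s)" lines seen in the log.
    for i in $(seq 1 5); do
        nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
        sleep 1
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    done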
00:12:48.478 18:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:48.478 18:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1145584 00:12:48.478 18:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:48.478 18:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:48.478 18:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1145584' 00:12:48.478 killing process with pid 1145584 00:12:48.478 18:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 1145584 00:12:48.478 18:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 1145584 00:12:49.140 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:49.140 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:49.140 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:49.140 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:12:49.140 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # iptables-save 00:12:49.140 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:49.140 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # iptables-restore 00:12:49.140 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:49.140 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:49.140 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:49.140 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:49.140 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:51.045 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:51.045 00:12:51.045 real 0m21.521s 00:12:51.045 user 1m1.507s 00:12:51.045 sys 0m4.575s 00:12:51.045 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:51.045 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:51.045 ************************************ 00:12:51.045 END TEST nvmf_connect_disconnect 00:12:51.045 ************************************ 00:12:51.045 18:23:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:51.045 18:23:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:51.045 18:23:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:51.045 18:23:19 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:51.045 ************************************ 00:12:51.045 START TEST nvmf_multitarget 00:12:51.045 ************************************ 00:12:51.045 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:51.304 * Looking for test storage... 00:12:51.304 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:51.304 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:51.304 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lcov --version 00:12:51.304 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:51.304 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:51.304 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:51.305 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:51.305 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:51.305 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:12:51.305 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:12:51.305 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:12:51.305 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:12:51.305 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:12:51.305 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:12:51.305 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:12:51.305 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:51.305 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:12:51.305 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:12:51.305 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:51.305 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:51.305 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:12:51.305 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:12:51.305 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:51.305 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:12:51.305 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:12:51.305 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:12:51.305 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:12:51.305 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:51.305 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:12:51.305 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:12:51.305 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:51.305 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:51.305 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:12:51.305 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:51.305 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:51.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:51.305 --rc genhtml_branch_coverage=1 00:12:51.305 --rc genhtml_function_coverage=1 00:12:51.305 --rc genhtml_legend=1 00:12:51.305 --rc geninfo_all_blocks=1 00:12:51.305 --rc geninfo_unexecuted_blocks=1 00:12:51.305 00:12:51.305 ' 00:12:51.305 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:51.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:51.305 --rc genhtml_branch_coverage=1 00:12:51.305 --rc genhtml_function_coverage=1 00:12:51.305 --rc genhtml_legend=1 00:12:51.305 --rc geninfo_all_blocks=1 00:12:51.305 --rc geninfo_unexecuted_blocks=1 00:12:51.305 00:12:51.305 ' 00:12:51.305 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:51.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:51.305 --rc genhtml_branch_coverage=1 00:12:51.305 --rc genhtml_function_coverage=1 00:12:51.305 --rc genhtml_legend=1 00:12:51.305 --rc geninfo_all_blocks=1 00:12:51.305 --rc geninfo_unexecuted_blocks=1 00:12:51.305 00:12:51.305 ' 00:12:51.305 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:51.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:51.305 --rc genhtml_branch_coverage=1 00:12:51.305 --rc genhtml_function_coverage=1 00:12:51.305 --rc genhtml_legend=1 00:12:51.305 --rc geninfo_all_blocks=1 00:12:51.305 --rc geninfo_unexecuted_blocks=1 00:12:51.305 00:12:51.305 ' 00:12:51.305 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:51.305 18:23:19 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:51.305 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:51.305 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:51.305 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:51.305 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:51.305 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:51.305 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:51.305 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:51.305 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:51.305 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:51.305 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:51.305 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:51.305 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:51.305 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:51.305 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:51.305 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:51.305 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:51.305 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:51.305 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:12:51.564 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:51.564 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:51.564 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:51.564 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.564 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.564 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.564 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:51.564 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.564 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:12:51.564 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:51.564 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:51.564 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:51.564 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:51.564 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:51.564 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:51.564 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:51.564 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:51.564 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:51.564 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:51.564 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:51.564 18:23:19 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:51.564 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:51.564 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:51.564 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:51.564 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:51.564 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:51.564 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:51.564 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:51.564 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:51.564 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:51.564 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:51.564 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:12:51.564 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:54.850 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:54.850 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:12:54.850 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:54.850 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:54.850 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:54.850 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:54.850 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:54.850 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:12:54.850 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:54.850 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:12:54.850 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:12:54.850 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:12:54.850 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:12:54.850 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:12:54.850 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:12:54.850 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:54.850 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:54.850 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
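[editorial note] The gather_supported_nvmf_pci_devs trace above and below is walking a whitelist of Intel E810/X722 and Mellanox device IDs before picking the test NICs. A rough hand-run equivalent, for reference only (assumes lspci with -nn output and the 0000:84:00.0 address this run reports below; these commands are not part of the test scripts themselves):

  # list NICs whose vendor:device IDs match the E810/X722 entries traced here
  lspci -nn | grep -E '8086:(1592|159b|37d2)'
  # map a matching PCI function to its kernel net device, as common.sh does via sysfs
  ls /sys/bus/pci/devices/0000:84:00.0/net/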
00:12:54.850 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:54.850 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:54.850 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:54.850 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:54.850 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:54.850 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:54.850 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:54.850 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:54.850 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:54.850 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:54.850 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:54.850 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:54.850 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:54.850 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:54.850 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:12:54.851 Found 0000:84:00.0 (0x8086 - 0x159b) 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:12:54.851 Found 0000:84:00.1 (0x8086 - 0x159b) 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:12:54.851 Found net devices under 0000:84:00.0: cvl_0_0 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:12:54.851 Found net devices under 0000:84:00.1: cvl_0_1 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # is_hw=yes 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:54.851 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:54.851 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.284 ms 00:12:54.851 00:12:54.851 --- 10.0.0.2 ping statistics --- 00:12:54.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:54.851 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:54.851 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:54.851 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:12:54.851 00:12:54.851 --- 10.0.0.1 ping statistics --- 00:12:54.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:54.851 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # return 0 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # nvmfpid=1149495 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # waitforlisten 1149495 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 1149495 ']' 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:54.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:54.851 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:54.851 [2024-10-08 18:23:22.905926] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
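[editorial note] The nvmf_tcp_init trace above moves one port of the E810 pair into a private network namespace and leaves the other in the root namespace, so initiator and target traffic crosses a real link. Condensed from the commands traced above (a sketch for reference only; the interface and namespace names are simply the ones this run picked):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in
  ping -c 1 10.0.0.2                                   # sanity-check both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1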
00:12:54.851 [2024-10-08 18:23:22.906039] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:54.851 [2024-10-08 18:23:23.012460] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:54.851 [2024-10-08 18:23:23.215851] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:54.851 [2024-10-08 18:23:23.215983] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:54.851 [2024-10-08 18:23:23.216022] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:54.851 [2024-10-08 18:23:23.216055] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:54.851 [2024-10-08 18:23:23.216084] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:54.851 [2024-10-08 18:23:23.219618] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:12:54.851 [2024-10-08 18:23:23.219683] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:12:54.851 [2024-10-08 18:23:23.219750] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:12:54.851 [2024-10-08 18:23:23.219753] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:55.785 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:55.785 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:12:55.785 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:55.785 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:55.785 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:56.043 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:56.043 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:56.043 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:56.043 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:56.043 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:56.043 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:56.333 "nvmf_tgt_1" 00:12:56.334 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:56.334 "nvmf_tgt_2" 00:12:56.334 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
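[editorial note] The multitarget test drives SPDK's multi-target RPCs through the helper script in the checked-out repo. Run by hand against the same RPC socket this nvmf_tgt listens on, the sequence traced here amounts to roughly the following (a sketch; the -n/-s values just mirror what this run used, and the expected counts match the checks in the trace):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
  $RPC nvmf_get_targets | jq length          # 1: only the default target exists
  $RPC nvmf_create_target -n nvmf_tgt_1 -s 32
  $RPC nvmf_create_target -n nvmf_tgt_2 -s 32
  $RPC nvmf_get_targets | jq length          # 3 targets now
  $RPC nvmf_delete_target -n nvmf_tgt_1
  $RPC nvmf_delete_target -n nvmf_tgt_2
  $RPC nvmf_get_targets | jq length          # back to 1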
00:12:56.334 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:56.617 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:56.617 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:56.875 true 00:12:56.875 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:57.134 true 00:12:57.134 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:57.134 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:57.392 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:57.392 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:57.392 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:57.392 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:57.392 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:12:57.392 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:57.392 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:12:57.392 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:57.392 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:57.392 rmmod nvme_tcp 00:12:57.392 rmmod nvme_fabrics 00:12:57.392 rmmod nvme_keyring 00:12:57.392 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:57.392 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:12:57.392 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:12:57.392 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@515 -- # '[' -n 1149495 ']' 00:12:57.392 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # killprocess 1149495 00:12:57.392 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 1149495 ']' 00:12:57.392 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 1149495 00:12:57.392 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:12:57.392 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:57.392 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1149495 00:12:57.392 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:57.392 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:57.392 18:23:25 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1149495' 00:12:57.393 killing process with pid 1149495 00:12:57.393 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 1149495 00:12:57.393 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 1149495 00:12:57.958 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:57.958 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:57.958 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:57.958 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:12:57.958 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # iptables-save 00:12:57.958 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:57.958 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # iptables-restore 00:12:57.958 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:57.958 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:57.958 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:57.958 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:57.958 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:59.858 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:59.858 00:12:59.858 real 0m8.720s 00:12:59.858 user 0m14.435s 00:12:59.858 sys 0m2.914s 00:12:59.858 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:59.858 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:59.858 ************************************ 00:12:59.858 END TEST nvmf_multitarget 00:12:59.858 ************************************ 00:12:59.858 18:23:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:59.858 18:23:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:59.859 18:23:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:59.859 18:23:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:59.859 ************************************ 00:12:59.859 START TEST nvmf_rpc 00:12:59.859 ************************************ 00:12:59.859 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:59.859 * Looking for test storage... 
00:12:59.859 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:59.859 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:59.859 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:12:59.859 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:00.117 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:00.117 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:00.117 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:00.117 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:00.117 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:13:00.117 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:13:00.117 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:13:00.117 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:13:00.117 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:13:00.117 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:13:00.117 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:13:00.117 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:00.117 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:13:00.117 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:13:00.117 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:00.117 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:00.117 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:13:00.117 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:13:00.117 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:00.117 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:13:00.117 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:13:00.117 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:13:00.117 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:13:00.117 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:00.117 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:13:00.117 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:13:00.117 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:00.117 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:00.117 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:13:00.117 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:00.117 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:00.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.117 --rc genhtml_branch_coverage=1 00:13:00.117 --rc genhtml_function_coverage=1 00:13:00.117 --rc genhtml_legend=1 00:13:00.117 --rc geninfo_all_blocks=1 00:13:00.117 --rc geninfo_unexecuted_blocks=1 00:13:00.117 00:13:00.117 ' 00:13:00.117 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:00.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.117 --rc genhtml_branch_coverage=1 00:13:00.117 --rc genhtml_function_coverage=1 00:13:00.117 --rc genhtml_legend=1 00:13:00.117 --rc geninfo_all_blocks=1 00:13:00.117 --rc geninfo_unexecuted_blocks=1 00:13:00.117 00:13:00.117 ' 00:13:00.117 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:00.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.117 --rc genhtml_branch_coverage=1 00:13:00.117 --rc genhtml_function_coverage=1 00:13:00.117 --rc genhtml_legend=1 00:13:00.117 --rc geninfo_all_blocks=1 00:13:00.117 --rc geninfo_unexecuted_blocks=1 00:13:00.117 00:13:00.117 ' 00:13:00.117 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:00.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.117 --rc genhtml_branch_coverage=1 00:13:00.117 --rc genhtml_function_coverage=1 00:13:00.117 --rc genhtml_legend=1 00:13:00.117 --rc geninfo_all_blocks=1 00:13:00.117 --rc geninfo_unexecuted_blocks=1 00:13:00.117 00:13:00.117 ' 00:13:00.117 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:00.117 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:13:00.117 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
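[editorial note] common.sh, sourced here for the rpc test just as for the earlier ones, pins the NVMe/TCP defaults seen in the trace: port 4420, a host NQN freshly generated with nvme gen-hostnqn, and the nqn.2016-06.io.spdk:testnqn subsystem NQN. When a test later attaches an initiator, the equivalent hand-typed nvme-cli calls would look roughly like this (a sketch, not lifted from the scripts; 10.0.0.2 is the in-namespace target address configured above):

  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:testnqn \
       --hostnqn=$(nvme gen-hostnqn)
  nvme list-subsys                             # confirm the new controller showed up
  nvme disconnect -n nqn.2016-06.io.spdk:testnqn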
00:13:00.117 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:00.117 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:00.117 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:00.117 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:00.117 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:00.117 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:00.117 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:00.117 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:00.118 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:00.118 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:00.118 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:00.118 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:00.118 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:00.118 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:00.118 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:00.118 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:00.118 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:13:00.118 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:00.118 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:00.118 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:00.118 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.118 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.118 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.118 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:13:00.118 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.118 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:13:00.118 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:00.118 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:00.118 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:00.118 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:00.118 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:00.118 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:00.118 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:00.118 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:00.118 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:00.118 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:00.118 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:13:00.118 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:13:00.118 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:00.118 18:23:28 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:00.118 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:00.118 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:00.118 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:00.118 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:00.118 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:00.118 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:00.118 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:00.118 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:00.118 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:13:00.118 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.402 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:03.402 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:13:03.402 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:03.402 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:03.402 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:03.402 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:03.402 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:03.402 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:13:03.402 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:03.402 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:13:03.402 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:13:03.402 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:13:03.402 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:13:03.402 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:13:03.402 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:13:03.402 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:03.402 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:03.402 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:03.402 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:03.402 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:03.402 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:03.402 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:03.402 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:03.402 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:03.402 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:03.402 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:03.402 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:03.403 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:03.403 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:03.403 Found net devices under 0000:84:00.0: cvl_0_0 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:03.403 Found net devices under 0000:84:00.1: cvl_0_1 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # is_hw=yes 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:03.403 18:23:31 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:03.403 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:03.403 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:13:03.403 00:13:03.403 --- 10.0.0.2 ping statistics --- 00:13:03.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:03.403 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:03.403 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:03.403 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:13:03.403 00:13:03.403 --- 10.0.0.1 ping statistics --- 00:13:03.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:03.403 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # return 0 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # nvmfpid=1151884 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # waitforlisten 1151884 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 1151884 ']' 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:03.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:03.403 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.403 [2024-10-08 18:23:31.656598] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
00:13:03.403 [2024-10-08 18:23:31.656798] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:03.403 [2024-10-08 18:23:31.805782] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:03.663 [2024-10-08 18:23:32.024781] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:03.663 [2024-10-08 18:23:32.024903] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:03.663 [2024-10-08 18:23:32.024939] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:03.663 [2024-10-08 18:23:32.024971] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:03.663 [2024-10-08 18:23:32.024998] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:03.663 [2024-10-08 18:23:32.028711] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:13:03.663 [2024-10-08 18:23:32.028778] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:13:03.663 [2024-10-08 18:23:32.028878] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:13:03.663 [2024-10-08 18:23:32.028882] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.663 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:03.663 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:13:03.663 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:03.663 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:03.663 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.663 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:03.921 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:13:03.921 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.921 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.921 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.921 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:13:03.921 "tick_rate": 2700000000, 00:13:03.921 "poll_groups": [ 00:13:03.921 { 00:13:03.921 "name": "nvmf_tgt_poll_group_000", 00:13:03.921 "admin_qpairs": 0, 00:13:03.921 "io_qpairs": 0, 00:13:03.921 "current_admin_qpairs": 0, 00:13:03.921 "current_io_qpairs": 0, 00:13:03.921 "pending_bdev_io": 0, 00:13:03.921 "completed_nvme_io": 0, 00:13:03.921 "transports": [] 00:13:03.921 }, 00:13:03.921 { 00:13:03.921 "name": "nvmf_tgt_poll_group_001", 00:13:03.921 "admin_qpairs": 0, 00:13:03.921 "io_qpairs": 0, 00:13:03.921 "current_admin_qpairs": 0, 00:13:03.921 "current_io_qpairs": 0, 00:13:03.921 "pending_bdev_io": 0, 00:13:03.921 "completed_nvme_io": 0, 00:13:03.921 "transports": [] 00:13:03.921 }, 00:13:03.921 { 00:13:03.921 "name": "nvmf_tgt_poll_group_002", 00:13:03.921 "admin_qpairs": 0, 00:13:03.921 "io_qpairs": 0, 00:13:03.921 
"current_admin_qpairs": 0, 00:13:03.921 "current_io_qpairs": 0, 00:13:03.921 "pending_bdev_io": 0, 00:13:03.921 "completed_nvme_io": 0, 00:13:03.921 "transports": [] 00:13:03.921 }, 00:13:03.921 { 00:13:03.921 "name": "nvmf_tgt_poll_group_003", 00:13:03.921 "admin_qpairs": 0, 00:13:03.921 "io_qpairs": 0, 00:13:03.921 "current_admin_qpairs": 0, 00:13:03.921 "current_io_qpairs": 0, 00:13:03.921 "pending_bdev_io": 0, 00:13:03.921 "completed_nvme_io": 0, 00:13:03.921 "transports": [] 00:13:03.921 } 00:13:03.921 ] 00:13:03.921 }' 00:13:03.921 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:13:03.921 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:13:03.921 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:13:03.921 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:13:03.921 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:13:03.921 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:13:03.921 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:13:03.921 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:03.921 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.921 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.921 [2024-10-08 18:23:32.302844] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:03.921 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.921 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:13:03.921 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.921 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.921 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.921 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:13:03.921 "tick_rate": 2700000000, 00:13:03.921 "poll_groups": [ 00:13:03.921 { 00:13:03.921 "name": "nvmf_tgt_poll_group_000", 00:13:03.921 "admin_qpairs": 0, 00:13:03.921 "io_qpairs": 0, 00:13:03.921 "current_admin_qpairs": 0, 00:13:03.921 "current_io_qpairs": 0, 00:13:03.921 "pending_bdev_io": 0, 00:13:03.921 "completed_nvme_io": 0, 00:13:03.921 "transports": [ 00:13:03.921 { 00:13:03.921 "trtype": "TCP" 00:13:03.921 } 00:13:03.921 ] 00:13:03.921 }, 00:13:03.921 { 00:13:03.921 "name": "nvmf_tgt_poll_group_001", 00:13:03.921 "admin_qpairs": 0, 00:13:03.921 "io_qpairs": 0, 00:13:03.921 "current_admin_qpairs": 0, 00:13:03.921 "current_io_qpairs": 0, 00:13:03.922 "pending_bdev_io": 0, 00:13:03.922 "completed_nvme_io": 0, 00:13:03.922 "transports": [ 00:13:03.922 { 00:13:03.922 "trtype": "TCP" 00:13:03.922 } 00:13:03.922 ] 00:13:03.922 }, 00:13:03.922 { 00:13:03.922 "name": "nvmf_tgt_poll_group_002", 00:13:03.922 "admin_qpairs": 0, 00:13:03.922 "io_qpairs": 0, 00:13:03.922 "current_admin_qpairs": 0, 00:13:03.922 "current_io_qpairs": 0, 00:13:03.922 "pending_bdev_io": 0, 00:13:03.922 "completed_nvme_io": 0, 00:13:03.922 "transports": [ 00:13:03.922 { 00:13:03.922 "trtype": "TCP" 
00:13:03.922 } 00:13:03.922 ] 00:13:03.922 }, 00:13:03.922 { 00:13:03.922 "name": "nvmf_tgt_poll_group_003", 00:13:03.922 "admin_qpairs": 0, 00:13:03.922 "io_qpairs": 0, 00:13:03.922 "current_admin_qpairs": 0, 00:13:03.922 "current_io_qpairs": 0, 00:13:03.922 "pending_bdev_io": 0, 00:13:03.922 "completed_nvme_io": 0, 00:13:03.922 "transports": [ 00:13:03.922 { 00:13:03.922 "trtype": "TCP" 00:13:03.922 } 00:13:03.922 ] 00:13:03.922 } 00:13:03.922 ] 00:13:03.922 }' 00:13:03.922 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:13:03.922 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:03.922 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:03.922 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:03.922 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:13:03.922 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:13:03.922 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:03.922 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:03.922 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:04.181 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:13:04.181 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:13:04.181 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:13:04.181 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:13:04.181 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:04.181 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.181 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.181 Malloc1 00:13:04.181 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.181 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:04.181 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.181 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.181 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.181 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:04.181 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.181 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.181 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.181 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:13:04.181 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.181 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.181 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.181 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:04.181 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.181 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.181 [2024-10-08 18:23:32.519473] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:04.181 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.181 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:13:04.181 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:13:04.181 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:13:04.181 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:13:04.181 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:04.181 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:13:04.181 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:04.181 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:13:04.181 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:04.181 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:13:04.181 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:13:04.181 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:13:04.181 [2024-10-08 18:23:32.542033] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02' 00:13:04.181 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:04.181 could not add new controller: failed to write to nvme-fabrics device 00:13:04.181 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:13:04.181 18:23:32 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:04.181 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:04.181 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:04.181 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:04.181 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.181 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.181 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.181 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:04.748 18:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:13:04.748 18:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:04.748 18:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:04.748 18:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:04.748 18:23:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:07.282 18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:07.282 18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:07.282 18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:07.282 18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:07.282 18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:07.282 18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:07.282 18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:07.282 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.282 18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:07.282 18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:07.282 18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:07.282 18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:07.282 18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:07.282 18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:07.282 18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:07.282 18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:07.282 18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.282 18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.282 18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.282 18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:07.282 18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:13:07.282 18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:07.282 18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:13:07.282 18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:07.282 18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:13:07.282 18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:07.283 18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:13:07.283 18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:07.283 18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:13:07.283 18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:13:07.283 18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:07.283 [2024-10-08 18:23:35.333855] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02' 00:13:07.283 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:07.283 could not add new controller: failed to write to nvme-fabrics device 00:13:07.283 18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:13:07.283 18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:07.283 18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:07.283 18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:07.283 18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:13:07.283 18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.283 18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.283 
18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.283 18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:07.540 18:23:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:13:07.540 18:23:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:07.540 18:23:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:07.540 18:23:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:07.540 18:23:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:10.075 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:10.075 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:10.075 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:10.075 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:10.075 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:10.075 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:10.075 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:10.075 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:10.075 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:10.075 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:10.075 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:10.075 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:10.075 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:10.075 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:10.075 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:10.075 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:10.075 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.075 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.075 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.075 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:13:10.075 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:10.075 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:10.076 
18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.076 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.076 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.076 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:10.076 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.076 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.076 [2024-10-08 18:23:38.204764] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:10.076 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.076 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:10.076 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.076 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.076 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.076 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:10.076 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.076 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.076 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.076 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:10.643 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:10.643 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:10.643 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:10.643 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:10.643 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:12.549 18:23:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:12.549 18:23:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:12.549 18:23:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:12.549 18:23:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:12.549 18:23:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:12.549 18:23:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:12.549 18:23:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:12.549 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.549 18:23:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:12.549 18:23:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:12.549 18:23:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:12.549 18:23:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:12.549 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:12.549 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:12.549 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:12.549 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:12.549 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.549 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.549 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.549 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:12.549 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.549 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.549 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.549 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:12.549 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:12.549 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.549 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.549 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.549 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:12.550 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.550 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.550 [2024-10-08 18:23:41.047440] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:12.550 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.550 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:12.550 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.550 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.550 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.550 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:12.550 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.550 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.550 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.550 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:13.488 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:13.488 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:13.488 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:13.488 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:13.488 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:15.390 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:15.390 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:15.390 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:15.390 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:15.390 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:15.390 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:15.390 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:15.390 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:15.390 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:15.390 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:15.390 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:15.390 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:15.390 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:15.390 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:15.390 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:15.390 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:15.390 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.390 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.390 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.390 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:15.390 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.390 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.390 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.390 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:15.390 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:15.390 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.390 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.391 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.391 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:15.391 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.391 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.649 [2024-10-08 18:23:43.928328] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:15.649 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.649 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:15.649 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.649 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.649 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.649 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:15.649 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.649 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.649 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.649 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:16.215 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:16.215 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:16.215 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:16.215 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:16.215 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:18.117 
18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:18.117 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:18.117 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:18.376 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:18.376 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:18.376 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:18.376 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:18.376 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:18.376 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:18.376 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:18.376 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:18.376 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:18.376 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:18.376 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:18.376 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:18.376 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:18.376 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.376 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.376 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.376 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:18.376 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.376 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.376 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.376 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:18.376 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:18.376 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.376 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.376 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.376 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:18.376 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 
00:13:18.376 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.376 [2024-10-08 18:23:46.812868] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:18.376 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.376 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:18.376 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.376 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.376 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.376 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:18.376 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.376 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.376 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.376 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:18.943 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:18.943 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:18.943 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:18.943 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:18.943 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:21.477 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:21.477 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:21.477 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:21.477 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:21.477 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:21.477 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:21.477 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:21.477 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:21.477 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:21.477 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:21.477 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:21.477 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 
00:13:21.477 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:21.477 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:21.477 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:21.477 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:21.477 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.477 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.477 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.477 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:21.477 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.477 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.477 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.477 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:21.477 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:21.477 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.477 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.477 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.477 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:21.477 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.477 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.477 [2024-10-08 18:23:49.562950] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:21.477 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.477 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:21.477 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.477 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.477 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.477 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:21.477 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.477 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.477 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.477 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:22.043 18:23:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:22.043 18:23:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:22.043 18:23:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:22.043 18:23:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:22.043 18:23:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:23.948 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:23.948 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:23.948 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:23.948 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:23.948 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:23.948 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:23.948 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:23.948 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:23.948 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:23.948 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:23.948 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:23.948 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:23.948 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:23.948 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:23.948 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:23.948 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:23.948 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.948 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.948 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.948 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:23.949 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.949 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.949 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.949 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:23.949 
18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:23.949 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:23.949 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.949 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.949 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.949 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:23.949 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.949 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.949 [2024-10-08 18:23:52.429741] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:23.949 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.949 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:23.949 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.949 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.949 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.949 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:23.949 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.949 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.949 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.949 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.949 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.949 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.949 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.949 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:23.949 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.949 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.949 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.949 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:23.949 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:23.949 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.949 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:23.949 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.949 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:23.949 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.949 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.949 [2024-10-08 18:23:52.477760] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:23.949 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.949 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:23.949 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.949 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.208 
18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.208 [2024-10-08 18:23:52.525921] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.208 [2024-10-08 18:23:52.574113] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.208 [2024-10-08 18:23:52.622251] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.208 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.209 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:24.209 "tick_rate": 2700000000, 00:13:24.209 "poll_groups": [ 00:13:24.209 { 00:13:24.209 "name": "nvmf_tgt_poll_group_000", 00:13:24.209 "admin_qpairs": 2, 00:13:24.209 "io_qpairs": 84, 00:13:24.209 "current_admin_qpairs": 0, 00:13:24.209 "current_io_qpairs": 0, 00:13:24.209 "pending_bdev_io": 0, 00:13:24.209 "completed_nvme_io": 237, 00:13:24.209 "transports": [ 00:13:24.209 { 00:13:24.209 "trtype": "TCP" 00:13:24.209 } 00:13:24.209 ] 00:13:24.209 }, 00:13:24.209 { 00:13:24.209 "name": "nvmf_tgt_poll_group_001", 00:13:24.209 "admin_qpairs": 2, 00:13:24.209 "io_qpairs": 84, 00:13:24.209 "current_admin_qpairs": 0, 00:13:24.209 "current_io_qpairs": 0, 00:13:24.209 "pending_bdev_io": 0, 00:13:24.209 "completed_nvme_io": 122, 00:13:24.209 "transports": [ 00:13:24.209 { 00:13:24.209 "trtype": "TCP" 00:13:24.209 } 00:13:24.209 ] 00:13:24.209 }, 00:13:24.209 { 00:13:24.209 "name": "nvmf_tgt_poll_group_002", 00:13:24.209 "admin_qpairs": 1, 00:13:24.209 "io_qpairs": 84, 00:13:24.209 "current_admin_qpairs": 0, 00:13:24.209 "current_io_qpairs": 0, 00:13:24.209 "pending_bdev_io": 0, 00:13:24.209 "completed_nvme_io": 155, 00:13:24.209 "transports": [ 00:13:24.209 { 00:13:24.209 "trtype": "TCP" 00:13:24.209 } 00:13:24.209 ] 00:13:24.209 }, 00:13:24.209 { 00:13:24.209 "name": "nvmf_tgt_poll_group_003", 00:13:24.209 "admin_qpairs": 2, 00:13:24.209 "io_qpairs": 84, 00:13:24.209 "current_admin_qpairs": 0, 00:13:24.209 "current_io_qpairs": 0, 00:13:24.209 "pending_bdev_io": 0, 00:13:24.209 "completed_nvme_io": 172, 00:13:24.209 "transports": [ 00:13:24.209 { 00:13:24.209 "trtype": "TCP" 00:13:24.209 } 00:13:24.209 ] 00:13:24.209 } 00:13:24.209 ] 00:13:24.209 }' 00:13:24.209 18:23:52 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:24.209 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:24.209 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:24.209 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:24.209 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:24.209 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:24.467 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:24.468 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:24.468 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:24.468 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:13:24.468 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:24.468 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:24.468 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:24.468 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:24.468 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:13:24.468 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:24.468 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:13:24.468 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:24.468 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:24.468 rmmod nvme_tcp 00:13:24.468 rmmod nvme_fabrics 00:13:24.468 rmmod nvme_keyring 00:13:24.468 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:24.468 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:13:24.468 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:13:24.468 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@515 -- # '[' -n 1151884 ']' 00:13:24.468 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # killprocess 1151884 00:13:24.468 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 1151884 ']' 00:13:24.468 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 1151884 00:13:24.468 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:13:24.468 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:24.468 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1151884 00:13:24.468 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:24.468 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:24.468 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
1151884' 00:13:24.468 killing process with pid 1151884 00:13:24.468 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 1151884 00:13:24.468 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 1151884 00:13:25.036 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:25.036 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:25.036 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:25.036 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:13:25.036 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # iptables-save 00:13:25.036 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:25.036 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # iptables-restore 00:13:25.036 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:25.036 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:25.036 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:25.036 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:25.036 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:26.940 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:26.940 00:13:26.940 real 0m27.086s 00:13:26.940 user 1m25.009s 00:13:26.940 sys 0m5.135s 00:13:26.940 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:26.940 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.940 ************************************ 00:13:26.940 END TEST nvmf_rpc 00:13:26.940 ************************************ 00:13:26.940 18:23:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:26.940 18:23:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:26.941 18:23:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:26.941 18:23:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:26.941 ************************************ 00:13:26.941 START TEST nvmf_invalid 00:13:26.941 ************************************ 00:13:26.941 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:27.199 * Looking for test storage... 
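The nvmf_rpc test that just finished above closes by pulling nvmf_get_stats and aggregating the per-poll-group counters with its jsum helper, which is simply jq piped into awk; this run recorded 7 admin and 336 I/O qpairs across the four poll groups. A minimal sketch of that aggregation, again assuming the stock scripts/rpc.py client against the running target, is:

  # Sum admin and I/O qpair counts over all poll groups, as rpc.sh@112/@113 do.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  stats=$($rpc nvmf_get_stats)
  admin=$(jq '.poll_groups[].admin_qpairs' <<<"$stats" | awk '{s+=$1} END {print s}')
  io=$(jq '.poll_groups[].io_qpairs' <<<"$stats" | awk '{s+=$1} END {print s}')
  (( admin > 0 && io > 0 ))   # this run: admin=7, io=336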
00:13:27.199 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:27.199 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:27.199 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lcov --version 00:13:27.199 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:27.199 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:27.199 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:27.199 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:27.199 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:27.199 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:13:27.199 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:13:27.199 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:13:27.199 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:13:27.199 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:13:27.199 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:13:27.199 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:13:27.199 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:27.199 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:13:27.199 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:13:27.199 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:27.199 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:27.199 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:13:27.199 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:13:27.199 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:27.199 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:13:27.199 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:13:27.199 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:13:27.199 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:13:27.199 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:27.199 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:13:27.199 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:13:27.199 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:27.199 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:27.199 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:13:27.199 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:27.199 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:27.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.199 --rc genhtml_branch_coverage=1 00:13:27.199 --rc genhtml_function_coverage=1 00:13:27.199 --rc genhtml_legend=1 00:13:27.199 --rc geninfo_all_blocks=1 00:13:27.199 --rc geninfo_unexecuted_blocks=1 00:13:27.199 00:13:27.199 ' 00:13:27.199 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:27.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.199 --rc genhtml_branch_coverage=1 00:13:27.199 --rc genhtml_function_coverage=1 00:13:27.199 --rc genhtml_legend=1 00:13:27.199 --rc geninfo_all_blocks=1 00:13:27.199 --rc geninfo_unexecuted_blocks=1 00:13:27.199 00:13:27.199 ' 00:13:27.199 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:27.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.199 --rc genhtml_branch_coverage=1 00:13:27.199 --rc genhtml_function_coverage=1 00:13:27.199 --rc genhtml_legend=1 00:13:27.199 --rc geninfo_all_blocks=1 00:13:27.199 --rc geninfo_unexecuted_blocks=1 00:13:27.199 00:13:27.199 ' 00:13:27.199 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:27.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.199 --rc genhtml_branch_coverage=1 00:13:27.199 --rc genhtml_function_coverage=1 00:13:27.199 --rc genhtml_legend=1 00:13:27.199 --rc geninfo_all_blocks=1 00:13:27.199 --rc geninfo_unexecuted_blocks=1 00:13:27.199 00:13:27.199 ' 00:13:27.199 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:27.199 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:27.199 18:23:55 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:27.199 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:27.199 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:27.199 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:27.199 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:27.199 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:27.199 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:27.199 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:27.199 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:27.199 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:27.199 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:27.199 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:27.199 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:27.200 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:27.200 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:27.200 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:27.200 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:27.200 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:13:27.200 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:27.200 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:27.200 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:27.200 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.200 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.200 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.200 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:27.200 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.200 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:13:27.200 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:27.200 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:27.200 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:27.200 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:27.200 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:27.200 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:27.200 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:27.200 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:27.200 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:27.200 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:27.200 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:27.200 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:27.200 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:27.200 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:27.200 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:27.200 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:27.200 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:27.200 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:27.200 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:27.200 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:27.200 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:27.200 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:27.200 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:27.200 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:27.200 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:27.200 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:27.200 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:13:27.200 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:30.486 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:30.486 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:13:30.486 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:30.486 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:30.486 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:30.486 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:30.486 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:30.486 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:13:30.486 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:30.486 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:13:30.486 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:13:30.486 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:13:30.486 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:30.487 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:30.487 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:30.487 Found net devices under 0000:84:00.0: cvl_0_0 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:30.487 Found net devices under 0000:84:00.1: cvl_0_1 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # is_hw=yes 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:30.487 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:30.487 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:13:30.487 00:13:30.487 --- 10.0.0.2 ping statistics --- 00:13:30.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:30.487 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:30.487 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:30.487 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:13:30.487 00:13:30.487 --- 10.0.0.1 ping statistics --- 00:13:30.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:30.487 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # return 0 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # nvmfpid=1156518 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # waitforlisten 1156518 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 1156518 ']' 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:30.487 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:30.488 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:30.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:30.488 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:30.488 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:30.488 [2024-10-08 18:23:58.858137] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
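Before the invalid-input cases run, nvmftestinit builds the test network that the ping checks above verify: of the two ports discovered under 0000:84:00.0/.1 (cvl_0_0 and cvl_0_1), the first is moved into a private namespace to act as the target at 10.0.0.2 while the second stays in the root namespace as the initiator at 10.0.0.1. A condensed sketch of that bring-up, using the interface and namespace names this run chose (the real helper also flushes stale addresses first and tags its iptables rule with an SPDK_NVMF comment), is:

  # nvmf/common.sh TCP bring-up, condensed: the target side lives in its own netns.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator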
00:13:30.488 [2024-10-08 18:23:58.858235] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:30.488 [2024-10-08 18:23:58.933660] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:30.745 [2024-10-08 18:23:59.056019] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:30.745 [2024-10-08 18:23:59.056081] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:30.745 [2024-10-08 18:23:59.056098] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:30.745 [2024-10-08 18:23:59.056112] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:30.745 [2024-10-08 18:23:59.056125] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:30.745 [2024-10-08 18:23:59.058025] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:13:30.745 [2024-10-08 18:23:59.058083] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:13:30.745 [2024-10-08 18:23:59.058137] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:13:30.745 [2024-10-08 18:23:59.058140] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.746 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:30.746 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:13:30.746 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:30.746 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:30.746 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:30.746 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:30.746 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:30.746 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode28347 00:13:31.004 [2024-10-08 18:23:59.518918] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:31.004 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:31.004 { 00:13:31.004 "nqn": "nqn.2016-06.io.spdk:cnode28347", 00:13:31.004 "tgt_name": "foobar", 00:13:31.004 "method": "nvmf_create_subsystem", 00:13:31.004 "req_id": 1 00:13:31.004 } 00:13:31.004 Got JSON-RPC error response 00:13:31.004 response: 00:13:31.004 { 00:13:31.004 "code": -32603, 00:13:31.004 "message": "Unable to find target foobar" 00:13:31.004 }' 00:13:31.004 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:31.004 { 00:13:31.004 "nqn": "nqn.2016-06.io.spdk:cnode28347", 00:13:31.004 "tgt_name": "foobar", 00:13:31.004 "method": "nvmf_create_subsystem", 00:13:31.004 "req_id": 1 00:13:31.004 } 00:13:31.004 Got JSON-RPC error response 00:13:31.004 
response: 00:13:31.004 { 00:13:31.004 "code": -32603, 00:13:31.004 "message": "Unable to find target foobar" 00:13:31.004 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:31.263 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:31.263 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode18934 00:13:31.860 [2024-10-08 18:24:00.064847] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18934: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:31.860 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:31.860 { 00:13:31.860 "nqn": "nqn.2016-06.io.spdk:cnode18934", 00:13:31.860 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:31.860 "method": "nvmf_create_subsystem", 00:13:31.860 "req_id": 1 00:13:31.860 } 00:13:31.860 Got JSON-RPC error response 00:13:31.860 response: 00:13:31.860 { 00:13:31.860 "code": -32602, 00:13:31.860 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:31.860 }' 00:13:31.860 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:31.860 { 00:13:31.860 "nqn": "nqn.2016-06.io.spdk:cnode18934", 00:13:31.860 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:31.860 "method": "nvmf_create_subsystem", 00:13:31.860 "req_id": 1 00:13:31.860 } 00:13:31.860 Got JSON-RPC error response 00:13:31.860 response: 00:13:31.860 { 00:13:31.860 "code": -32602, 00:13:31.860 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:31.860 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:31.860 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:31.860 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode16161 00:13:32.120 [2024-10-08 18:24:00.454129] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16161: invalid model number 'SPDK_Controller' 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:32.120 { 00:13:32.120 "nqn": "nqn.2016-06.io.spdk:cnode16161", 00:13:32.120 "model_number": "SPDK_Controller\u001f", 00:13:32.120 "method": "nvmf_create_subsystem", 00:13:32.120 "req_id": 1 00:13:32.120 } 00:13:32.120 Got JSON-RPC error response 00:13:32.120 response: 00:13:32.120 { 00:13:32.120 "code": -32602, 00:13:32.120 "message": "Invalid MN SPDK_Controller\u001f" 00:13:32.120 }' 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:32.120 { 00:13:32.120 "nqn": "nqn.2016-06.io.spdk:cnode16161", 00:13:32.120 "model_number": "SPDK_Controller\u001f", 00:13:32.120 "method": "nvmf_create_subsystem", 00:13:32.120 "req_id": 1 00:13:32.120 } 00:13:32.120 Got JSON-RPC error response 00:13:32.120 response: 00:13:32.120 { 00:13:32.120 "code": -32602, 00:13:32.120 "message": "Invalid MN SPDK_Controller\u001f" 00:13:32.120 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:13:32.120 18:24:00 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.120 18:24:00 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:13:32.120 
18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.120 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.121 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:13:32.121 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:32.121 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:13:32.121 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.121 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.121 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:13:32.121 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:32.121 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:13:32.121 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.121 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.121 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:13:32.121 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:13:32.121 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:13:32.121 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.121 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.121 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:13:32.121 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:13:32.121 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:13:32.121 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.121 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.121 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:13:32.121 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:32.121 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:13:32.121 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.121 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.121 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 2 == \- ]] 00:13:32.121 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '27[XTm\xLCr>C5H!$=UV' 00:13:32.121 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '27[XTm\xLCr>C5H!$=UV' nqn.2016-06.io.spdk:cnode31795 00:13:32.689 [2024-10-08 18:24:01.092248] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31795: invalid serial number '27[XTm\xLCr>C5H!$=UV' 00:13:32.689 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:32.689 { 00:13:32.689 "nqn": "nqn.2016-06.io.spdk:cnode31795", 00:13:32.689 "serial_number": "27[XTm\\xLCr>C5\u007fH!$=UV", 00:13:32.689 "method": "nvmf_create_subsystem", 00:13:32.689 "req_id": 1 00:13:32.689 } 00:13:32.689 Got JSON-RPC error response 00:13:32.689 response: 00:13:32.689 { 00:13:32.689 "code": -32602, 00:13:32.689 "message": "Invalid SN 27[XTm\\xLCr>C5\u007fH!$=UV" 00:13:32.689 }' 00:13:32.689 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ 
request: 00:13:32.689 { 00:13:32.689 "nqn": "nqn.2016-06.io.spdk:cnode31795", 00:13:32.689 "serial_number": "27[XTm\\xLCr>C5\u007fH!$=UV", 00:13:32.689 "method": "nvmf_create_subsystem", 00:13:32.689 "req_id": 1 00:13:32.689 } 00:13:32.689 Got JSON-RPC error response 00:13:32.689 response: 00:13:32.689 { 00:13:32.689 "code": -32602, 00:13:32.689 "message": "Invalid SN 27[XTm\\xLCr>C5\u007fH!$=UV" 00:13:32.689 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:32.689 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:32.689 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:32.689 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:32.689 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:32.689 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:32.689 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:32.689 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.689 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:13:32.689 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:13:32.689 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:13:32.689 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.689 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.689 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:13:32.689 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:32.689 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:13:32.689 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.689 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.689 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:13:32.689 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:13:32.689 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:13:32.689 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.689 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.689 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:13:32.689 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:32.689 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 
-- # string+=Z 00:13:32.689 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.689 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.689 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:13:32.689 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:13:32.689 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:13:32.689 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.689 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.689 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:13:32.689 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:13:32.689 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:13:32.689 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.689 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.689 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:13:32.689 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:13:32.689 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:13:32.689 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.689 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.689 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:13:32.689 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:32.689 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:13:32.689 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.689 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.689 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:13:32.689 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:13:32.689 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:13:32.689 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.689 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.689 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:13:32.689 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:13:32.689 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:13:32.689 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x55' 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 100 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:13:32.690 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:32.949 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:13:32.949 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.949 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.949 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:13:32.949 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:13:32.949 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:13:32.949 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.949 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.949 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:13:32.949 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:32.949 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:13:32.949 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.949 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.949 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:13:32.949 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:13:32.949 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:13:32.949 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.949 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.949 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:13:32.949 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:32.949 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:13:32.949 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.949 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.949 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:13:32.949 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:13:32.949 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:13:32.949 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:13:32.949 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.949 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:13:32.949 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:32.949 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:13:32.949 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.949 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.949 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:13:32.949 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:13:32.949 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:13:32.949 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.949 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.949 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:13:32.949 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:13:32.949 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:13:32.949 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.949 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.949 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:13:32.949 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:13:32.949 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:13:32.949 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.949 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.949 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:13:32.949 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:13:32.950 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:13:32.950 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.950 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.950 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:13:32.950 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:13:32.950 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:13:32.950 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.950 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.950 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:13:32.950 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:13:32.950 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # string+=6 00:13:32.950 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.950 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.950 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:13:32.950 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:13:32.950 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:13:32.950 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.950 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.950 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:13:32.950 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:13:32.950 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:13:32.950 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.950 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.950 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:13:32.950 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:32.950 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:13:32.950 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.950 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.950 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ == \- ]] 00:13:32.950 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo ' $GZ}b)\lPUIjq=Z>dxM2u ZlFSNo5@q,KaI76APv' 00:13:32.950 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d ' $GZ}b)\lPUIjq=Z>dxM2u ZlFSNo5@q,KaI76APv' nqn.2016-06.io.spdk:cnode28783 00:13:33.517 [2024-10-08 18:24:01.802601] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28783: invalid model number ' $GZ}b)\lPUIjq=Z>dxM2u ZlFSNo5@q,KaI76APv' 00:13:33.517 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:13:33.517 { 00:13:33.517 "nqn": "nqn.2016-06.io.spdk:cnode28783", 00:13:33.517 "model_number": " $GZ}b)\\lPUIjq=Z>dxM2u ZlFSNo5@q,KaI76APv", 00:13:33.517 "method": "nvmf_create_subsystem", 00:13:33.517 "req_id": 1 00:13:33.517 } 00:13:33.517 Got JSON-RPC error response 00:13:33.517 response: 00:13:33.517 { 00:13:33.517 "code": -32602, 00:13:33.517 "message": "Invalid MN $GZ}b)\\lPUIjq=Z>dxM2u ZlFSNo5@q,KaI76APv" 00:13:33.517 }' 00:13:33.517 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:13:33.517 { 00:13:33.517 "nqn": "nqn.2016-06.io.spdk:cnode28783", 00:13:33.517 "model_number": " $GZ}b)\\lPUIjq=Z>dxM2u ZlFSNo5@q,KaI76APv", 00:13:33.517 "method": "nvmf_create_subsystem", 00:13:33.517 "req_id": 1 00:13:33.517 } 00:13:33.517 Got JSON-RPC error response 00:13:33.517 response: 00:13:33.517 { 00:13:33.517 "code": -32602, 00:13:33.517 "message": "Invalid MN 
$GZ}b)\\lPUIjq=Z>dxM2u ZlFSNo5@q,KaI76APv" 00:13:33.517 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:33.517 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:33.792 [2024-10-08 18:24:02.316410] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:34.088 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:34.350 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:34.350 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:13:34.350 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:13:34.350 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:13:34.350 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:34.916 [2024-10-08 18:24:03.179121] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:34.916 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:13:34.916 { 00:13:34.916 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:34.916 "listen_address": { 00:13:34.916 "trtype": "tcp", 00:13:34.916 "traddr": "", 00:13:34.916 "trsvcid": "4421" 00:13:34.916 }, 00:13:34.916 "method": "nvmf_subsystem_remove_listener", 00:13:34.916 "req_id": 1 00:13:34.916 } 00:13:34.916 Got JSON-RPC error response 00:13:34.916 response: 00:13:34.916 { 00:13:34.916 "code": -32602, 00:13:34.916 "message": "Invalid parameters" 00:13:34.916 }' 00:13:34.916 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:13:34.916 { 00:13:34.916 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:34.916 "listen_address": { 00:13:34.916 "trtype": "tcp", 00:13:34.916 "traddr": "", 00:13:34.916 "trsvcid": "4421" 00:13:34.916 }, 00:13:34.916 "method": "nvmf_subsystem_remove_listener", 00:13:34.916 "req_id": 1 00:13:34.916 } 00:13:34.916 Got JSON-RPC error response 00:13:34.916 response: 00:13:34.916 { 00:13:34.916 "code": -32602, 00:13:34.916 "message": "Invalid parameters" 00:13:34.916 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:34.916 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19036 -i 0 00:13:35.174 [2024-10-08 18:24:03.460025] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19036: invalid cntlid range [0-65519] 00:13:35.174 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:13:35.174 { 00:13:35.174 "nqn": "nqn.2016-06.io.spdk:cnode19036", 00:13:35.174 "min_cntlid": 0, 00:13:35.174 "method": "nvmf_create_subsystem", 00:13:35.174 "req_id": 1 00:13:35.174 } 00:13:35.174 Got JSON-RPC error response 00:13:35.174 response: 00:13:35.174 { 00:13:35.174 "code": -32602, 00:13:35.174 "message": "Invalid cntlid range [0-65519]" 00:13:35.174 }' 00:13:35.174 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:13:35.174 { 00:13:35.174 
"nqn": "nqn.2016-06.io.spdk:cnode19036", 00:13:35.174 "min_cntlid": 0, 00:13:35.174 "method": "nvmf_create_subsystem", 00:13:35.174 "req_id": 1 00:13:35.174 } 00:13:35.174 Got JSON-RPC error response 00:13:35.174 response: 00:13:35.174 { 00:13:35.174 "code": -32602, 00:13:35.174 "message": "Invalid cntlid range [0-65519]" 00:13:35.174 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:35.174 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1747 -i 65520 00:13:35.740 [2024-10-08 18:24:04.009793] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1747: invalid cntlid range [65520-65519] 00:13:35.740 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:13:35.740 { 00:13:35.741 "nqn": "nqn.2016-06.io.spdk:cnode1747", 00:13:35.741 "min_cntlid": 65520, 00:13:35.741 "method": "nvmf_create_subsystem", 00:13:35.741 "req_id": 1 00:13:35.741 } 00:13:35.741 Got JSON-RPC error response 00:13:35.741 response: 00:13:35.741 { 00:13:35.741 "code": -32602, 00:13:35.741 "message": "Invalid cntlid range [65520-65519]" 00:13:35.741 }' 00:13:35.741 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:13:35.741 { 00:13:35.741 "nqn": "nqn.2016-06.io.spdk:cnode1747", 00:13:35.741 "min_cntlid": 65520, 00:13:35.741 "method": "nvmf_create_subsystem", 00:13:35.741 "req_id": 1 00:13:35.741 } 00:13:35.741 Got JSON-RPC error response 00:13:35.741 response: 00:13:35.741 { 00:13:35.741 "code": -32602, 00:13:35.741 "message": "Invalid cntlid range [65520-65519]" 00:13:35.741 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:35.741 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31486 -I 0 00:13:36.307 [2024-10-08 18:24:04.543550] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31486: invalid cntlid range [1-0] 00:13:36.307 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:13:36.307 { 00:13:36.307 "nqn": "nqn.2016-06.io.spdk:cnode31486", 00:13:36.307 "max_cntlid": 0, 00:13:36.307 "method": "nvmf_create_subsystem", 00:13:36.307 "req_id": 1 00:13:36.307 } 00:13:36.307 Got JSON-RPC error response 00:13:36.307 response: 00:13:36.307 { 00:13:36.307 "code": -32602, 00:13:36.307 "message": "Invalid cntlid range [1-0]" 00:13:36.307 }' 00:13:36.307 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:13:36.307 { 00:13:36.307 "nqn": "nqn.2016-06.io.spdk:cnode31486", 00:13:36.307 "max_cntlid": 0, 00:13:36.307 "method": "nvmf_create_subsystem", 00:13:36.307 "req_id": 1 00:13:36.307 } 00:13:36.307 Got JSON-RPC error response 00:13:36.307 response: 00:13:36.307 { 00:13:36.307 "code": -32602, 00:13:36.307 "message": "Invalid cntlid range [1-0]" 00:13:36.307 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:36.307 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27003 -I 65520 00:13:36.566 [2024-10-08 18:24:04.872626] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27003: invalid cntlid range [1-65520] 00:13:36.566 18:24:04 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:13:36.566 { 00:13:36.566 "nqn": "nqn.2016-06.io.spdk:cnode27003", 00:13:36.566 "max_cntlid": 65520, 00:13:36.566 "method": "nvmf_create_subsystem", 00:13:36.566 "req_id": 1 00:13:36.566 } 00:13:36.566 Got JSON-RPC error response 00:13:36.566 response: 00:13:36.566 { 00:13:36.566 "code": -32602, 00:13:36.566 "message": "Invalid cntlid range [1-65520]" 00:13:36.566 }' 00:13:36.566 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:13:36.566 { 00:13:36.566 "nqn": "nqn.2016-06.io.spdk:cnode27003", 00:13:36.566 "max_cntlid": 65520, 00:13:36.566 "method": "nvmf_create_subsystem", 00:13:36.566 "req_id": 1 00:13:36.566 } 00:13:36.566 Got JSON-RPC error response 00:13:36.566 response: 00:13:36.566 { 00:13:36.566 "code": -32602, 00:13:36.566 "message": "Invalid cntlid range [1-65520]" 00:13:36.566 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:36.566 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28587 -i 6 -I 5 00:13:37.132 [2024-10-08 18:24:05.422456] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28587: invalid cntlid range [6-5] 00:13:37.132 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:37.132 { 00:13:37.132 "nqn": "nqn.2016-06.io.spdk:cnode28587", 00:13:37.132 "min_cntlid": 6, 00:13:37.132 "max_cntlid": 5, 00:13:37.132 "method": "nvmf_create_subsystem", 00:13:37.132 "req_id": 1 00:13:37.132 } 00:13:37.132 Got JSON-RPC error response 00:13:37.132 response: 00:13:37.132 { 00:13:37.132 "code": -32602, 00:13:37.132 "message": "Invalid cntlid range [6-5]" 00:13:37.132 }' 00:13:37.132 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:13:37.132 { 00:13:37.132 "nqn": "nqn.2016-06.io.spdk:cnode28587", 00:13:37.132 "min_cntlid": 6, 00:13:37.132 "max_cntlid": 5, 00:13:37.132 "method": "nvmf_create_subsystem", 00:13:37.132 "req_id": 1 00:13:37.132 } 00:13:37.132 Got JSON-RPC error response 00:13:37.132 response: 00:13:37.132 { 00:13:37.132 "code": -32602, 00:13:37.132 "message": "Invalid cntlid range [6-5]" 00:13:37.132 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:37.132 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:37.132 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:37.132 { 00:13:37.132 "name": "foobar", 00:13:37.132 "method": "nvmf_delete_target", 00:13:37.132 "req_id": 1 00:13:37.132 } 00:13:37.132 Got JSON-RPC error response 00:13:37.132 response: 00:13:37.132 { 00:13:37.132 "code": -32602, 00:13:37.132 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:37.132 }' 00:13:37.132 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:37.132 { 00:13:37.132 "name": "foobar", 00:13:37.132 "method": "nvmf_delete_target", 00:13:37.132 "req_id": 1 00:13:37.132 } 00:13:37.132 Got JSON-RPC error response 00:13:37.132 response: 00:13:37.132 { 00:13:37.132 "code": -32602, 00:13:37.132 "message": "The specified target doesn't exist, cannot delete it." 
00:13:37.132 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:37.132 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:37.132 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:37.132 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:37.132 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:13:37.132 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:37.132 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:13:37.132 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:37.132 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:37.132 rmmod nvme_tcp 00:13:37.132 rmmod nvme_fabrics 00:13:37.132 rmmod nvme_keyring 00:13:37.132 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:37.132 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:13:37.132 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:13:37.132 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@515 -- # '[' -n 1156518 ']' 00:13:37.132 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # killprocess 1156518 00:13:37.132 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 1156518 ']' 00:13:37.132 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 1156518 00:13:37.132 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:13:37.132 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:37.132 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1156518 00:13:37.390 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:37.390 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:37.390 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1156518' 00:13:37.390 killing process with pid 1156518 00:13:37.390 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 1156518 00:13:37.390 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 1156518 00:13:37.649 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:37.649 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:37.649 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:37.649 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:13:37.649 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # iptables-save 00:13:37.649 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:37.649 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 
-- # iptables-restore 00:13:37.649 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:37.649 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:37.649 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:37.649 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:37.649 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:40.179 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:40.179 00:13:40.179 real 0m12.661s 00:13:40.179 user 0m33.936s 00:13:40.179 sys 0m3.624s 00:13:40.179 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:40.179 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:40.179 ************************************ 00:13:40.179 END TEST nvmf_invalid 00:13:40.179 ************************************ 00:13:40.179 18:24:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:40.179 18:24:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:40.179 18:24:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:40.179 18:24:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:40.179 ************************************ 00:13:40.179 START TEST nvmf_connect_stress 00:13:40.179 ************************************ 00:13:40.179 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:40.179 * Looking for test storage... 
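
The nvmf_invalid block that ends above is pure negative testing of the target's JSON-RPC parameter validation: nvmf_create_subsystem is asked for a max_cntlid of 65520 (0xFFF0), which the target rejects as out of range, then for min_cntlid 6 with max_cntlid 5, and multitarget_rpc.py nvmf_delete_target is pointed at a target name that was never created. In each case the harness captures the client output and asserts that a -32602 error with the expected message came back, before tearing the target down (rmmod of nvme_tcp, nvme_fabrics and nvme_keyring, restoring iptables minus the SPDK_NVMF rules, and removing the namespace). Below is a minimal standalone sketch of that capture-and-match pattern, assuming an nvmf_tgt is already listening on the default RPC socket; the RPC path and the check_error helper are illustrative, not taken from the harness.

#!/usr/bin/env bash
# Sketch of the negative-test pattern traced above: issue an RPC that must fail,
# capture the JSON-RPC error text, and match it against the expected message.
RPC=/path/to/spdk/scripts/rpc.py        # assumed location; point at your SPDK checkout

check_error() {                         # $1 = expected error substring, rest = rpc.py arguments
    local expect=$1; shift
    local out
    if out=$("$RPC" "$@" 2>&1); then
        echo "FAIL: '$*' unexpectedly succeeded"; return 1
    fi
    [[ $out == *"$expect"* ]] || { echo "FAIL: unexpected error: $out"; return 1; }
    echo "OK: '$*' rejected as expected"
}

# max_cntlid 65520 is outside the accepted controller ID range -> must be rejected
check_error 'Invalid cntlid range' nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27003 -I 65520
# min_cntlid greater than max_cntlid -> must be rejected
check_error 'Invalid cntlid range' nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28587 -i 6 -I 5

The same idea covers the nvmf_delete_target case: run it against a name that does not exist and match on "The specified target doesn't exist".
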
00:13:40.179 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:40.179 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:40.179 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:13:40.179 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:40.179 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:40.179 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:40.179 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:40.179 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:40.179 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:13:40.179 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:13:40.179 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:13:40.179 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:13:40.179 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:13:40.179 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:13:40.179 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:13:40.179 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:40.180 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:13:40.180 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:13:40.180 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:40.180 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:40.180 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:13:40.180 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:13:40.180 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:40.180 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:13:40.180 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:13:40.180 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:13:40.180 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:13:40.180 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:40.180 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:13:40.180 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:13:40.180 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:40.180 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:40.180 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:13:40.180 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:40.180 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:40.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:40.180 --rc genhtml_branch_coverage=1 00:13:40.180 --rc genhtml_function_coverage=1 00:13:40.180 --rc genhtml_legend=1 00:13:40.180 --rc geninfo_all_blocks=1 00:13:40.180 --rc geninfo_unexecuted_blocks=1 00:13:40.180 00:13:40.180 ' 00:13:40.180 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:40.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:40.180 --rc genhtml_branch_coverage=1 00:13:40.180 --rc genhtml_function_coverage=1 00:13:40.180 --rc genhtml_legend=1 00:13:40.180 --rc geninfo_all_blocks=1 00:13:40.180 --rc geninfo_unexecuted_blocks=1 00:13:40.180 00:13:40.180 ' 00:13:40.180 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:40.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:40.180 --rc genhtml_branch_coverage=1 00:13:40.180 --rc genhtml_function_coverage=1 00:13:40.180 --rc genhtml_legend=1 00:13:40.180 --rc geninfo_all_blocks=1 00:13:40.180 --rc geninfo_unexecuted_blocks=1 00:13:40.180 00:13:40.180 ' 00:13:40.180 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:40.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:40.180 --rc genhtml_branch_coverage=1 00:13:40.180 --rc genhtml_function_coverage=1 00:13:40.180 --rc genhtml_legend=1 00:13:40.180 --rc geninfo_all_blocks=1 00:13:40.180 --rc geninfo_unexecuted_blocks=1 00:13:40.180 00:13:40.180 ' 00:13:40.180 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:40.180 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:40.180 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:40.180 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:40.180 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:40.180 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:40.180 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:40.180 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:40.180 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:40.180 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:40.180 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:40.180 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:40.180 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:40.180 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:40.180 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:40.180 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:40.180 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:40.180 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:40.180 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:40.180 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:13:40.180 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:40.180 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:40.180 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:40.180 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.180 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.180 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.180 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:40.180 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.180 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:13:40.180 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:40.180 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:40.180 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:40.180 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:40.180 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:40.180 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:13:40.180 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:40.180 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:40.180 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:40.180 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:40.180 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:40.180 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:40.180 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:40.180 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:40.180 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:40.181 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:40.181 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:40.181 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:40.181 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:40.181 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:40.181 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:40.181 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:13:40.181 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:42.711 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:42.711 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:13:42.711 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:42.711 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:42.711 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:42.711 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:42.711 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:42.711 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:13:42.711 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:42.711 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:13:42.711 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:13:42.711 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:13:42.711 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:13:42.711 18:24:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:13:42.711 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:13:42.711 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:42.711 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:42.711 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:42.711 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:42.711 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:42.711 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:42.711 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:42.711 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:42.711 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:42.711 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:42.711 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:42.711 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:42.711 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:42.711 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:42.711 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:42.711 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:42.711 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:42.711 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:42.711 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:42.711 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:42.711 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:42.711 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:42.711 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:42.711 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:42.711 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:42.711 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:42.711 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:42.712 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:42.712 Found net devices under 0000:84:00.0: cvl_0_0 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:42.712 Found net devices under 0000:84:00.1: cvl_0_1 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 
-- # net_devs+=("${pci_net_devs[@]}") 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:42.712 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:42.712 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:13:42.712 00:13:42.712 --- 10.0.0.2 ping statistics --- 00:13:42.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:42.712 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:42.712 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:42.712 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:13:42.712 00:13:42.712 --- 10.0.0.1 ping statistics --- 00:13:42.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:42.712 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # return 0 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # nvmfpid=1160200 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # waitforlisten 1160200 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 1160200 ']' 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:13:42.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:42.712 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:42.971 [2024-10-08 18:24:11.266094] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:13:42.971 [2024-10-08 18:24:11.266190] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:42.971 [2024-10-08 18:24:11.355521] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:42.971 [2024-10-08 18:24:11.493471] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:42.971 [2024-10-08 18:24:11.493545] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:42.971 [2024-10-08 18:24:11.493566] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:42.971 [2024-10-08 18:24:11.493583] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:42.971 [2024-10-08 18:24:11.493598] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:42.971 [2024-10-08 18:24:11.494885] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:13:42.971 [2024-10-08 18:24:11.494963] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:13:42.971 [2024-10-08 18:24:11.494967] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:13:44.346 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:44.346 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:13:44.346 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:44.346 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:44.346 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:44.346 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:44.346 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:44.346 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.346 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:44.346 [2024-10-08 18:24:12.663277] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:44.346 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.346 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:44.346 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 
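
At this point nvmftestinit has moved the target NIC (cvl_0_0) into its own network namespace (cvl_0_0_ns_spdk) at 10.0.0.2/24, left the initiator side (cvl_0_1) in the root namespace at 10.0.0.1/24, verified both directions with ping, started nvmf_tgt with core mask 0xE inside the namespace, and begun configuring it over /var/tmp/spdk.sock: a TCP transport and subsystem nqn.2016-06.io.spdk:cnode1 have just been created. The sketch below shows the equivalent bring-up done by hand with rpc.py. The transport and subsystem flags are copied from the trace; the listener and null bdev appear a few lines further on in the log, while the final namespace attach (nvmf_subsystem_add_ns) is the conventional closing step and is included here as an assumption, as is the rpc.py path.

# Hand-driven sketch of the target bring-up the connect_stress harness performs
# over JSON-RPC once nvmf_tgt is up. Flag values mirror the trace; the last RPC
# is the conventional final step, assumed rather than traced at this point.
RPC=/path/to/spdk/scripts/rpc.py                                  # assumed location

"$RPC" nvmf_create_transport -t tcp -o -u 8192                    # TCP transport, options as traced above
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
       -a -s SPDK00000000000001 -m 10                             # allow any host, fixed serial, up to 10 namespaces
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
       -t tcp -a 10.0.0.2 -s 4420                                 # listen on the namespaced target address
"$RPC" bdev_null_create NULL1 1000 512                            # 1000 MiB null bdev, 512-byte blocks
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1     # expose the bdev as a namespace (assumed step)

After this, connect_stress is launched against trtype:tcp traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 for 10 seconds, and the long run of "kill -0 1160362" lines that follows is the harness polling that process: signal 0 delivers nothing and only reports whether the PID still exists, so each iteration is a liveness check paired with another rpc_cmd invocation, until the stress tool exits and kill reports "No such process".
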
00:13:44.346 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:44.346 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.346 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:44.346 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.347 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:44.347 [2024-10-08 18:24:12.690756] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:44.347 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.347 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:44.347 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.347 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:44.347 NULL1 00:13:44.347 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.347 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1160362 00:13:44.347 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:44.347 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:44.347 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:44.347 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:44.347 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:44.347 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:44.347 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:44.347 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:44.347 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:44.347 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:44.347 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:44.347 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:44.347 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:44.347 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:44.347 18:24:12 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:44.347 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:44.347 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:44.347 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:44.347 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:44.347 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:44.347 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:44.347 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:44.347 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:44.347 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:44.347 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:44.347 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:44.347 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:44.347 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:44.347 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:44.347 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:44.347 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:44.347 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:44.347 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:44.347 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:44.347 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:44.347 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:44.347 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:44.347 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:44.347 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:44.347 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:44.347 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:44.347 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:44.347 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:44.347 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:44.347 18:24:12 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1160362 00:13:44.347 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:44.347 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.347 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:44.606 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.606 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1160362 00:13:44.606 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:44.606 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.606 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:44.863 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.864 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1160362 00:13:45.123 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:45.123 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.123 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.381 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.381 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1160362 00:13:45.381 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:45.381 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.381 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.640 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.640 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1160362 00:13:45.640 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:45.640 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.640 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.898 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.898 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1160362 00:13:45.898 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:45.898 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.898 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.156 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.156 18:24:14 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1160362 00:13:46.157 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.157 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.157 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.727 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.727 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1160362 00:13:46.727 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.727 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.727 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.987 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.987 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1160362 00:13:46.987 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.987 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.987 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.246 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.246 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1160362 00:13:47.246 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.246 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.246 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.504 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.504 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1160362 00:13:47.504 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.504 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.504 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.763 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.763 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1160362 00:13:47.763 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.763 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.763 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.329 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.329 18:24:16 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1160362 00:13:48.329 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.329 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.329 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.588 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.588 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1160362 00:13:48.588 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.588 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.588 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.847 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.847 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1160362 00:13:48.847 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.847 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.847 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.105 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.105 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1160362 00:13:49.105 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.105 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.105 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.363 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.363 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1160362 00:13:49.363 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.363 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.363 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.933 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.933 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1160362 00:13:49.933 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.933 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.933 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.191 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.191 18:24:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1160362 00:13:50.191 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.191 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.191 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.449 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.449 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1160362 00:13:50.449 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.449 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.449 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.709 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.709 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1160362 00:13:50.709 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.709 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.709 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.968 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.968 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1160362 00:13:50.968 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.968 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.968 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.536 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.536 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1160362 00:13:51.536 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.536 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.536 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.793 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.793 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1160362 00:13:51.793 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.793 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.793 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.052 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.052 18:24:20 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1160362 00:13:52.052 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.052 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.052 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.310 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.310 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1160362 00:13:52.310 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.310 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.310 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.879 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.879 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1160362 00:13:52.879 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.879 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.879 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.139 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.139 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1160362 00:13:53.139 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.139 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.139 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.398 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.398 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1160362 00:13:53.398 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.398 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.398 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.657 18:24:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.657 18:24:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1160362 00:13:53.657 18:24:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.657 18:24:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.657 18:24:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.917 18:24:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.917 18:24:22 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1160362 00:13:53.917 18:24:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.917 18:24:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.917 18:24:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.487 18:24:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.487 18:24:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1160362 00:13:54.487 18:24:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.487 18:24:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.487 18:24:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.487 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:54.747 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.747 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1160362 00:13:54.748 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1160362) - No such process 00:13:54.748 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1160362 00:13:54.748 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:54.748 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:54.748 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:54.748 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:54.748 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:13:54.748 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:54.748 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:13:54.748 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:54.748 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:54.748 rmmod nvme_tcp 00:13:54.748 rmmod nvme_fabrics 00:13:54.748 rmmod nvme_keyring 00:13:54.748 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:54.748 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:13:54.748 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:13:54.748 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@515 -- # '[' -n 1160200 ']' 00:13:54.748 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # killprocess 1160200 00:13:54.748 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 1160200 ']' 00:13:54.748 18:24:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 1160200 00:13:54.748 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:13:54.748 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:54.748 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1160200 00:13:54.748 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:54.748 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:54.748 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1160200' 00:13:54.748 killing process with pid 1160200 00:13:54.748 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 1160200 00:13:54.748 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 1160200 00:13:55.318 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:55.318 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:55.318 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:55.318 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:13:55.318 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # iptables-save 00:13:55.318 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # iptables-restore 00:13:55.318 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:55.318 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:55.318 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:55.318 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:55.318 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:55.318 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:57.241 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:57.241 00:13:57.241 real 0m17.426s 00:13:57.241 user 0m42.380s 00:13:57.241 sys 0m6.918s 00:13:57.241 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:57.241 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:57.241 ************************************ 00:13:57.241 END TEST nvmf_connect_stress 00:13:57.241 ************************************ 00:13:57.241 18:24:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:57.241 18:24:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:57.241 
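The connect_stress entries above all come from a two-line liveness loop in target/connect_stress.sh: line 34 probes the background stress process with kill -0, and line 35 issues an RPC while the process is still alive, until the probe finally reports "No such process" and the harness tears down. A minimal sketch of that pattern follows; the variable name, the sleep pacing, and the final echo are illustrative only, and the RPC payload is not visible in the xtrace output, so it is left as a bare rpc_cmd call.

STRESS_PID=1160362                        # PID of the stress process observed in this run
while kill -0 "$STRESS_PID" 2>/dev/null; do
    rpc_cmd                               # an RPC is issued on every pass; its payload does not appear in the trace
    sleep 1                               # assumed pacing; in the real loop the RPC round-trip provides the delay
done
echo "stress process $STRESS_PID has exited"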
18:24:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:57.241 18:24:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:57.241 ************************************ 00:13:57.241 START TEST nvmf_fused_ordering 00:13:57.241 ************************************ 00:13:57.241 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:57.241 * Looking for test storage... 00:13:57.241 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:57.241 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:57.241 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lcov --version 00:13:57.241 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:57.502 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:57.502 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:57.502 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:57.502 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:57.502 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:13:57.502 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:13:57.502 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:13:57.502 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:13:57.502 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:13:57.502 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:13:57.502 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:13:57.502 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:57.502 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:13:57.502 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:13:57.502 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:57.502 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:57.502 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:13:57.502 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:13:57.502 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:57.502 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:13:57.502 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:13:57.502 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:13:57.502 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:13:57.502 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:57.502 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:13:57.502 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:13:57.502 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:57.502 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:57.502 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:13:57.502 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:57.502 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:57.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:57.502 --rc genhtml_branch_coverage=1 00:13:57.502 --rc genhtml_function_coverage=1 00:13:57.502 --rc genhtml_legend=1 00:13:57.502 --rc geninfo_all_blocks=1 00:13:57.502 --rc geninfo_unexecuted_blocks=1 00:13:57.502 00:13:57.502 ' 00:13:57.502 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:57.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:57.503 --rc genhtml_branch_coverage=1 00:13:57.503 --rc genhtml_function_coverage=1 00:13:57.503 --rc genhtml_legend=1 00:13:57.503 --rc geninfo_all_blocks=1 00:13:57.503 --rc geninfo_unexecuted_blocks=1 00:13:57.503 00:13:57.503 ' 00:13:57.503 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:57.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:57.503 --rc genhtml_branch_coverage=1 00:13:57.503 --rc genhtml_function_coverage=1 00:13:57.503 --rc genhtml_legend=1 00:13:57.503 --rc geninfo_all_blocks=1 00:13:57.503 --rc geninfo_unexecuted_blocks=1 00:13:57.503 00:13:57.503 ' 00:13:57.503 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:57.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:57.503 --rc genhtml_branch_coverage=1 00:13:57.503 --rc genhtml_function_coverage=1 00:13:57.503 --rc genhtml_legend=1 00:13:57.503 --rc geninfo_all_blocks=1 00:13:57.503 --rc geninfo_unexecuted_blocks=1 00:13:57.503 00:13:57.503 ' 00:13:57.503 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:57.503 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:57.503 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:57.503 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:57.503 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:57.503 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:57.503 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:57.503 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:57.503 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:57.503 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:57.503 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:57.503 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:57.503 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:57.503 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:57.503 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:57.503 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:57.503 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:57.503 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:57.503 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:57.503 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:13:57.503 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:57.503 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:57.503 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:57.503 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.503 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.503 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.503 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:57.503 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.503 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:13:57.503 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:57.503 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:57.503 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:57.503 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:57.503 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:57.503 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:13:57.503 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:57.503 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:57.503 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:57.503 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:57.503 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:57.503 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:57.503 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:57.503 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:57.503 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:57.503 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:57.503 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:57.503 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:57.503 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:57.503 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:57.503 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:57.503 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:13:57.503 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:00.797 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:00.797 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:14:00.797 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:00.797 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:00.797 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:00.797 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:00.797 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:00.797 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:14:00.797 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:00.797 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:14:00.797 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:14:00.797 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:14:00.797 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:14:00.797 18:24:28 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:14:00.797 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:14:00.797 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:00.797 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:00.797 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:00.797 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:00.797 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:00.797 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:00.797 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:00.797 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:00.797 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:00.797 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:00.797 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:00.797 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:00.797 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:00.797 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:00.797 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:00.797 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:00.797 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:00.797 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:00.797 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:00.797 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:14:00.798 Found 0000:84:00.0 (0x8086 - 0x159b) 00:14:00.798 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:00.798 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:00.798 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:00.798 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:00.798 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:00.798 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:00.798 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:14:00.798 Found 0000:84:00.1 (0x8086 - 0x159b) 00:14:00.798 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:00.798 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:00.798 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:00.798 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:00.798 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:00.798 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:00.798 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:00.798 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:00.798 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:00.798 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:00.798 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:00.798 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:00.798 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:00.798 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:00.798 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:00.798 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:14:00.798 Found net devices under 0000:84:00.0: cvl_0_0 00:14:00.798 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:00.798 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:00.798 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:00.798 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:00.798 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:00.798 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:00.798 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:00.798 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:00.798 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:14:00.798 Found net devices under 0000:84:00.1: cvl_0_1 00:14:00.798 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 
-- # net_devs+=("${pci_net_devs[@]}") 00:14:00.798 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:14:00.798 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # is_hw=yes 00:14:00.798 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:14:00.798 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:14:00.798 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:14:00.798 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:00.798 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:00.798 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:00.798 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:00.798 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:00.798 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:00.798 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:00.798 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:00.798 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:00.798 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:00.798 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:00.798 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:00.798 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:00.798 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:00.798 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:00.798 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:00.798 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:00.798 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:00.798 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:00.798 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:00.798 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:00.798 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:00.798 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:00.798 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:00.798 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms 00:14:00.798 00:14:00.798 --- 10.0.0.2 ping statistics --- 00:14:00.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:00.798 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:14:00.798 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:00.798 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:00.798 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:14:00.798 00:14:00.798 --- 10.0.0.1 ping statistics --- 00:14:00.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:00.798 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:14:00.798 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:00.798 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # return 0 00:14:00.798 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:00.798 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:00.798 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:14:00.798 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:14:00.798 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:00.798 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:14:00.798 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:14:00.798 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:00.798 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:00.798 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:00.798 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:00.798 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # nvmfpid=1163671 00:14:00.798 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:00.798 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # waitforlisten 1163671 00:14:00.798 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 1163671 ']' 00:14:00.798 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:00.798 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:00.798 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:14:00.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:00.798 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:00.798 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:00.798 [2024-10-08 18:24:29.074063] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:14:00.798 [2024-10-08 18:24:29.074166] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:00.798 [2024-10-08 18:24:29.188079] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:01.057 [2024-10-08 18:24:29.407927] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:01.057 [2024-10-08 18:24:29.408070] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:01.057 [2024-10-08 18:24:29.408109] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:01.057 [2024-10-08 18:24:29.408140] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:01.057 [2024-10-08 18:24:29.408166] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:01.057 [2024-10-08 18:24:29.409547] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:14:01.057 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:01.057 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:14:01.057 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:01.057 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:01.057 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:01.316 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:01.316 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:01.316 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.316 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:01.316 [2024-10-08 18:24:29.625331] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:01.316 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.316 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:01.316 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.316 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:01.316 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:14:01.316 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:01.316 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.316 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:01.316 [2024-10-08 18:24:29.641571] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:01.316 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.316 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:01.316 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.316 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:01.316 NULL1 00:14:01.316 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.316 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:01.316 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.316 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:01.316 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.316 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:01.316 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.316 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:01.316 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.316 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:01.316 [2024-10-08 18:24:29.705864] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
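The rpc_cmd calls traced above configure the target end to end: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1, a listener on 10.0.0.2:4420, a null bdev NULL1, and that bdev attached as a namespace; the fused_ordering tool whose startup banner appears here is then pointed at the resulting listener. As a rough standalone equivalent, the same sequence can be driven with SPDK's scripts/rpc.py against an already running nvmf_tgt; the method names and arguments below are exactly the ones recorded in the trace, while the use of rpc.py over the default /var/tmp/spdk.sock RPC socket (instead of the test suite's rpc_cmd wrapper) is an assumption of this sketch.

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py bdev_null_create NULL1 1000 512     # null bdev; the initiator later reports it as a 1GB namespace with 512-byte blocks
scripts/rpc.py bdev_wait_for_examine
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'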
00:14:01.316 [2024-10-08 18:24:29.705959] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1163795 ] 00:14:01.884 Attached to nqn.2016-06.io.spdk:cnode1 00:14:01.884 Namespace ID: 1 size: 1GB 00:14:01.884 fused_ordering(0) 00:14:01.884 fused_ordering(1) 00:14:01.884 fused_ordering(2) 00:14:01.884 fused_ordering(3) 00:14:01.884 fused_ordering(4) 00:14:01.884 fused_ordering(5) 00:14:01.884 fused_ordering(6) 00:14:01.884 fused_ordering(7) 00:14:01.884 fused_ordering(8) 00:14:01.884 fused_ordering(9) 00:14:01.884 fused_ordering(10) 00:14:01.884 fused_ordering(11) 00:14:01.884 fused_ordering(12) 00:14:01.884 fused_ordering(13) 00:14:01.884 fused_ordering(14) 00:14:01.884 fused_ordering(15) 00:14:01.884 fused_ordering(16) 00:14:01.884 fused_ordering(17) 00:14:01.884 fused_ordering(18) 00:14:01.884 fused_ordering(19) 00:14:01.884 fused_ordering(20) 00:14:01.884 fused_ordering(21) 00:14:01.884 fused_ordering(22) 00:14:01.884 fused_ordering(23) 00:14:01.884 fused_ordering(24) 00:14:01.884 fused_ordering(25) 00:14:01.884 fused_ordering(26) 00:14:01.884 fused_ordering(27) 00:14:01.884 fused_ordering(28) 00:14:01.884 fused_ordering(29) 00:14:01.884 fused_ordering(30) 00:14:01.884 fused_ordering(31) 00:14:01.884 fused_ordering(32) 00:14:01.884 fused_ordering(33) 00:14:01.884 fused_ordering(34) 00:14:01.884 fused_ordering(35) 00:14:01.884 fused_ordering(36) 00:14:01.884 fused_ordering(37) 00:14:01.884 fused_ordering(38) 00:14:01.884 fused_ordering(39) 00:14:01.884 fused_ordering(40) 00:14:01.884 fused_ordering(41) 00:14:01.884 fused_ordering(42) 00:14:01.884 fused_ordering(43) 00:14:01.884 fused_ordering(44) 00:14:01.884 fused_ordering(45) 00:14:01.884 fused_ordering(46) 00:14:01.884 fused_ordering(47) 00:14:01.884 fused_ordering(48) 00:14:01.884 fused_ordering(49) 00:14:01.884 fused_ordering(50) 00:14:01.884 fused_ordering(51) 00:14:01.884 fused_ordering(52) 00:14:01.884 fused_ordering(53) 00:14:01.884 fused_ordering(54) 00:14:01.884 fused_ordering(55) 00:14:01.884 fused_ordering(56) 00:14:01.884 fused_ordering(57) 00:14:01.884 fused_ordering(58) 00:14:01.884 fused_ordering(59) 00:14:01.884 fused_ordering(60) 00:14:01.884 fused_ordering(61) 00:14:01.884 fused_ordering(62) 00:14:01.884 fused_ordering(63) 00:14:01.884 fused_ordering(64) 00:14:01.884 fused_ordering(65) 00:14:01.884 fused_ordering(66) 00:14:01.884 fused_ordering(67) 00:14:01.884 fused_ordering(68) 00:14:01.884 fused_ordering(69) 00:14:01.884 fused_ordering(70) 00:14:01.884 fused_ordering(71) 00:14:01.884 fused_ordering(72) 00:14:01.884 fused_ordering(73) 00:14:01.884 fused_ordering(74) 00:14:01.884 fused_ordering(75) 00:14:01.884 fused_ordering(76) 00:14:01.884 fused_ordering(77) 00:14:01.884 fused_ordering(78) 00:14:01.884 fused_ordering(79) 00:14:01.884 fused_ordering(80) 00:14:01.884 fused_ordering(81) 00:14:01.884 fused_ordering(82) 00:14:01.884 fused_ordering(83) 00:14:01.884 fused_ordering(84) 00:14:01.884 fused_ordering(85) 00:14:01.884 fused_ordering(86) 00:14:01.884 fused_ordering(87) 00:14:01.884 fused_ordering(88) 00:14:01.884 fused_ordering(89) 00:14:01.884 fused_ordering(90) 00:14:01.884 fused_ordering(91) 00:14:01.884 fused_ordering(92) 00:14:01.884 fused_ordering(93) 00:14:01.884 fused_ordering(94) 00:14:01.884 fused_ordering(95) 00:14:01.884 fused_ordering(96) 00:14:01.884 fused_ordering(97) 00:14:01.884 fused_ordering(98) 
00:14:01.884 fused_ordering(99) ... 00:14:06.076 fused_ordering(958) [per-tag fused_ordering output condensed: tags 99 through 958 were each logged once, in ascending order, between 00:14:01.884 and 00:14:06.076]
00:14:06.076 fused_ordering(959) 00:14:06.076 fused_ordering(960) 00:14:06.076 fused_ordering(961) 00:14:06.076 fused_ordering(962) 00:14:06.076 fused_ordering(963) 00:14:06.076 fused_ordering(964) 00:14:06.076 fused_ordering(965) 00:14:06.076 fused_ordering(966) 00:14:06.076 fused_ordering(967) 00:14:06.076 fused_ordering(968) 00:14:06.076 fused_ordering(969) 00:14:06.076 fused_ordering(970) 00:14:06.076 fused_ordering(971) 00:14:06.076 fused_ordering(972) 00:14:06.076 fused_ordering(973) 00:14:06.076 fused_ordering(974) 00:14:06.076 fused_ordering(975) 00:14:06.076 fused_ordering(976) 00:14:06.076 fused_ordering(977) 00:14:06.076 fused_ordering(978) 00:14:06.076 fused_ordering(979) 00:14:06.076 fused_ordering(980) 00:14:06.076 fused_ordering(981) 00:14:06.076 fused_ordering(982) 00:14:06.076 fused_ordering(983) 00:14:06.076 fused_ordering(984) 00:14:06.076 fused_ordering(985) 00:14:06.076 fused_ordering(986) 00:14:06.076 fused_ordering(987) 00:14:06.076 fused_ordering(988) 00:14:06.076 fused_ordering(989) 00:14:06.076 fused_ordering(990) 00:14:06.076 fused_ordering(991) 00:14:06.076 fused_ordering(992) 00:14:06.076 fused_ordering(993) 00:14:06.076 fused_ordering(994) 00:14:06.076 fused_ordering(995) 00:14:06.076 fused_ordering(996) 00:14:06.076 fused_ordering(997) 00:14:06.076 fused_ordering(998) 00:14:06.076 fused_ordering(999) 00:14:06.076 fused_ordering(1000) 00:14:06.076 fused_ordering(1001) 00:14:06.076 fused_ordering(1002) 00:14:06.076 fused_ordering(1003) 00:14:06.076 fused_ordering(1004) 00:14:06.076 fused_ordering(1005) 00:14:06.076 fused_ordering(1006) 00:14:06.076 fused_ordering(1007) 00:14:06.076 fused_ordering(1008) 00:14:06.076 fused_ordering(1009) 00:14:06.076 fused_ordering(1010) 00:14:06.076 fused_ordering(1011) 00:14:06.076 fused_ordering(1012) 00:14:06.076 fused_ordering(1013) 00:14:06.076 fused_ordering(1014) 00:14:06.076 fused_ordering(1015) 00:14:06.076 fused_ordering(1016) 00:14:06.076 fused_ordering(1017) 00:14:06.076 fused_ordering(1018) 00:14:06.076 fused_ordering(1019) 00:14:06.076 fused_ordering(1020) 00:14:06.076 fused_ordering(1021) 00:14:06.076 fused_ordering(1022) 00:14:06.076 fused_ordering(1023) 00:14:06.076 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:06.076 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:06.076 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@514 -- # nvmfcleanup 00:14:06.076 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:14:06.076 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:06.076 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:14:06.076 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:06.076 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:06.076 rmmod nvme_tcp 00:14:06.076 rmmod nvme_fabrics 00:14:06.076 rmmod nvme_keyring 00:14:06.077 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:06.077 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:14:06.077 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:14:06.077 18:24:34 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@515 -- # '[' -n 1163671 ']' 00:14:06.077 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # killprocess 1163671 00:14:06.077 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 1163671 ']' 00:14:06.077 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 1163671 00:14:06.077 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:14:06.077 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:06.077 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1163671 00:14:06.077 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:06.077 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:06.077 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1163671' 00:14:06.077 killing process with pid 1163671 00:14:06.077 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 1163671 00:14:06.077 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 1163671 00:14:06.652 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:14:06.652 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:14:06.652 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:14:06.652 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:14:06.652 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # iptables-save 00:14:06.652 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:14:06.652 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # iptables-restore 00:14:06.652 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:06.652 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:06.652 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:06.652 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:06.652 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:08.608 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:08.608 00:14:08.608 real 0m11.337s 00:14:08.608 user 0m9.202s 00:14:08.608 sys 0m5.497s 00:14:08.608 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:08.608 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:08.608 ************************************ 00:14:08.608 END TEST nvmf_fused_ordering 00:14:08.608 
************************************ 00:14:08.608 18:24:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:08.608 18:24:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:08.608 18:24:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:08.608 18:24:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:08.608 ************************************ 00:14:08.608 START TEST nvmf_ns_masking 00:14:08.608 ************************************ 00:14:08.608 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:08.608 * Looking for test storage... 00:14:08.608 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:08.608 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:08.608 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lcov --version 00:14:08.608 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:08.869 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:08.869 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:08.869 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:08.869 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:08.869 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:14:08.869 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:14:08.869 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:14:08.869 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:14:08.869 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:14:08.869 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:14:08.869 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:14:08.869 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:08.869 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:14:08.869 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:14:08.869 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:08.869 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:08.869 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:14:08.869 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:14:08.869 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:08.869 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:14:08.869 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:14:08.869 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:14:08.869 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:14:08.869 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:08.869 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:14:08.869 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:14:08.869 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:08.869 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:08.869 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:14:08.869 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:08.869 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:08.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.869 --rc genhtml_branch_coverage=1 00:14:08.869 --rc genhtml_function_coverage=1 00:14:08.869 --rc genhtml_legend=1 00:14:08.869 --rc geninfo_all_blocks=1 00:14:08.869 --rc geninfo_unexecuted_blocks=1 00:14:08.869 00:14:08.869 ' 00:14:08.869 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:08.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.869 --rc genhtml_branch_coverage=1 00:14:08.869 --rc genhtml_function_coverage=1 00:14:08.869 --rc genhtml_legend=1 00:14:08.869 --rc geninfo_all_blocks=1 00:14:08.869 --rc geninfo_unexecuted_blocks=1 00:14:08.869 00:14:08.869 ' 00:14:08.869 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:08.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.869 --rc genhtml_branch_coverage=1 00:14:08.869 --rc genhtml_function_coverage=1 00:14:08.869 --rc genhtml_legend=1 00:14:08.869 --rc geninfo_all_blocks=1 00:14:08.869 --rc geninfo_unexecuted_blocks=1 00:14:08.869 00:14:08.869 ' 00:14:08.869 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:08.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.869 --rc genhtml_branch_coverage=1 00:14:08.869 --rc genhtml_function_coverage=1 00:14:08.869 --rc genhtml_legend=1 00:14:08.869 --rc geninfo_all_blocks=1 00:14:08.869 --rc geninfo_unexecuted_blocks=1 00:14:08.869 00:14:08.869 ' 00:14:08.869 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:08.869 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:14:08.869 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:08.869 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:08.869 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:08.869 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:08.869 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:08.869 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:08.869 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:08.869 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:08.869 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:08.869 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:08.869 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:08.869 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:08.869 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:08.869 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:08.869 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:08.869 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:08.869 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:08.869 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:14:08.869 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:08.869 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:08.869 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:08.869 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.869 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.870 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.870 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:08.870 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.870 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:14:08.870 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:08.870 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:08.870 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:08.870 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:08.870 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:08.870 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:08.870 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:08.870 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:08.870 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:08.870 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:08.870 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:08.870 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:14:08.870 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:14:08.870 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:14:08.870 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=84081128-f888-4aa2-8d50-851382258425 00:14:08.870 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:14:08.870 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=ca31abaa-9f88-48d4-9960-5d79a9e7dfa7 00:14:08.870 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:08.870 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:14:08.870 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:14:08.870 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:14:08.870 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=45f6f323-aa57-45f8-b4f6-5cabdec08531 00:14:08.870 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:14:08.870 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:14:08.870 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:08.870 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # prepare_net_devs 00:14:08.870 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@436 -- # local -g is_hw=no 00:14:08.870 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # remove_spdk_ns 00:14:08.870 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:08.870 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:08.870 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:08.870 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:14:08.870 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:14:08.870 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:14:08.870 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:12.162 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:12.162 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:14:12.162 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:12.162 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:12.162 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:12.162 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:12.162 18:24:40 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:12.162 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:14:12.162 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:12.162 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:14:12.162 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:14:12.162 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:14:12.162 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:14:12.162 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:14:12.162 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:14:12.162 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:12.162 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:12.162 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:12.162 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:12.162 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:12.162 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:12.162 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:12.162 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:12.162 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:12.162 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:12.162 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:12.162 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:12.162 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:12.162 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:12.162 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:12.162 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:12.162 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:12.162 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:12.162 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:12.162 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:14:12.162 Found 0000:84:00.0 (0x8086 - 0x159b) 00:14:12.162 18:24:40 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:12.162 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:12.162 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:12.162 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:12.162 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:12.162 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:12.162 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:14:12.162 Found 0000:84:00.1 (0x8086 - 0x159b) 00:14:12.162 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:12.162 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:12.162 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:12.162 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:12.162 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:12.162 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:12.162 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:12.162 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:12.162 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:12.162 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:12.162 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:12.162 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:12.162 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:12.162 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:12.162 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:12.163 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:14:12.163 Found net devices under 0000:84:00.0: cvl_0_0 00:14:12.163 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:12.163 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:12.163 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:12.163 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:12.163 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:12.163 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ up == up ]] 
00:14:12.163 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:12.163 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:12.163 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:14:12.163 Found net devices under 0000:84:00.1: cvl_0_1 00:14:12.163 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:12.163 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:14:12.163 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # is_hw=yes 00:14:12.163 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:14:12.163 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:14:12.163 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:14:12.163 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:12.163 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:12.163 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:12.163 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:12.163 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:12.163 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:12.163 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:12.163 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:12.163 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:12.163 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:12.163 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:12.163 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:12.163 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:12.163 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:12.163 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:12.163 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:12.163 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:12.163 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:12.163 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:12.163 18:24:40 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:12.163 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:12.163 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:12.163 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:12.163 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:12.163 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.350 ms 00:14:12.163 00:14:12.163 --- 10.0.0.2 ping statistics --- 00:14:12.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:12.163 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:14:12.163 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:12.163 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:12.163 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:14:12.163 00:14:12.163 --- 10.0.0.1 ping statistics --- 00:14:12.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:12.163 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:14:12.163 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:12.163 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # return 0 00:14:12.163 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:12.163 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:12.163 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:14:12.163 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:14:12.163 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:12.163 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:14:12.163 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:14:12.163 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:14:12.163 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:12.163 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:12.163 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:12.163 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # nvmfpid=1166415 00:14:12.163 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:12.163 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # waitforlisten 1166415 00:14:12.163 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 1166415 ']' 00:14:12.163 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:12.163 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:12.163 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:12.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:12.163 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:12.163 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:12.163 [2024-10-08 18:24:40.373170] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:14:12.163 [2024-10-08 18:24:40.373333] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:12.163 [2024-10-08 18:24:40.511806] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:12.424 [2024-10-08 18:24:40.729143] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:12.424 [2024-10-08 18:24:40.729196] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:12.424 [2024-10-08 18:24:40.729213] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:12.424 [2024-10-08 18:24:40.729228] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:12.424 [2024-10-08 18:24:40.729239] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
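Condensed into a standalone script, the environment bring-up traced above amounts to roughly the following. This is a minimal sketch, not the harness itself: the interface names, addresses and port are the values from this run, the nvmf_tgt path is shortened relative to the SPDK checkout, and the iptables comment string is a simplified stand-in for the SPDK_NVMF tag the harness records.

#!/usr/bin/env bash
# Sketch of nvmf_tcp_init as exercised above: isolate the target NIC in its own
# network namespace, address both ends, open the NVMe/TCP port and start the
# target inside the namespace. Run as root.
set -euo pipefail

TARGET_IF=cvl_0_0            # moved into the namespace, served by nvmf_tgt
INITIATOR_IF=cvl_0_1         # stays in the default namespace, used by nvme-cli
NETNS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NETNS"
ip link set "$TARGET_IF" netns "$NETNS"
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NETNS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NETNS" ip link set "$TARGET_IF" up
ip netns exec "$NETNS" ip link set lo up

# Let NVMe/TCP traffic from the initiator side reach the listener port.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF: ns_masking setup'

# Sanity-check reachability in both directions before starting the target.
ping -c 1 10.0.0.2
ip netns exec "$NETNS" ping -c 1 10.0.0.1

# Start the NVMe-oF target inside the namespace (path relative to the SPDK repo).
ip netns exec "$NETNS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
nvmfpid=$!
echo "nvmf_tgt started as pid $nvmfpid"

With the target bound to 10.0.0.2 inside cvl_0_0_ns_spdk and the initiator tools left in the default namespace on 10.0.0.1, a single machine exercises a real NIC-to-NIC NVMe/TCP path, which is exactly what the two ping checks above confirm before the target application is launched.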
00:14:12.424 [2024-10-08 18:24:40.729964] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:12.424 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:12.424 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:14:12.424 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:12.424 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:12.424 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:12.684 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:12.684 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:13.254 [2024-10-08 18:24:41.611850] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:13.254 18:24:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:14:13.254 18:24:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:14:13.254 18:24:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:13.823 Malloc1 00:14:13.823 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:14.392 Malloc2 00:14:14.392 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:14.961 18:24:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:15.897 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:16.465 [2024-10-08 18:24:44.798352] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:16.465 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:14:16.465 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 45f6f323-aa57-45f8-b4f6-5cabdec08531 -a 10.0.0.2 -s 4420 -i 4 00:14:16.724 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:14:16.724 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:16.724 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:16.724 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:16.724 
18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:18.631 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:18.631 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:18.631 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:18.631 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:18.631 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:18.631 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:18.631 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:18.631 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:18.631 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:18.631 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:18.631 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:14:18.631 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:18.631 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:18.631 [ 0]:0x1 00:14:18.631 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:18.631 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:18.892 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=85eab2893a6a4c30bcb34918dcacb94e 00:14:18.892 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 85eab2893a6a4c30bcb34918dcacb94e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:18.892 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:19.462 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:14:19.462 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:19.462 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:19.462 [ 0]:0x1 00:14:19.462 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:19.462 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:19.462 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=85eab2893a6a4c30bcb34918dcacb94e 00:14:19.462 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 85eab2893a6a4c30bcb34918dcacb94e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:19.462 18:24:47 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:14:19.462 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:19.462 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:19.462 [ 1]:0x2 00:14:19.462 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:19.462 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:19.462 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=21fd6afd485a49b2b2222cf6bc579ae0 00:14:19.462 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 21fd6afd485a49b2b2222cf6bc579ae0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:19.462 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:14:19.462 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:19.721 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:19.721 18:24:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:20.291 18:24:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:20.867 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:14:20.867 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 45f6f323-aa57-45f8-b4f6-5cabdec08531 -a 10.0.0.2 -s 4420 -i 4 00:14:20.867 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:20.867 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:20.867 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:20.867 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:14:20.867 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:14:20.867 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:23.409 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:23.409 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:23.409 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:23.409 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:23.409 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:23.409 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # 
return 0 00:14:23.409 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:23.409 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:23.409 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:23.409 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:23.409 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:23.409 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:23.409 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:23.409 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:23.409 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:23.409 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:23.409 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:23.409 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:23.409 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:23.409 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:23.409 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:23.409 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:23.409 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:23.409 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:23.409 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:23.409 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:23.410 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:23.410 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:23.410 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:14:23.410 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:23.410 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:23.410 [ 0]:0x2 00:14:23.410 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:23.410 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:23.410 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=21fd6afd485a49b2b2222cf6bc579ae0 00:14:23.410 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 21fd6afd485a49b2b2222cf6bc579ae0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:23.410 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:23.669 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:23.669 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:23.669 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:23.669 [ 0]:0x1 00:14:23.669 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:23.669 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:23.669 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=85eab2893a6a4c30bcb34918dcacb94e 00:14:23.670 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 85eab2893a6a4c30bcb34918dcacb94e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:23.670 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:23.670 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:23.670 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:23.670 [ 1]:0x2 00:14:23.670 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:23.670 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:23.928 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=21fd6afd485a49b2b2222cf6bc579ae0 00:14:23.928 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 21fd6afd485a49b2b2222cf6bc579ae0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:23.928 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:24.186 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:24.186 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:24.186 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:24.186 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:24.186 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:24.187 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:24.187 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:24.187 18:24:52 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:24.187 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:24.187 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:24.187 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:24.187 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:24.187 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:24.187 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:24.187 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:24.187 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:24.187 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:24.187 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:24.187 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:24.187 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:24.187 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:24.187 [ 0]:0x2 00:14:24.187 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:24.187 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:24.187 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=21fd6afd485a49b2b2222cf6bc579ae0 00:14:24.187 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 21fd6afd485a49b2b2222cf6bc579ae0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:24.187 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:24.187 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:24.445 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:24.445 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:24.705 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:24.705 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 45f6f323-aa57-45f8-b4f6-5cabdec08531 -a 10.0.0.2 -s 4420 -i 4 00:14:24.965 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:24.965 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:24.965 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:24.965 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:14:24.965 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:14:24.965 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:26.876 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:26.876 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:26.876 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:26.876 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:14:26.876 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:26.876 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:26.876 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:26.876 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:26.876 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:26.876 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:26.876 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:26.876 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:26.876 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:26.876 [ 0]:0x1 00:14:26.876 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:26.876 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:27.135 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=85eab2893a6a4c30bcb34918dcacb94e 00:14:27.136 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 85eab2893a6a4c30bcb34918dcacb94e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:27.136 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:27.136 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:27.136 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:27.136 [ 1]:0x2 00:14:27.136 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:27.136 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:27.136 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=21fd6afd485a49b2b2222cf6bc579ae0 00:14:27.136 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 21fd6afd485a49b2b2222cf6bc579ae0 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:27.136 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:27.394 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:27.394 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:27.394 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:27.394 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:27.394 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:27.394 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:27.394 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:27.394 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:27.394 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:27.394 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:27.394 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:27.394 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:27.394 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:27.394 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:27.394 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:27.394 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:27.394 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:27.394 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:27.394 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:14:27.394 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:27.394 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:27.394 [ 0]:0x2 00:14:27.394 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:27.394 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:27.653 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=21fd6afd485a49b2b2222cf6bc579ae0 00:14:27.653 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 21fd6afd485a49b2b2222cf6bc579ae0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:27.653 18:24:55 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:27.653 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:27.653 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:27.653 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:27.653 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:27.653 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:27.653 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:27.653 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:27.653 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:27.653 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:27.653 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:27.653 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:27.912 [2024-10-08 18:24:56.255666] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:27.912 request: 00:14:27.912 { 00:14:27.912 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:27.912 "nsid": 2, 00:14:27.912 "host": "nqn.2016-06.io.spdk:host1", 00:14:27.912 "method": "nvmf_ns_remove_host", 00:14:27.912 "req_id": 1 00:14:27.912 } 00:14:27.912 Got JSON-RPC error response 00:14:27.912 response: 00:14:27.912 { 00:14:27.912 "code": -32602, 00:14:27.912 "message": "Invalid parameters" 00:14:27.912 } 00:14:27.912 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:27.912 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:27.912 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:27.912 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:27.912 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:14:27.912 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:27.912 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:27.912 18:24:56 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:27.912 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:27.912 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:27.912 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:27.912 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:27.912 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:27.912 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:27.912 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:27.912 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:27.912 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:27.912 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:27.912 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:27.912 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:27.912 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:27.912 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:27.912 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:27.912 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:27.912 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:27.912 [ 0]:0x2 00:14:27.912 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:27.912 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:28.171 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=21fd6afd485a49b2b2222cf6bc579ae0 00:14:28.171 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 21fd6afd485a49b2b2222cf6bc579ae0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:28.171 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:28.171 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:28.171 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:28.171 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1168431 00:14:28.171 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:28.172 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:28.172 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1168431 /var/tmp/host.sock 00:14:28.172 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 1168431 ']' 00:14:28.172 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:14:28.172 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:28.172 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:28.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:28.172 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:28.172 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:28.172 [2024-10-08 18:24:56.654592] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:14:28.172 [2024-10-08 18:24:56.654683] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1168431 ] 00:14:28.432 [2024-10-08 18:24:56.757146] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:28.691 [2024-10-08 18:24:56.976931] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:14:29.631 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:29.631 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:14:29.631 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:29.631 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:30.201 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 84081128-f888-4aa2-8d50-851382258425 00:14:30.201 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:14:30.201 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 84081128F8884AA28D50851382258425 -i 00:14:31.141 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid ca31abaa-9f88-48d4-9960-5d79a9e7dfa7 00:14:31.141 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:14:31.141 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g CA31ABAA9F8848D499605D79A9E7DFA7 -i 00:14:31.399 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:31.657 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:31.917 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:31.917 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:32.855 nvme0n1 00:14:32.855 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:32.855 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:33.424 nvme1n2 00:14:33.424 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:33.424 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:33.424 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:33.424 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:33.424 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:33.992 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:33.992 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:33.993 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:33.993 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:34.563 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 84081128-f888-4aa2-8d50-851382258425 == \8\4\0\8\1\1\2\8\-\f\8\8\8\-\4\a\a\2\-\8\d\5\0\-\8\5\1\3\8\2\2\5\8\4\2\5 ]] 00:14:34.563 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:34.563 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:34.563 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:34.823 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
ca31abaa-9f88-48d4-9960-5d79a9e7dfa7 == \c\a\3\1\a\b\a\a\-\9\f\8\8\-\4\8\d\4\-\9\9\6\0\-\5\d\7\9\a\9\e\7\d\f\a\7 ]] 00:14:34.823 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 1168431 00:14:34.823 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 1168431 ']' 00:14:34.823 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 1168431 00:14:34.823 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:14:34.823 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:34.823 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1168431 00:14:34.823 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:34.823 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:34.823 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1168431' 00:14:34.823 killing process with pid 1168431 00:14:34.823 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 1168431 00:14:34.823 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 1168431 00:14:35.762 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:35.762 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:14:35.762 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:14:35.762 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@514 -- # nvmfcleanup 00:14:35.762 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:14:35.762 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:35.762 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:14:35.762 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:35.762 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:35.762 rmmod nvme_tcp 00:14:36.022 rmmod nvme_fabrics 00:14:36.022 rmmod nvme_keyring 00:14:36.022 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:36.022 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:14:36.022 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:14:36.022 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@515 -- # '[' -n 1166415 ']' 00:14:36.022 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # killprocess 1166415 00:14:36.022 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 1166415 ']' 00:14:36.022 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 1166415 00:14:36.022 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@955 -- # uname 00:14:36.022 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:36.022 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1166415 00:14:36.022 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:36.022 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:36.022 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1166415' 00:14:36.023 killing process with pid 1166415 00:14:36.023 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 1166415 00:14:36.023 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 1166415 00:14:36.592 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:14:36.592 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:14:36.592 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:14:36.592 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:14:36.592 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # iptables-save 00:14:36.592 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:14:36.592 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # iptables-restore 00:14:36.592 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:36.592 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:36.592 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:36.592 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:36.592 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:38.498 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:38.498 00:14:38.498 real 0m29.947s 00:14:38.498 user 0m44.456s 00:14:38.498 sys 0m6.250s 00:14:38.498 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:38.498 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:38.498 ************************************ 00:14:38.498 END TEST nvmf_ns_masking 00:14:38.498 ************************************ 00:14:38.759 18:25:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:14:38.759 18:25:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:38.759 18:25:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:38.759 18:25:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:38.759 18:25:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 
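Stripped of the xtrace bookkeeping, the namespace-masking sequence that just finished reduces to the RPC/CLI flow below. It is a condensed sketch, not a replay of the run: the NQNs, serial, host ID, address and port are the ones from this log, rpc.py and nvme paths are shortened, and for illustration the sketch marks the second namespace as the masked one, whereas the run above re-added Malloc1 itself with --no-auto-visible and toggled its visibility.

#!/usr/bin/env bash
# Sketch of the ns_masking flow exercised above, condensed from the trace.
set -euo pipefail
rpc=./scripts/rpc.py                       # target RPC socket: /var/tmp/spdk.sock
NQN=nqn.2016-06.io.spdk:cnode1
HOST1=nqn.2016-06.io.spdk:host1

# Target-side provisioning: transport, two malloc bdevs, subsystem, listener.
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc1
$rpc bdev_malloc_create 64 512 -b Malloc2
$rpc nvmf_create_subsystem "$NQN" -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

# An auto-visible namespace is exported to every connected host ...
$rpc nvmf_subsystem_add_ns "$NQN" Malloc1 -n 1
# ... while a masked one is only exported to hosts that are explicitly granted.
$rpc nvmf_subsystem_add_ns "$NQN" Malloc2 -n 2 --no-auto-visible
$rpc nvmf_ns_add_host "$NQN" 2 "$HOST1"
# nvmf_ns_remove_host undoes the grant; calling it on an auto-visible namespace
# is rejected with -32602 "Invalid parameters", as seen in the trace above.

# Initiator side: connect and check which namespaces this host actually sees.
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n "$NQN" -q "$HOST1" \
    -I 45f6f323-aa57-45f8-b4f6-5cabdec08531 -i 4
nvme list-ns /dev/nvme0                               # lists only visible NSIDs
nvme id-ns /dev/nvme0 -n 0x2 -o json | jq -r .nguid   # all zeroes => masked

The trace also drives the same check from the target side: a second SPDK app on /var/tmp/host.sock attaches with bdev_nvme_attach_controller as host1 and host2 and uses bdev_get_bdevs to confirm that each host only sees the namespace whose NGUID it was granted (84081128... vs CA31ABAA... in this run).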
00:14:38.759 ************************************ 00:14:38.759 START TEST nvmf_nvme_cli 00:14:38.759 ************************************ 00:14:38.759 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:38.759 * Looking for test storage... 00:14:38.759 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:38.759 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:38.759 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # lcov --version 00:14:38.759 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:38.759 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:38.759 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:38.759 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:38.759 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:38.759 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:14:38.759 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:14:38.759 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:14:38.759 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:14:38.759 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:14:38.759 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:14:38.759 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:14:38.759 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:38.759 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:14:38.759 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:14:38.759 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:38.759 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:38.759 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:14:38.759 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:14:38.759 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:38.759 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:14:38.759 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:14:38.759 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:14:38.759 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:14:38.759 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:38.759 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:14:38.759 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:14:38.759 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:38.759 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:38.759 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:14:38.759 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:38.759 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:38.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.759 --rc genhtml_branch_coverage=1 00:14:38.759 --rc genhtml_function_coverage=1 00:14:38.759 --rc genhtml_legend=1 00:14:38.759 --rc geninfo_all_blocks=1 00:14:38.759 --rc geninfo_unexecuted_blocks=1 00:14:38.759 00:14:38.759 ' 00:14:38.759 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:38.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.759 --rc genhtml_branch_coverage=1 00:14:38.759 --rc genhtml_function_coverage=1 00:14:38.759 --rc genhtml_legend=1 00:14:38.759 --rc geninfo_all_blocks=1 00:14:38.759 --rc geninfo_unexecuted_blocks=1 00:14:38.759 00:14:38.759 ' 00:14:38.759 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:38.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.759 --rc genhtml_branch_coverage=1 00:14:38.759 --rc genhtml_function_coverage=1 00:14:38.759 --rc genhtml_legend=1 00:14:38.759 --rc geninfo_all_blocks=1 00:14:38.759 --rc geninfo_unexecuted_blocks=1 00:14:38.759 00:14:38.759 ' 00:14:38.759 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:38.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.759 --rc genhtml_branch_coverage=1 00:14:38.759 --rc genhtml_function_coverage=1 00:14:38.759 --rc genhtml_legend=1 00:14:38.759 --rc geninfo_all_blocks=1 00:14:38.759 --rc geninfo_unexecuted_blocks=1 00:14:38.759 00:14:38.759 ' 00:14:38.759 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:38.759 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
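The lt / cmp_versions helper being stepped through here compares dotted version strings field by field (the run above evaluates lt 1.15 2 to decide which lcov options to export). A hedged reconstruction of that logic, covering only the behaviour visible in the trace; the real scripts/common.sh helper also validates each field with decimal() and supports the other operators:

#!/usr/bin/env bash
# Minimal sketch of a dotted-version comparison in the spirit of the trace above.
lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    local IFS=.-                      # split on '.' and '-', as in the trace
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local op=$2 v a b
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        a=${ver1[v]:-0}               # missing fields compare as 0 (assumption)
        b=${ver2[v]:-0}
        if (( a > b )); then [[ $op == ">" ]]; return; fi
        if (( a < b )); then [[ $op == "<" ]]; return; fi
    done
    [[ $op == "=" ]]                  # every field equal
}

lt 1.15 2 && echo "lcov 1.15 predates 2"   # same branch the trace takes

The result of this check is what selects the LCOV_OPTS / lcov_rc_opt values exported in the surrounding lines; the rest of this preamble is the usual nvmf/common.sh environment setup for the nvme_cli test.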
00:14:38.759 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:38.759 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:38.759 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:38.759 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:38.759 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:38.759 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:38.759 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:38.759 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:38.759 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:38.759 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:39.020 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:39.020 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:39.020 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:39.020 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:39.020 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:39.020 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:39.020 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:39.020 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:14:39.020 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:39.020 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:39.020 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:39.020 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.020 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.020 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.020 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:39.020 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.020 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:14:39.020 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:39.020 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:39.020 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:39.020 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:39.020 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:39.020 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:39.020 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:39.020 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:39.020 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:39.020 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:39.020 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:39.020 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:39.020 18:25:07 
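Sourcing test/nvmf/common.sh (traced above) pins the well-known NVMe/TCP ports and derives a host identity from nvme gen-hostnqn; the UUID part of that NQN doubles as the host ID handed to nvme-cli later in the run. A rough sketch of the same setup, with only the variables visible in this trace (how the real file extracts the host ID is an assumption here):

  NVMF_PORT=4420
  NVMF_SECOND_PORT=4421
  NVMF_SERIAL=SPDKISFASTANDAWESOME
  NVME_HOSTNQN=$(nvme gen-hostnqn)       # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}        # assumption: host ID is the UUID after the last ':'
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
  NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn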
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:39.020 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:39.021 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:14:39.021 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:39.021 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # prepare_net_devs 00:14:39.021 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@436 -- # local -g is_hw=no 00:14:39.021 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # remove_spdk_ns 00:14:39.021 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:39.021 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:39.021 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:39.021 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:14:39.021 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:14:39.021 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:14:39.021 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:41.593 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:41.593 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:14:41.593 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:41.593 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:41.593 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:41.593 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:41.593 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:41.593 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:14:41.593 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:41.593 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:14:41.593 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:14:41.593 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:14:41.593 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:14:41.593 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:14:41.593 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:14:41.593 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:41.593 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:41.593 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:41.593 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:41.593 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:41.593 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:41.593 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:41.593 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:41.593 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:41.593 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:41.593 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:41.594 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:41.594 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:41.594 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:41.594 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:41.594 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:41.594 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:41.594 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:41.594 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:41.594 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:14:41.594 Found 0000:84:00.0 (0x8086 - 0x159b) 00:14:41.594 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:41.594 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:41.594 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:41.594 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:41.594 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:41.594 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:41.594 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:14:41.594 Found 0000:84:00.1 (0x8086 - 0x159b) 00:14:41.594 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:41.594 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:41.594 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:41.594 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:41.594 
18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:41.594 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:41.594 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:41.594 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:41.594 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:41.594 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:41.594 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:41.594 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:41.594 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:41.594 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:41.594 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:41.594 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:14:41.594 Found net devices under 0000:84:00.0: cvl_0_0 00:14:41.594 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:41.594 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:41.594 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:41.594 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:41.594 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:41.594 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:41.594 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:41.594 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:41.594 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:14:41.594 Found net devices under 0000:84:00.1: cvl_0_1 00:14:41.594 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:41.594 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:14:41.594 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # is_hw=yes 00:14:41.594 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:14:41.594 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:14:41.594 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:14:41.594 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:41.594 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:41.594 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
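The device scan above walks a table of Intel (0x8086) and Mellanox (0x15b3) IDs, keeps the two E810 ports (device 0x159b) at 0000:84:00.0 and 0000:84:00.1, and maps each PCI address to its kernel netdev through sysfs, yielding cvl_0_0 and cvl_0_1. A hand-rolled sketch of that resolution step (the real gather_supported_nvmf_pci_devs additionally filters on link state and on RDMA vs TCP transports):

  intel=0x8086
  for pci in /sys/bus/pci/devices/*; do
      [[ $(cat "$pci/vendor") == "$intel" ]] || continue
      [[ $(cat "$pci/device") == 0x159b ]] || continue   # Intel E810 port
      for net in "$pci"/net/*; do
          [[ -e $net ]] && echo "Found net device ${net##*/} under ${pci##*/}"
      done
  done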
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:41.594 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:41.594 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:41.594 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:41.594 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:41.594 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:41.594 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:41.594 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:41.594 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:41.594 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:41.594 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:41.594 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:41.594 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:41.594 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:41.594 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:41.594 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:41.594 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:41.594 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:41.594 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:41.594 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:41.594 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:41.594 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:41.594 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:14:41.594 00:14:41.594 --- 10.0.0.2 ping statistics --- 00:14:41.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:41.594 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:14:41.594 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:41.880 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
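nvmf_tcp_init, traced above, wires the two E810 ports back to back through a network namespace so target and initiator can talk over real hardware on one host: cvl_0_0 moves into cvl_0_0_ns_spdk and gets 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator side with 10.0.0.1, a tagged iptables rule opens port 4420, and a ping in each direction proves the path. The equivalent commands, lifted from this run (interface and namespace names as in the trace):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # allow NVMe/TCP in, tagged so the rule can be stripped during teardown
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator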
00:14:41.880 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.060 ms 00:14:41.880 00:14:41.880 --- 10.0.0.1 ping statistics --- 00:14:41.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:41.880 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:14:41.880 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:41.880 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # return 0 00:14:41.880 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:41.880 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:41.880 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:14:41.880 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:14:41.880 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:41.880 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:14:41.880 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:14:41.880 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:41.880 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:41.880 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:41.880 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:41.880 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # nvmfpid=1171456 00:14:41.880 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:41.880 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # waitforlisten 1171456 00:14:41.880 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 1171456 ']' 00:14:41.880 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:41.880 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:41.880 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:41.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:41.880 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:41.880 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:41.880 [2024-10-08 18:25:10.244018] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
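nvmfappstart then launches the SPDK target inside that namespace and blocks until its JSON-RPC socket answers. The launch as it appears in the trace, with waitforlisten reduced to a hypothetical polling loop (paths shortened to the spdk tree root; the real helper in autotest_common.sh also checks the PID stays alive):

  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &    # shm id 0, all trace groups, cores 0-3
  nvmfpid=$!
  # wait for the RPC listener on /var/tmp/spdk.sock
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
      sleep 0.5
  done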
00:14:41.880 [2024-10-08 18:25:10.244113] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:41.880 [2024-10-08 18:25:10.357030] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:42.164 [2024-10-08 18:25:10.582961] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:42.164 [2024-10-08 18:25:10.583066] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:42.164 [2024-10-08 18:25:10.583104] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:42.164 [2024-10-08 18:25:10.583133] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:42.164 [2024-10-08 18:25:10.583159] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:42.164 [2024-10-08 18:25:10.586711] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:14:42.164 [2024-10-08 18:25:10.586775] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:14:42.164 [2024-10-08 18:25:10.586880] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:14:42.164 [2024-10-08 18:25:10.586884] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:43.101 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:43.101 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:14:43.101 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:43.101 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:43.101 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:43.361 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:43.361 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:43.361 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.361 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:43.361 [2024-10-08 18:25:11.644434] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:43.361 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.361 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:43.361 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.361 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:43.361 Malloc0 00:14:43.361 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.361 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:43.361 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:43.361 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:43.361 Malloc1 00:14:43.361 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.361 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:43.361 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.361 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:43.361 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.361 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:43.361 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.361 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:43.361 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.361 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:43.361 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.361 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:43.361 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.361 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:43.361 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.361 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:43.361 [2024-10-08 18:25:11.730297] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:43.361 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.361 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:43.361 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.361 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:43.361 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.361 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 4420 00:14:43.361 00:14:43.361 Discovery Log Number of Records 2, Generation counter 2 00:14:43.361 =====Discovery Log Entry 0====== 00:14:43.361 trtype: tcp 00:14:43.361 adrfam: ipv4 00:14:43.361 subtype: current discovery subsystem 00:14:43.361 treq: not required 00:14:43.361 portid: 0 00:14:43.361 trsvcid: 4420 00:14:43.361 subnqn: 
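nvme_cli.sh configures the target entirely over JSON-RPC: a TCP transport, two 64 MiB malloc bdevs, one subsystem exposing both as namespaces, plus data and discovery listeners on 10.0.0.2:4420. Collected from the trace into a single sequence (rpc.py path shortened; rpc_cmd in the harness is a thin wrapper around it, and the -o/-i flags are copied verbatim from this run):

  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc bdev_malloc_create 64 512 -b Malloc1
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420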
nqn.2014-08.org.nvmexpress.discovery 00:14:43.361 traddr: 10.0.0.2 00:14:43.361 eflags: explicit discovery connections, duplicate discovery information 00:14:43.361 sectype: none 00:14:43.361 =====Discovery Log Entry 1====== 00:14:43.361 trtype: tcp 00:14:43.361 adrfam: ipv4 00:14:43.361 subtype: nvme subsystem 00:14:43.361 treq: not required 00:14:43.361 portid: 0 00:14:43.361 trsvcid: 4420 00:14:43.361 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:43.361 traddr: 10.0.0.2 00:14:43.361 eflags: none 00:14:43.361 sectype: none 00:14:43.361 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:43.361 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:43.361 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:14:43.361 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:43.361 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:14:43.361 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:14:43.361 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:43.361 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:14:43.361 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:43.361 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:43.361 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:44.299 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:44.299 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:14:44.299 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:44.299 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:14:44.299 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:14:44.299 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:14:46.208 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:46.208 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:46.208 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:46.208 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:14:46.208 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:46.208 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:14:46.208 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:46.208 18:25:14 
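On the initiator side the test drives stock nvme-cli against that listener: a discovery first, which reports the two log entries shown above (the discovery subsystem and cnode1), then a fabrics connect using the host NQN and host ID generated from common.sh. Roughly, assuming the NVME_HOSTNQN/NVME_HOSTID variables sketched earlier:

  nvme discover -t tcp -a 10.0.0.2 -s 4420 \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"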
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:14:46.208 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:46.208 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:14:46.208 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:14:46.208 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:46.208 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:14:46.208 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:46.208 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:46.208 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:14:46.208 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:46.208 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:46.208 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:14:46.208 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:46.208 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:14:46.208 /dev/nvme0n2 ]] 00:14:46.208 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:46.208 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:46.208 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:14:46.208 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:46.208 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:14:46.208 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:14:46.208 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:46.208 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:14:46.208 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:46.208 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:46.208 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:14:46.208 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:46.208 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:46.208 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:14:46.208 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:46.208 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:46.208 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:46.208 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:46.208 18:25:14 
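waitforserial polls lsblk until both namespaces of the connected controller show up carrying the subsystem serial, and get_nvme_devs scrapes nvme list for /dev/nvme* nodes (here /dev/nvme0n1 and /dev/nvme0n2); once two devices are seen the controller is dropped again. A compressed sketch of that check (the awk line is an equivalent of the read loop in the trace, not the harness code itself):

  want=2
  for ((i = 0; i <= 15; i++)); do
      got=$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)
      (( got == want )) && break
      sleep 2
  done
  nvme list | awk '$1 ~ "^/dev/nvme" {print $1}'   # -> /dev/nvme0n1 /dev/nvme0n2
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1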
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:46.208 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:14:46.208 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:46.208 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:46.208 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:46.208 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:46.469 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:14:46.469 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:46.469 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:46.469 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.469 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:46.469 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.469 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:46.469 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:46.469 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@514 -- # nvmfcleanup 00:14:46.469 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:14:46.469 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:46.469 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:14:46.469 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:46.469 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:46.469 rmmod nvme_tcp 00:14:46.469 rmmod nvme_fabrics 00:14:46.469 rmmod nvme_keyring 00:14:46.469 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:46.469 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:14:46.469 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:14:46.469 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@515 -- # '[' -n 1171456 ']' 00:14:46.469 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # killprocess 1171456 00:14:46.469 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 1171456 ']' 00:14:46.469 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 1171456 00:14:46.469 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:14:46.469 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:46.469 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 
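Teardown then mirrors the setup: the subsystem is deleted over RPC, the host-side NVMe/TCP modules are unloaded, the target process is killed, and the firewall rule and namespace added during init are removed. Gathered from the trace here and just below, with the namespace removal step being an assumption about what _remove_spdk_ns effectively does:

  rpc=./scripts/rpc.py
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  modprobe -v -r nvme-tcp        # the log shows nvme_tcp, nvme_fabrics and nvme_keyring unloading
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"                # killprocess in the harness also verifies the name and waits
  # iptr: strip only the SPDK_NVMF-tagged rule added earlier
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip netns delete cvl_0_0_ns_spdk   # assumption: what remove_spdk_ns boils down to here
  ip -4 addr flush cvl_0_1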
1171456 00:14:46.469 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:46.469 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:46.469 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1171456' 00:14:46.469 killing process with pid 1171456 00:14:46.469 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 1171456 00:14:46.469 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 1171456 00:14:47.038 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:14:47.038 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:14:47.038 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:14:47.038 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:14:47.039 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # iptables-save 00:14:47.039 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:14:47.039 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # iptables-restore 00:14:47.039 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:47.039 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:47.039 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:47.039 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:47.039 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:48.948 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:48.948 00:14:48.948 real 0m10.275s 00:14:48.948 user 0m19.751s 00:14:48.948 sys 0m3.058s 00:14:48.948 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:48.948 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:48.948 ************************************ 00:14:48.948 END TEST nvmf_nvme_cli 00:14:48.948 ************************************ 00:14:48.948 18:25:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:14:48.948 18:25:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:48.948 18:25:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:48.948 18:25:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:48.948 18:25:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:48.948 ************************************ 00:14:48.948 START TEST nvmf_vfio_user 00:14:48.948 ************************************ 00:14:48.948 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:14:49.208 * Looking for test storage... 00:14:49.208 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:49.208 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:49.208 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # lcov --version 00:14:49.208 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:49.208 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:49.208 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:49.208 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:49.208 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:49.208 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:14:49.208 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:14:49.208 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:14:49.208 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:14:49.208 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:14:49.208 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:14:49.208 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:14:49.208 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:49.208 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:14:49.208 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:14:49.208 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:49.208 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:49.208 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:14:49.208 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:14:49.208 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:49.208 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:14:49.208 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:14:49.208 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:14:49.208 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:14:49.208 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:49.208 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:14:49.208 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:14:49.208 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:49.208 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:49.208 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:14:49.208 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:49.470 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:49.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:49.470 --rc genhtml_branch_coverage=1 00:14:49.470 --rc genhtml_function_coverage=1 00:14:49.470 --rc genhtml_legend=1 00:14:49.470 --rc geninfo_all_blocks=1 00:14:49.470 --rc geninfo_unexecuted_blocks=1 00:14:49.470 00:14:49.470 ' 00:14:49.470 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:49.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:49.470 --rc genhtml_branch_coverage=1 00:14:49.470 --rc genhtml_function_coverage=1 00:14:49.470 --rc genhtml_legend=1 00:14:49.470 --rc geninfo_all_blocks=1 00:14:49.470 --rc geninfo_unexecuted_blocks=1 00:14:49.470 00:14:49.470 ' 00:14:49.470 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:49.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:49.470 --rc genhtml_branch_coverage=1 00:14:49.470 --rc genhtml_function_coverage=1 00:14:49.470 --rc genhtml_legend=1 00:14:49.470 --rc geninfo_all_blocks=1 00:14:49.470 --rc geninfo_unexecuted_blocks=1 00:14:49.470 00:14:49.470 ' 00:14:49.470 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:49.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:49.470 --rc genhtml_branch_coverage=1 00:14:49.470 --rc genhtml_function_coverage=1 00:14:49.470 --rc genhtml_legend=1 00:14:49.470 --rc geninfo_all_blocks=1 00:14:49.470 --rc geninfo_unexecuted_blocks=1 00:14:49.470 00:14:49.470 ' 00:14:49.470 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:49.470 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:14:49.470 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:49.470 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:49.470 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:49.470 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:49.470 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:49.470 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:49.470 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:49.470 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:49.470 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:49.470 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:49.470 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:49.470 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:49.470 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:49.470 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:49.470 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:49.470 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:49.470 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:49.470 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:14:49.470 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:49.470 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:49.470 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:49.470 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.470 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.470 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.470 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:49.470 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.470 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:14:49.470 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:49.470 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:49.470 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:49.470 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:49.470 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:49.470 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:49.470 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:49.470 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:49.470 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:49.470 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:49.470 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:49.470 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
00:14:49.470 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:49.470 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:49.470 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:49.470 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:49.470 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:49.470 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:49.470 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:49.470 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:49.470 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1172405 00:14:49.470 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:49.470 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1172405' 00:14:49.470 Process pid: 1172405 00:14:49.470 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:49.470 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1172405 00:14:49.470 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 1172405 ']' 00:14:49.470 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:49.470 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:49.470 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:49.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:49.470 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:49.470 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:49.470 [2024-10-08 18:25:17.880194] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:14:49.470 [2024-10-08 18:25:17.880372] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:49.731 [2024-10-08 18:25:18.069425] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:49.992 [2024-10-08 18:25:18.365130] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:49.992 [2024-10-08 18:25:18.365263] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
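Editor's note: the notices above (continuing just below) describe how to inspect the 0xFFFF tracepoint mask the nvmf target was started with. As a hedged sketch, assuming the spdk_trace binary lives under build/bin in the same workspace as the other tools in this log:

# Live snapshot of the running target's tracepoints (app name nvmf, shm id 0), as the notice suggests:
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_trace -s nvmf -i 0
# Or keep the shared-memory trace file for offline analysis/debug:
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0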
00:14:49.992 [2024-10-08 18:25:18.365331] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:49.992 [2024-10-08 18:25:18.365396] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:49.992 [2024-10-08 18:25:18.365451] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:49.992 [2024-10-08 18:25:18.370584] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:14:49.992 [2024-10-08 18:25:18.370731] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:14:49.992 [2024-10-08 18:25:18.370768] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:14:49.992 [2024-10-08 18:25:18.370777] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.252 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:50.252 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:14:50.252 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:51.189 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:51.450 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:51.450 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:51.450 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:51.450 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:51.450 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:52.017 Malloc1 00:14:52.278 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:52.846 18:25:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:53.415 18:25:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:53.983 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:53.983 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:53.983 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:54.241 Malloc2 00:14:54.241 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
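Editor's note: condensed from the trace above, the per-device setup that setup_nvmf_vfio_user performs (device 1 complete above, device 2 finishing just below) is a short rpc.py sequence. Commands, NQNs and paths are copied from the log; only the long workspace prefix is folded into a variable.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t VFIOUSER                       # once, before the per-device loop
mkdir -p /var/run/vfio-user/domain/vfio-user1/1              # socket directory for device 1
$rpc bdev_malloc_create 64 512 -b Malloc1                    # 64 MiB backing bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
$rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

The same five steps repeat with Malloc2, cnode2 and /var/run/vfio-user/domain/vfio-user2/2 for the second device, as the trace below shows.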
00:14:54.810 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:55.068 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:55.638 18:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:55.638 18:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:55.638 18:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:55.638 18:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:55.638 18:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:55.638 18:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:55.638 [2024-10-08 18:25:24.125248] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:14:55.638 [2024-10-08 18:25:24.125297] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1173101 ] 00:14:55.638 [2024-10-08 18:25:24.168380] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:55.638 [2024-10-08 18:25:24.175682] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:55.638 [2024-10-08 18:25:24.175729] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f7470a54000 00:14:55.899 [2024-10-08 18:25:24.176066] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:55.899 [2024-10-08 18:25:24.177064] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:55.899 [2024-10-08 18:25:24.178064] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:55.899 [2024-10-08 18:25:24.179073] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:55.899 [2024-10-08 18:25:24.180077] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:55.899 [2024-10-08 18:25:24.181083] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:55.899 [2024-10-08 18:25:24.182084] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:14:55.899 [2024-10-08 18:25:24.183091] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:55.899 [2024-10-08 18:25:24.184099] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:55.899 [2024-10-08 18:25:24.184120] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f7470a49000 00:14:55.899 [2024-10-08 18:25:24.185276] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:55.899 [2024-10-08 18:25:24.202986] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:55.899 [2024-10-08 18:25:24.203024] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:14:55.899 [2024-10-08 18:25:24.205230] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:55.899 [2024-10-08 18:25:24.205281] nvme_pcie_common.c: 149:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:55.899 [2024-10-08 18:25:24.205368] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:14:55.899 [2024-10-08 18:25:24.205397] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:14:55.899 [2024-10-08 18:25:24.205407] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:14:55.899 [2024-10-08 18:25:24.206223] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:55.899 [2024-10-08 18:25:24.206242] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:14:55.899 [2024-10-08 18:25:24.206254] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:14:55.899 [2024-10-08 18:25:24.207229] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:55.899 [2024-10-08 18:25:24.207248] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:14:55.899 [2024-10-08 18:25:24.207262] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:14:55.899 [2024-10-08 18:25:24.208241] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:55.899 [2024-10-08 18:25:24.208259] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:55.899 [2024-10-08 18:25:24.209234] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:14:55.899 [2024-10-08 
18:25:24.209252] nvme_ctrlr.c:3924:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:14:55.899 [2024-10-08 18:25:24.209261] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:14:55.899 [2024-10-08 18:25:24.209272] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:55.899 [2024-10-08 18:25:24.209381] nvme_ctrlr.c:4122:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:14:55.899 [2024-10-08 18:25:24.209389] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:55.899 [2024-10-08 18:25:24.209397] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:55.899 [2024-10-08 18:25:24.210242] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:55.899 [2024-10-08 18:25:24.211245] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:55.899 [2024-10-08 18:25:24.212248] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:55.899 [2024-10-08 18:25:24.213243] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:55.899 [2024-10-08 18:25:24.213381] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:55.899 [2024-10-08 18:25:24.214258] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:55.899 [2024-10-08 18:25:24.214276] nvme_ctrlr.c:3959:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:55.899 [2024-10-08 18:25:24.214285] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:14:55.899 [2024-10-08 18:25:24.214308] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:14:55.899 [2024-10-08 18:25:24.214326] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:14:55.899 [2024-10-08 18:25:24.214347] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:55.899 [2024-10-08 18:25:24.214356] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:55.899 [2024-10-08 18:25:24.214363] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:55.899 [2024-10-08 18:25:24.214381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:55.899 [2024-10-08 18:25:24.214459] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:55.899 [2024-10-08 18:25:24.214475] nvme_ctrlr.c:2097:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:14:55.899 [2024-10-08 18:25:24.214483] nvme_ctrlr.c:2101:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:14:55.899 [2024-10-08 18:25:24.214489] nvme_ctrlr.c:2104:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:14:55.899 [2024-10-08 18:25:24.214497] nvme_ctrlr.c:2115:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:55.899 [2024-10-08 18:25:24.214504] nvme_ctrlr.c:2128:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:14:55.899 [2024-10-08 18:25:24.214511] nvme_ctrlr.c:2143:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:14:55.899 [2024-10-08 18:25:24.214519] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:14:55.899 [2024-10-08 18:25:24.214534] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:14:55.899 [2024-10-08 18:25:24.214550] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:55.899 [2024-10-08 18:25:24.214569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:55.899 [2024-10-08 18:25:24.214585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:55.899 [2024-10-08 18:25:24.214597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:55.899 [2024-10-08 18:25:24.214608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:55.899 [2024-10-08 18:25:24.214620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:55.899 [2024-10-08 18:25:24.214646] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:55.899 [2024-10-08 18:25:24.214671] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:55.899 [2024-10-08 18:25:24.214686] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:55.899 [2024-10-08 18:25:24.214714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:55.899 [2024-10-08 18:25:24.214725] nvme_ctrlr.c:3065:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:14:55.899 [2024-10-08 18:25:24.214734] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:55.899 [2024-10-08 18:25:24.214745] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:14:55.899 [2024-10-08 18:25:24.214760] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:55.899 [2024-10-08 18:25:24.214775] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:55.899 [2024-10-08 18:25:24.214787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:55.899 [2024-10-08 18:25:24.214852] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:14:55.900 [2024-10-08 18:25:24.214867] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:55.900 [2024-10-08 18:25:24.214881] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:55.900 [2024-10-08 18:25:24.214889] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:55.900 [2024-10-08 18:25:24.214895] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:55.900 [2024-10-08 18:25:24.214905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:55.900 [2024-10-08 18:25:24.214924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:55.900 [2024-10-08 18:25:24.214955] nvme_ctrlr.c:4753:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:14:55.900 [2024-10-08 18:25:24.214980] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:14:55.900 [2024-10-08 18:25:24.214994] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:14:55.900 [2024-10-08 18:25:24.215006] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:55.900 [2024-10-08 18:25:24.215029] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:55.900 [2024-10-08 18:25:24.215034] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:55.900 [2024-10-08 18:25:24.215044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:55.900 [2024-10-08 18:25:24.215077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:55.900 [2024-10-08 18:25:24.215098] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:55.900 [2024-10-08 18:25:24.215117] 
nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:55.900 [2024-10-08 18:25:24.215129] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:55.900 [2024-10-08 18:25:24.215137] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:55.900 [2024-10-08 18:25:24.215143] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:55.900 [2024-10-08 18:25:24.215152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:55.900 [2024-10-08 18:25:24.215163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:55.900 [2024-10-08 18:25:24.215176] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:55.900 [2024-10-08 18:25:24.215187] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:14:55.900 [2024-10-08 18:25:24.215200] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:14:55.900 [2024-10-08 18:25:24.215210] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:14:55.900 [2024-10-08 18:25:24.215218] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:55.900 [2024-10-08 18:25:24.215225] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:14:55.900 [2024-10-08 18:25:24.215233] nvme_ctrlr.c:3165:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:14:55.900 [2024-10-08 18:25:24.215240] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:14:55.900 [2024-10-08 18:25:24.215248] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:14:55.900 [2024-10-08 18:25:24.215275] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:55.900 [2024-10-08 18:25:24.215288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:55.900 [2024-10-08 18:25:24.215305] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:55.900 [2024-10-08 18:25:24.215316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:55.900 [2024-10-08 18:25:24.215331] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:55.900 [2024-10-08 18:25:24.215347] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:55.900 [2024-10-08 18:25:24.215363] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:55.900 [2024-10-08 18:25:24.215373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:55.900 [2024-10-08 18:25:24.215395] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:55.900 [2024-10-08 18:25:24.215405] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:55.900 [2024-10-08 18:25:24.215414] nvme_pcie_common.c:1265:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:55.900 [2024-10-08 18:25:24.215421] nvme_pcie_common.c:1281:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:55.900 [2024-10-08 18:25:24.215426] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:55.900 [2024-10-08 18:25:24.215435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:55.900 [2024-10-08 18:25:24.215447] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:55.900 [2024-10-08 18:25:24.215455] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:55.900 [2024-10-08 18:25:24.215461] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:55.900 [2024-10-08 18:25:24.215469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:55.900 [2024-10-08 18:25:24.215480] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:55.900 [2024-10-08 18:25:24.215488] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:55.900 [2024-10-08 18:25:24.215494] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:55.900 [2024-10-08 18:25:24.215502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:55.900 [2024-10-08 18:25:24.215514] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:55.900 [2024-10-08 18:25:24.215521] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:55.900 [2024-10-08 18:25:24.215527] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:55.900 [2024-10-08 18:25:24.215536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:55.900 [2024-10-08 18:25:24.215547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:55.900 [2024-10-08 18:25:24.215566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:55.900 [2024-10-08 18:25:24.215583] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:55.900 [2024-10-08 18:25:24.215595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:55.900 ===================================================== 00:14:55.900 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:55.900 ===================================================== 00:14:55.900 Controller Capabilities/Features 00:14:55.900 ================================ 00:14:55.900 Vendor ID: 4e58 00:14:55.900 Subsystem Vendor ID: 4e58 00:14:55.900 Serial Number: SPDK1 00:14:55.900 Model Number: SPDK bdev Controller 00:14:55.900 Firmware Version: 25.01 00:14:55.900 Recommended Arb Burst: 6 00:14:55.900 IEEE OUI Identifier: 8d 6b 50 00:14:55.900 Multi-path I/O 00:14:55.900 May have multiple subsystem ports: Yes 00:14:55.900 May have multiple controllers: Yes 00:14:55.900 Associated with SR-IOV VF: No 00:14:55.900 Max Data Transfer Size: 131072 00:14:55.900 Max Number of Namespaces: 32 00:14:55.900 Max Number of I/O Queues: 127 00:14:55.900 NVMe Specification Version (VS): 1.3 00:14:55.900 NVMe Specification Version (Identify): 1.3 00:14:55.900 Maximum Queue Entries: 256 00:14:55.900 Contiguous Queues Required: Yes 00:14:55.900 Arbitration Mechanisms Supported 00:14:55.900 Weighted Round Robin: Not Supported 00:14:55.900 Vendor Specific: Not Supported 00:14:55.900 Reset Timeout: 15000 ms 00:14:55.900 Doorbell Stride: 4 bytes 00:14:55.900 NVM Subsystem Reset: Not Supported 00:14:55.900 Command Sets Supported 00:14:55.900 NVM Command Set: Supported 00:14:55.900 Boot Partition: Not Supported 00:14:55.900 Memory Page Size Minimum: 4096 bytes 00:14:55.900 Memory Page Size Maximum: 4096 bytes 00:14:55.900 Persistent Memory Region: Not Supported 00:14:55.900 Optional Asynchronous Events Supported 00:14:55.900 Namespace Attribute Notices: Supported 00:14:55.900 Firmware Activation Notices: Not Supported 00:14:55.900 ANA Change Notices: Not Supported 00:14:55.900 PLE Aggregate Log Change Notices: Not Supported 00:14:55.900 LBA Status Info Alert Notices: Not Supported 00:14:55.900 EGE Aggregate Log Change Notices: Not Supported 00:14:55.900 Normal NVM Subsystem Shutdown event: Not Supported 00:14:55.900 Zone Descriptor Change Notices: Not Supported 00:14:55.900 Discovery Log Change Notices: Not Supported 00:14:55.900 Controller Attributes 00:14:55.900 128-bit Host Identifier: Supported 00:14:55.900 Non-Operational Permissive Mode: Not Supported 00:14:55.900 NVM Sets: Not Supported 00:14:55.900 Read Recovery Levels: Not Supported 00:14:55.900 Endurance Groups: Not Supported 00:14:55.900 Predictable Latency Mode: Not Supported 00:14:55.900 Traffic Based Keep ALive: Not Supported 00:14:55.900 Namespace Granularity: Not Supported 00:14:55.900 SQ Associations: Not Supported 00:14:55.900 UUID List: Not Supported 00:14:55.900 Multi-Domain Subsystem: Not Supported 00:14:55.900 Fixed Capacity Management: Not Supported 00:14:55.900 Variable Capacity Management: Not Supported 00:14:55.900 Delete Endurance Group: Not Supported 00:14:55.900 Delete NVM Set: Not Supported 00:14:55.900 Extended LBA Formats Supported: Not Supported 00:14:55.901 Flexible Data Placement Supported: Not Supported 00:14:55.901 00:14:55.901 Controller Memory Buffer Support 00:14:55.901 ================================ 00:14:55.901 Supported: No 00:14:55.901 00:14:55.901 Persistent Memory Region Support 00:14:55.901 
================================ 00:14:55.901 Supported: No 00:14:55.901 00:14:55.901 Admin Command Set Attributes 00:14:55.901 ============================ 00:14:55.901 Security Send/Receive: Not Supported 00:14:55.901 Format NVM: Not Supported 00:14:55.901 Firmware Activate/Download: Not Supported 00:14:55.901 Namespace Management: Not Supported 00:14:55.901 Device Self-Test: Not Supported 00:14:55.901 Directives: Not Supported 00:14:55.901 NVMe-MI: Not Supported 00:14:55.901 Virtualization Management: Not Supported 00:14:55.901 Doorbell Buffer Config: Not Supported 00:14:55.901 Get LBA Status Capability: Not Supported 00:14:55.901 Command & Feature Lockdown Capability: Not Supported 00:14:55.901 Abort Command Limit: 4 00:14:55.901 Async Event Request Limit: 4 00:14:55.901 Number of Firmware Slots: N/A 00:14:55.901 Firmware Slot 1 Read-Only: N/A 00:14:55.901 Firmware Activation Without Reset: N/A 00:14:55.901 Multiple Update Detection Support: N/A 00:14:55.901 Firmware Update Granularity: No Information Provided 00:14:55.901 Per-Namespace SMART Log: No 00:14:55.901 Asymmetric Namespace Access Log Page: Not Supported 00:14:55.901 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:55.901 Command Effects Log Page: Supported 00:14:55.901 Get Log Page Extended Data: Supported 00:14:55.901 Telemetry Log Pages: Not Supported 00:14:55.901 Persistent Event Log Pages: Not Supported 00:14:55.901 Supported Log Pages Log Page: May Support 00:14:55.901 Commands Supported & Effects Log Page: Not Supported 00:14:55.901 Feature Identifiers & Effects Log Page:May Support 00:14:55.901 NVMe-MI Commands & Effects Log Page: May Support 00:14:55.901 Data Area 4 for Telemetry Log: Not Supported 00:14:55.901 Error Log Page Entries Supported: 128 00:14:55.901 Keep Alive: Supported 00:14:55.901 Keep Alive Granularity: 10000 ms 00:14:55.901 00:14:55.901 NVM Command Set Attributes 00:14:55.901 ========================== 00:14:55.901 Submission Queue Entry Size 00:14:55.901 Max: 64 00:14:55.901 Min: 64 00:14:55.901 Completion Queue Entry Size 00:14:55.901 Max: 16 00:14:55.901 Min: 16 00:14:55.901 Number of Namespaces: 32 00:14:55.901 Compare Command: Supported 00:14:55.901 Write Uncorrectable Command: Not Supported 00:14:55.901 Dataset Management Command: Supported 00:14:55.901 Write Zeroes Command: Supported 00:14:55.901 Set Features Save Field: Not Supported 00:14:55.901 Reservations: Not Supported 00:14:55.901 Timestamp: Not Supported 00:14:55.901 Copy: Supported 00:14:55.901 Volatile Write Cache: Present 00:14:55.901 Atomic Write Unit (Normal): 1 00:14:55.901 Atomic Write Unit (PFail): 1 00:14:55.901 Atomic Compare & Write Unit: 1 00:14:55.901 Fused Compare & Write: Supported 00:14:55.901 Scatter-Gather List 00:14:55.901 SGL Command Set: Supported (Dword aligned) 00:14:55.901 SGL Keyed: Not Supported 00:14:55.901 SGL Bit Bucket Descriptor: Not Supported 00:14:55.901 SGL Metadata Pointer: Not Supported 00:14:55.901 Oversized SGL: Not Supported 00:14:55.901 SGL Metadata Address: Not Supported 00:14:55.901 SGL Offset: Not Supported 00:14:55.901 Transport SGL Data Block: Not Supported 00:14:55.901 Replay Protected Memory Block: Not Supported 00:14:55.901 00:14:55.901 Firmware Slot Information 00:14:55.901 ========================= 00:14:55.901 Active slot: 1 00:14:55.901 Slot 1 Firmware Revision: 25.01 00:14:55.901 00:14:55.901 00:14:55.901 Commands Supported and Effects 00:14:55.901 ============================== 00:14:55.901 Admin Commands 00:14:55.901 -------------- 00:14:55.901 Get Log Page (02h): Supported 
00:14:55.901 Identify (06h): Supported 00:14:55.901 Abort (08h): Supported 00:14:55.901 Set Features (09h): Supported 00:14:55.901 Get Features (0Ah): Supported 00:14:55.901 Asynchronous Event Request (0Ch): Supported 00:14:55.901 Keep Alive (18h): Supported 00:14:55.901 I/O Commands 00:14:55.901 ------------ 00:14:55.901 Flush (00h): Supported LBA-Change 00:14:55.901 Write (01h): Supported LBA-Change 00:14:55.901 Read (02h): Supported 00:14:55.901 Compare (05h): Supported 00:14:55.901 Write Zeroes (08h): Supported LBA-Change 00:14:55.901 Dataset Management (09h): Supported LBA-Change 00:14:55.901 Copy (19h): Supported LBA-Change 00:14:55.901 00:14:55.901 Error Log 00:14:55.901 ========= 00:14:55.901 00:14:55.901 Arbitration 00:14:55.901 =========== 00:14:55.901 Arbitration Burst: 1 00:14:55.901 00:14:55.901 Power Management 00:14:55.901 ================ 00:14:55.901 Number of Power States: 1 00:14:55.901 Current Power State: Power State #0 00:14:55.901 Power State #0: 00:14:55.901 Max Power: 0.00 W 00:14:55.901 Non-Operational State: Operational 00:14:55.901 Entry Latency: Not Reported 00:14:55.901 Exit Latency: Not Reported 00:14:55.901 Relative Read Throughput: 0 00:14:55.901 Relative Read Latency: 0 00:14:55.901 Relative Write Throughput: 0 00:14:55.901 Relative Write Latency: 0 00:14:55.901 Idle Power: Not Reported 00:14:55.901 Active Power: Not Reported 00:14:55.901 Non-Operational Permissive Mode: Not Supported 00:14:55.901 00:14:55.901 Health Information 00:14:55.901 ================== 00:14:55.901 Critical Warnings: 00:14:55.901 Available Spare Space: OK 00:14:55.901 Temperature: OK 00:14:55.901 Device Reliability: OK 00:14:55.901 Read Only: No 00:14:55.901 Volatile Memory Backup: OK 00:14:55.901 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:55.901 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:55.901 Available Spare: 0% 00:14:55.901 Available Sp[2024-10-08 18:25:24.215777] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:55.901 [2024-10-08 18:25:24.215795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:55.901 [2024-10-08 18:25:24.215839] nvme_ctrlr.c:4417:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:14:55.901 [2024-10-08 18:25:24.215856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.901 [2024-10-08 18:25:24.215867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.901 [2024-10-08 18:25:24.215877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.901 [2024-10-08 18:25:24.215887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.901 [2024-10-08 18:25:24.218663] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:55.901 [2024-10-08 18:25:24.218690] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:55.901 [2024-10-08 18:25:24.219291] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling 
controller 00:14:55.901 [2024-10-08 18:25:24.219380] nvme_ctrlr.c:1167:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:14:55.901 [2024-10-08 18:25:24.219393] nvme_ctrlr.c:1170:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:14:55.901 [2024-10-08 18:25:24.220302] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:55.901 [2024-10-08 18:25:24.220325] nvme_ctrlr.c:1289:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:14:55.901 [2024-10-08 18:25:24.220382] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:55.901 [2024-10-08 18:25:24.222345] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:55.901 are Threshold: 0% 00:14:55.901 Life Percentage Used: 0% 00:14:55.901 Data Units Read: 0 00:14:55.901 Data Units Written: 0 00:14:55.901 Host Read Commands: 0 00:14:55.901 Host Write Commands: 0 00:14:55.901 Controller Busy Time: 0 minutes 00:14:55.901 Power Cycles: 0 00:14:55.901 Power On Hours: 0 hours 00:14:55.901 Unsafe Shutdowns: 0 00:14:55.901 Unrecoverable Media Errors: 0 00:14:55.901 Lifetime Error Log Entries: 0 00:14:55.901 Warning Temperature Time: 0 minutes 00:14:55.901 Critical Temperature Time: 0 minutes 00:14:55.901 00:14:55.901 Number of Queues 00:14:55.901 ================ 00:14:55.901 Number of I/O Submission Queues: 127 00:14:55.901 Number of I/O Completion Queues: 127 00:14:55.901 00:14:55.901 Active Namespaces 00:14:55.901 ================= 00:14:55.901 Namespace ID:1 00:14:55.901 Error Recovery Timeout: Unlimited 00:14:55.901 Command Set Identifier: NVM (00h) 00:14:55.901 Deallocate: Supported 00:14:55.901 Deallocated/Unwritten Error: Not Supported 00:14:55.901 Deallocated Read Value: Unknown 00:14:55.901 Deallocate in Write Zeroes: Not Supported 00:14:55.901 Deallocated Guard Field: 0xFFFF 00:14:55.901 Flush: Supported 00:14:55.901 Reservation: Supported 00:14:55.901 Namespace Sharing Capabilities: Multiple Controllers 00:14:55.901 Size (in LBAs): 131072 (0GiB) 00:14:55.901 Capacity (in LBAs): 131072 (0GiB) 00:14:55.901 Utilization (in LBAs): 131072 (0GiB) 00:14:55.901 NGUID: 165DD091A25941EF9FCEF979697DA550 00:14:55.901 UUID: 165dd091-a259-41ef-9fce-f979697da550 00:14:55.902 Thin Provisioning: Not Supported 00:14:55.902 Per-NS Atomic Units: Yes 00:14:55.902 Atomic Boundary Size (Normal): 0 00:14:55.902 Atomic Boundary Size (PFail): 0 00:14:55.902 Atomic Boundary Offset: 0 00:14:55.902 Maximum Single Source Range Length: 65535 00:14:55.902 Maximum Copy Length: 65535 00:14:55.902 Maximum Source Range Count: 1 00:14:55.902 NGUID/EUI64 Never Reused: No 00:14:55.902 Namespace Write Protected: No 00:14:55.902 Number of LBA Formats: 1 00:14:55.902 Current LBA Format: LBA Format #00 00:14:55.902 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:55.902 00:14:55.902 18:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:56.159 [2024-10-08 18:25:24.483239] vfio_user.c:2836:enable_ctrlr: *NOTICE*: 
/var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:01.430 Initializing NVMe Controllers 00:15:01.430 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:01.430 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:01.430 Initialization complete. Launching workers. 00:15:01.430 ======================================================== 00:15:01.430 Latency(us) 00:15:01.430 Device Information : IOPS MiB/s Average min max 00:15:01.430 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 31678.65 123.74 4039.60 1207.20 8253.86 00:15:01.430 ======================================================== 00:15:01.430 Total : 31678.65 123.74 4039.60 1207.20 8253.86 00:15:01.430 00:15:01.430 [2024-10-08 18:25:29.504692] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:01.430 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:01.430 [2024-10-08 18:25:29.805068] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:06.704 Initializing NVMe Controllers 00:15:06.704 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:06.704 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:06.704 Initialization complete. Launching workers. 00:15:06.704 ======================================================== 00:15:06.704 Latency(us) 00:15:06.704 Device Information : IOPS MiB/s Average min max 00:15:06.704 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16025.58 62.60 7995.58 4977.26 14946.44 00:15:06.704 ======================================================== 00:15:06.704 Total : 16025.58 62.60 7995.58 4977.26 14946.44 00:15:06.704 00:15:06.704 [2024-10-08 18:25:34.841989] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:06.704 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:06.704 [2024-10-08 18:25:35.057066] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:11.977 [2024-10-08 18:25:40.126970] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:11.977 Initializing NVMe Controllers 00:15:11.977 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:11.977 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:11.977 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:11.977 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:11.977 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:11.977 Initialization complete. Launching workers. 
00:15:11.977 Starting thread on core 2 00:15:11.977 Starting thread on core 3 00:15:11.977 Starting thread on core 1 00:15:11.977 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:11.977 [2024-10-08 18:25:40.461181] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:15.265 [2024-10-08 18:25:43.523615] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:15.265 Initializing NVMe Controllers 00:15:15.265 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:15.265 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:15.265 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:15.265 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:15.265 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:15.265 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:15.265 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:15.265 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:15.265 Initialization complete. Launching workers. 00:15:15.265 Starting thread on core 1 with urgent priority queue 00:15:15.265 Starting thread on core 2 with urgent priority queue 00:15:15.265 Starting thread on core 3 with urgent priority queue 00:15:15.265 Starting thread on core 0 with urgent priority queue 00:15:15.265 SPDK bdev Controller (SPDK1 ) core 0: 5290.00 IO/s 18.90 secs/100000 ios 00:15:15.265 SPDK bdev Controller (SPDK1 ) core 1: 5241.67 IO/s 19.08 secs/100000 ios 00:15:15.265 SPDK bdev Controller (SPDK1 ) core 2: 5663.67 IO/s 17.66 secs/100000 ios 00:15:15.265 SPDK bdev Controller (SPDK1 ) core 3: 5545.67 IO/s 18.03 secs/100000 ios 00:15:15.265 ======================================================== 00:15:15.265 00:15:15.265 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:15.523 [2024-10-08 18:25:43.841025] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:15.523 Initializing NVMe Controllers 00:15:15.523 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:15.523 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:15.523 Namespace ID: 1 size: 0GB 00:15:15.523 Initialization complete. 00:15:15.523 INFO: using host memory buffer for IO 00:15:15.523 Hello world! 
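Editor's note: every example binary in this block (spdk_nvme_identify, spdk_nvme_perf, reconnect, arbitration, hello_world) is pointed at the same vfio-user controller through one transport-ID string. A condensed sketch of that shared pattern, with commands and flags copied from the trace and the longer per-tool workload arguments trimmed:

TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK/build/bin/spdk_nvme_identify -r "$TRID" -g                                         # controller/namespace report above
$SPDK/build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2   # 4 KiB read latency table above
$SPDK/build/examples/hello_world -r "$TRID" -d 256 -g                                    # the "Hello world!" round trip above

Only the core masks (-c) and workload knobs (-q, -o, -w, -t) differ between the perf, reconnect and arbitration runs; the transport ID stays constant across the whole block.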
00:15:15.523 [2024-10-08 18:25:43.874615] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:15.523 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:15.783 [2024-10-08 18:25:44.168572] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:16.720 Initializing NVMe Controllers 00:15:16.721 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:16.721 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:16.721 Initialization complete. Launching workers. 00:15:16.721 submit (in ns) avg, min, max = 6267.9, 3515.6, 4015687.8 00:15:16.721 complete (in ns) avg, min, max = 29407.1, 2068.9, 4015718.9 00:15:16.721 00:15:16.721 Submit histogram 00:15:16.721 ================ 00:15:16.721 Range in us Cumulative Count 00:15:16.721 3.508 - 3.532: 0.1989% ( 26) 00:15:16.721 3.532 - 3.556: 0.5890% ( 51) 00:15:16.721 3.556 - 3.579: 3.1594% ( 336) 00:15:16.721 3.579 - 3.603: 7.1374% ( 520) 00:15:16.721 3.603 - 3.627: 13.9229% ( 887) 00:15:16.721 3.627 - 3.650: 22.3378% ( 1100) 00:15:16.721 3.650 - 3.674: 31.1047% ( 1146) 00:15:16.721 3.674 - 3.698: 40.0168% ( 1165) 00:15:16.721 3.698 - 3.721: 47.3149% ( 954) 00:15:16.721 3.721 - 3.745: 52.6775% ( 701) 00:15:16.721 3.745 - 3.769: 57.0456% ( 571) 00:15:16.721 3.769 - 3.793: 61.3296% ( 560) 00:15:16.721 3.793 - 3.816: 64.8944% ( 466) 00:15:16.721 3.816 - 3.840: 68.5205% ( 474) 00:15:16.721 3.840 - 3.864: 72.3378% ( 499) 00:15:16.721 3.864 - 3.887: 76.2775% ( 515) 00:15:16.721 3.887 - 3.911: 80.5233% ( 555) 00:15:16.721 3.911 - 3.935: 83.7668% ( 424) 00:15:16.721 3.935 - 3.959: 86.3372% ( 336) 00:15:16.721 3.959 - 3.982: 88.0737% ( 227) 00:15:16.721 3.982 - 4.006: 89.7720% ( 222) 00:15:16.721 4.006 - 4.030: 91.2179% ( 189) 00:15:16.721 4.030 - 4.053: 92.4036% ( 155) 00:15:16.721 4.053 - 4.077: 93.5129% ( 145) 00:15:16.721 4.077 - 4.101: 94.2472% ( 96) 00:15:16.721 4.101 - 4.124: 94.9357% ( 90) 00:15:16.721 4.124 - 4.148: 95.4483% ( 67) 00:15:16.721 4.148 - 4.172: 95.8996% ( 59) 00:15:16.721 4.172 - 4.196: 96.2209% ( 42) 00:15:16.721 4.196 - 4.219: 96.3739% ( 20) 00:15:16.721 4.219 - 4.243: 96.4887% ( 15) 00:15:16.721 4.243 - 4.267: 96.6264% ( 18) 00:15:16.721 4.267 - 4.290: 96.7794% ( 20) 00:15:16.721 4.290 - 4.314: 96.8788% ( 13) 00:15:16.721 4.314 - 4.338: 96.9477% ( 9) 00:15:16.721 4.338 - 4.361: 97.0318% ( 11) 00:15:16.721 4.361 - 4.385: 97.0930% ( 8) 00:15:16.721 4.385 - 4.409: 97.1313% ( 5) 00:15:16.721 4.409 - 4.433: 97.1695% ( 5) 00:15:16.721 4.433 - 4.456: 97.1925% ( 3) 00:15:16.721 4.456 - 4.480: 97.2231% ( 4) 00:15:16.721 4.480 - 4.504: 97.2307% ( 1) 00:15:16.721 4.504 - 4.527: 97.2460% ( 2) 00:15:16.721 4.527 - 4.551: 97.2613% ( 2) 00:15:16.721 4.551 - 4.575: 97.2766% ( 2) 00:15:16.721 4.575 - 4.599: 97.2919% ( 2) 00:15:16.721 4.599 - 4.622: 97.3149% ( 3) 00:15:16.721 4.622 - 4.646: 97.3455% ( 4) 00:15:16.721 4.646 - 4.670: 97.3837% ( 5) 00:15:16.721 4.670 - 4.693: 97.4296% ( 6) 00:15:16.721 4.693 - 4.717: 97.4449% ( 2) 00:15:16.721 4.717 - 4.741: 97.4755% ( 4) 00:15:16.721 4.741 - 4.764: 97.5138% ( 5) 00:15:16.721 4.764 - 4.788: 97.5979% ( 11) 00:15:16.721 4.788 - 4.812: 97.6438% ( 6) 00:15:16.721 4.812 - 4.836: 97.6821% ( 5) 00:15:16.721 4.836 - 4.859: 97.6974% ( 2) 00:15:16.721 4.859 - 
4.883: 97.7203% ( 3) 00:15:16.721 4.883 - 4.907: 97.7892% ( 9) 00:15:16.721 4.907 - 4.930: 97.8121% ( 3) 00:15:16.721 4.930 - 4.954: 97.8351% ( 3) 00:15:16.721 4.954 - 4.978: 97.8580% ( 3) 00:15:16.721 4.978 - 5.001: 97.8733% ( 2) 00:15:16.721 5.001 - 5.025: 97.9116% ( 5) 00:15:16.721 5.025 - 5.049: 97.9269% ( 2) 00:15:16.721 5.049 - 5.073: 97.9651% ( 5) 00:15:16.721 5.073 - 5.096: 97.9881% ( 3) 00:15:16.721 5.096 - 5.120: 98.0110% ( 3) 00:15:16.721 5.120 - 5.144: 98.0340% ( 3) 00:15:16.721 5.144 - 5.167: 98.0493% ( 2) 00:15:16.721 5.167 - 5.191: 98.0646% ( 2) 00:15:16.721 5.191 - 5.215: 98.0799% ( 2) 00:15:16.721 5.215 - 5.239: 98.0875% ( 1) 00:15:16.721 5.239 - 5.262: 98.0952% ( 1) 00:15:16.721 5.262 - 5.286: 98.1181% ( 3) 00:15:16.721 5.333 - 5.357: 98.1258% ( 1) 00:15:16.721 5.357 - 5.381: 98.1334% ( 1) 00:15:16.721 5.381 - 5.404: 98.1487% ( 2) 00:15:16.721 5.452 - 5.476: 98.1564% ( 1) 00:15:16.721 5.476 - 5.499: 98.1640% ( 1) 00:15:16.721 5.523 - 5.547: 98.1717% ( 1) 00:15:16.721 5.547 - 5.570: 98.1793% ( 1) 00:15:16.721 5.855 - 5.879: 98.1870% ( 1) 00:15:16.721 5.879 - 5.902: 98.1946% ( 1) 00:15:16.721 6.044 - 6.068: 98.2023% ( 1) 00:15:16.721 6.163 - 6.210: 98.2099% ( 1) 00:15:16.721 6.305 - 6.353: 98.2176% ( 1) 00:15:16.721 6.400 - 6.447: 98.2252% ( 1) 00:15:16.721 6.447 - 6.495: 98.2329% ( 1) 00:15:16.721 6.495 - 6.542: 98.2405% ( 1) 00:15:16.721 6.542 - 6.590: 98.2482% ( 1) 00:15:16.721 6.590 - 6.637: 98.2558% ( 1) 00:15:16.721 6.732 - 6.779: 98.2711% ( 2) 00:15:16.721 6.779 - 6.827: 98.2788% ( 1) 00:15:16.721 6.827 - 6.874: 98.2864% ( 1) 00:15:16.721 6.921 - 6.969: 98.3017% ( 2) 00:15:16.721 6.969 - 7.016: 98.3094% ( 1) 00:15:16.721 7.016 - 7.064: 98.3247% ( 2) 00:15:16.721 7.111 - 7.159: 98.3323% ( 1) 00:15:16.721 7.159 - 7.206: 98.3553% ( 3) 00:15:16.721 7.253 - 7.301: 98.3629% ( 1) 00:15:16.721 7.301 - 7.348: 98.3706% ( 1) 00:15:16.721 7.396 - 7.443: 98.3859% ( 2) 00:15:16.721 7.490 - 7.538: 98.4088% ( 3) 00:15:16.721 7.538 - 7.585: 98.4165% ( 1) 00:15:16.721 7.585 - 7.633: 98.4318% ( 2) 00:15:16.721 7.633 - 7.680: 98.4394% ( 1) 00:15:16.721 7.680 - 7.727: 98.4471% ( 1) 00:15:16.721 7.727 - 7.775: 98.4547% ( 1) 00:15:16.721 7.775 - 7.822: 98.4624% ( 1) 00:15:16.721 7.822 - 7.870: 98.4700% ( 1) 00:15:16.721 7.870 - 7.917: 98.4777% ( 1) 00:15:16.721 7.917 - 7.964: 98.4853% ( 1) 00:15:16.721 7.964 - 8.012: 98.4930% ( 1) 00:15:16.721 8.012 - 8.059: 98.5083% ( 2) 00:15:16.721 8.154 - 8.201: 98.5159% ( 1) 00:15:16.721 8.201 - 8.249: 98.5236% ( 1) 00:15:16.721 8.249 - 8.296: 98.5312% ( 1) 00:15:16.721 8.296 - 8.344: 98.5465% ( 2) 00:15:16.721 8.439 - 8.486: 98.5695% ( 3) 00:15:16.721 8.581 - 8.628: 98.5848% ( 2) 00:15:16.721 8.628 - 8.676: 98.6001% ( 2) 00:15:16.721 8.676 - 8.723: 98.6077% ( 1) 00:15:16.721 8.770 - 8.818: 98.6307% ( 3) 00:15:16.721 8.865 - 8.913: 98.6460% ( 2) 00:15:16.721 8.913 - 8.960: 98.6536% ( 1) 00:15:16.721 8.960 - 9.007: 98.6613% ( 1) 00:15:16.721 9.055 - 9.102: 98.6689% ( 1) 00:15:16.721 9.339 - 9.387: 98.6919% ( 3) 00:15:16.721 9.387 - 9.434: 98.6995% ( 1) 00:15:16.721 9.434 - 9.481: 98.7225% ( 3) 00:15:16.721 9.481 - 9.529: 98.7301% ( 1) 00:15:16.721 9.529 - 9.576: 98.7378% ( 1) 00:15:16.721 9.671 - 9.719: 98.7454% ( 1) 00:15:16.721 9.719 - 9.766: 98.7531% ( 1) 00:15:16.721 9.766 - 9.813: 98.7607% ( 1) 00:15:16.721 10.050 - 10.098: 98.7684% ( 1) 00:15:16.721 10.098 - 10.145: 98.7760% ( 1) 00:15:16.721 10.145 - 10.193: 98.7837% ( 1) 00:15:16.721 10.193 - 10.240: 98.7913% ( 1) 00:15:16.721 10.382 - 10.430: 98.7990% ( 1) 00:15:16.721 10.524 - 10.572: 
98.8066% ( 1) 00:15:16.721 10.572 - 10.619: 98.8219% ( 2) 00:15:16.721 10.619 - 10.667: 98.8296% ( 1) 00:15:16.721 10.809 - 10.856: 98.8372% ( 1) 00:15:16.721 10.856 - 10.904: 98.8449% ( 1) 00:15:16.721 10.904 - 10.951: 98.8525% ( 1) 00:15:16.721 11.093 - 11.141: 98.8602% ( 1) 00:15:16.721 11.188 - 11.236: 98.8678% ( 1) 00:15:16.721 11.330 - 11.378: 98.8755% ( 1) 00:15:16.721 11.378 - 11.425: 98.8831% ( 1) 00:15:16.721 11.520 - 11.567: 98.8908% ( 1) 00:15:16.721 11.947 - 11.994: 98.8984% ( 1) 00:15:16.721 12.089 - 12.136: 98.9061% ( 1) 00:15:16.721 12.326 - 12.421: 98.9137% ( 1) 00:15:16.721 12.421 - 12.516: 98.9214% ( 1) 00:15:16.721 12.516 - 12.610: 98.9367% ( 2) 00:15:16.721 12.610 - 12.705: 98.9520% ( 2) 00:15:16.721 12.895 - 12.990: 98.9596% ( 1) 00:15:16.721 13.274 - 13.369: 98.9826% ( 3) 00:15:16.721 13.369 - 13.464: 98.9979% ( 2) 00:15:16.721 13.559 - 13.653: 99.0055% ( 1) 00:15:16.721 13.653 - 13.748: 99.0132% ( 1) 00:15:16.721 13.843 - 13.938: 99.0208% ( 1) 00:15:16.721 13.938 - 14.033: 99.0285% ( 1) 00:15:16.721 14.033 - 14.127: 99.0438% ( 2) 00:15:16.721 14.127 - 14.222: 99.0514% ( 1) 00:15:16.721 14.222 - 14.317: 99.0591% ( 1) 00:15:16.721 14.507 - 14.601: 99.0667% ( 1) 00:15:16.721 14.601 - 14.696: 99.0744% ( 1) 00:15:16.721 14.696 - 14.791: 99.1050% ( 4) 00:15:16.721 15.076 - 15.170: 99.1126% ( 1) 00:15:16.721 16.782 - 16.877: 99.1203% ( 1) 00:15:16.721 17.161 - 17.256: 99.1356% ( 2) 00:15:16.721 17.256 - 17.351: 99.1432% ( 1) 00:15:16.721 17.446 - 17.541: 99.1662% ( 3) 00:15:16.721 17.541 - 17.636: 99.1815% ( 2) 00:15:16.721 17.636 - 17.730: 99.2350% ( 7) 00:15:16.721 17.730 - 17.825: 99.3039% ( 9) 00:15:16.722 17.825 - 17.920: 99.3345% ( 4) 00:15:16.722 17.920 - 18.015: 99.3727% ( 5) 00:15:16.722 18.015 - 18.110: 99.4033% ( 4) 00:15:16.722 18.110 - 18.204: 99.4798% ( 10) 00:15:16.722 18.204 - 18.299: 99.5410% ( 8) 00:15:16.722 18.299 - 18.394: 99.5640% ( 3) 00:15:16.722 18.394 - 18.489: 99.6252% ( 8) 00:15:16.722 18.489 - 18.584: 99.6864% ( 8) 00:15:16.722 18.584 - 18.679: 99.7093% ( 3) 00:15:16.722 18.679 - 18.773: 99.7552% ( 6) 00:15:16.722 18.773 - 18.868: 99.7782% ( 3) 00:15:16.722 18.868 - 18.963: 99.8011% ( 3) 00:15:16.722 18.963 - 19.058: 99.8088% ( 1) 00:15:16.722 19.058 - 19.153: 99.8241% ( 2) 00:15:16.722 19.153 - 19.247: 99.8317% ( 1) 00:15:16.722 19.247 - 19.342: 99.8394% ( 1) 00:15:16.722 19.342 - 19.437: 99.8623% ( 3) 00:15:16.722 19.437 - 19.532: 99.8700% ( 1) 00:15:16.722 19.816 - 19.911: 99.8776% ( 1) 00:15:16.722 19.911 - 20.006: 99.8853% ( 1) 00:15:16.722 21.144 - 21.239: 99.8929% ( 1) 00:15:16.722 21.618 - 21.713: 99.9006% ( 1) 00:15:16.722 22.281 - 22.376: 99.9082% ( 1) 00:15:16.722 23.893 - 23.988: 99.9159% ( 1) 00:15:16.722 24.273 - 24.462: 99.9235% ( 1) 00:15:16.722 25.031 - 25.221: 99.9312% ( 1) 00:15:16.722 28.065 - 28.255: 99.9388% ( 1) 00:15:16.722 2196.670 - 2208.806: 99.9465% ( 1) 00:15:16.722 3980.705 - 4004.978: 99.9847% ( 5) 00:15:16.722 4004.978 - 4029.250: 100.0000% ( 2) 00:15:16.722 00:15:16.722 Complete histogram 00:15:16.722 ================== 00:15:16.722 Range in us Cumulative Count 00:15:16.722 2.062 - 2.074: 0.3442% ( 45) 00:15:16.722 2.074 - 2.086: 9.6236% ( 1213) 00:15:16.722 2.086 - 2.098: 22.2231% ( 1647) 00:15:16.722 2.098 - 2.110: 31.3724% ( 1196) 00:15:16.722 2.110 - 2.121: 40.5064% ( 1194) 00:15:16.722 2.121 - 2.133: 52.3485% ( 1548) 00:15:16.722 2.133 - 2.145: 57.5122% ( 675) 00:15:16.722 2.145 - 2.157: 63.3874% ( 768) 00:15:16.722 2.157 - 2.169: 69.4844% ( 797) 00:15:16.722 2.169 - 2.181: 72.8045% ( 434) 00:15:16.722 
2.181 - 2.193: 76.5606% ( 491) 00:15:16.722 2.193 - 2.204: 81.8084% ( 686) 00:15:16.722 2.204 - 2.216: 83.6291% ( 238) 00:15:16.722 2.216 - 2.228: 85.3351% ( 223) 00:15:16.722 2.228 - 2.240: 87.4618% ( 278) 00:15:16.722 2.240 - 2.252: 89.4737% ( 263) 00:15:16.722 2.252 - 2.264: 91.0955% ( 212) 00:15:16.722 2.264 - 2.276: 92.8091% ( 224) 00:15:16.722 2.276 - 2.287: 93.6965% ( 116) 00:15:16.722 2.287 - 2.299: 94.1554% ( 60) 00:15:16.722 2.299 - 2.311: 94.4232% ( 35) 00:15:16.722 2.311 - 2.323: 94.8745% ( 59) 00:15:16.722 2.323 - 2.335: 95.2341% ( 47) 00:15:16.722 2.335 - 2.347: 95.3641% ( 17) 00:15:16.722 2.347 - 2.359: 95.4483% ( 11) 00:15:16.722 2.359 - 2.370: 95.4865% ( 5) 00:15:16.722 2.370 - 2.382: 95.5860% ( 13) 00:15:16.722 2.382 - 2.394: 95.7619% ( 23) 00:15:16.722 2.394 - 2.406: 96.1750% ( 54) 00:15:16.722 2.406 - 2.418: 96.6876% ( 67) 00:15:16.722 2.418 - 2.430: 97.0854% ( 52) 00:15:16.722 2.430 - 2.441: 97.3608% ( 36) 00:15:16.722 2.441 - 2.453: 97.5367% ( 23) 00:15:16.722 2.453 - 2.465: 97.7586% ( 29) 00:15:16.722 2.465 - 2.477: 97.9422% ( 24) 00:15:16.722 2.477 - 2.489: 98.0569% ( 15) 00:15:16.722 2.489 - 2.501: 98.1334% ( 10) 00:15:16.722 2.501 - 2.513: 98.2099% ( 10) 00:15:16.722 2.513 - 2.524: 98.2329% ( 3) 00:15:16.722 2.524 - 2.536: 98.2788% ( 6) 00:15:16.722 2.536 - 2.548: 98.3017% ( 3) 00:15:16.722 2.548 - 2.560: 98.3094% ( 1) 00:15:16.722 2.560 - 2.572: 9[2024-10-08 18:25:45.194784] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:16.722 8.3323% ( 3) 00:15:16.722 2.572 - 2.584: 98.3629% ( 4) 00:15:16.722 2.584 - 2.596: 98.3782% ( 2) 00:15:16.722 2.619 - 2.631: 98.3935% ( 2) 00:15:16.722 2.643 - 2.655: 98.4012% ( 1) 00:15:16.722 2.667 - 2.679: 98.4088% ( 1) 00:15:16.722 2.702 - 2.714: 98.4241% ( 2) 00:15:16.722 2.714 - 2.726: 98.4394% ( 2) 00:15:16.722 2.726 - 2.738: 98.4471% ( 1) 00:15:16.722 2.856 - 2.868: 98.4624% ( 2) 00:15:16.722 2.880 - 2.892: 98.4700% ( 1) 00:15:16.722 2.904 - 2.916: 98.4853% ( 2) 00:15:16.722 2.927 - 2.939: 98.4930% ( 1) 00:15:16.722 2.975 - 2.987: 98.5006% ( 1) 00:15:16.722 3.271 - 3.295: 98.5083% ( 1) 00:15:16.722 3.319 - 3.342: 98.5159% ( 1) 00:15:16.722 3.342 - 3.366: 98.5236% ( 1) 00:15:16.722 3.366 - 3.390: 98.5312% ( 1) 00:15:16.722 3.390 - 3.413: 98.5389% ( 1) 00:15:16.722 3.413 - 3.437: 98.5465% ( 1) 00:15:16.722 3.437 - 3.461: 98.5618% ( 2) 00:15:16.722 3.461 - 3.484: 98.5695% ( 1) 00:15:16.722 3.508 - 3.532: 98.5771% ( 1) 00:15:16.722 3.556 - 3.579: 98.5848% ( 1) 00:15:16.722 3.603 - 3.627: 98.6001% ( 2) 00:15:16.722 3.674 - 3.698: 98.6077% ( 1) 00:15:16.722 3.721 - 3.745: 98.6154% ( 1) 00:15:16.722 3.745 - 3.769: 98.6230% ( 1) 00:15:16.722 3.816 - 3.840: 98.6307% ( 1) 00:15:16.722 3.911 - 3.935: 98.6460% ( 2) 00:15:16.722 4.243 - 4.267: 98.6536% ( 1) 00:15:16.722 4.930 - 4.954: 98.6613% ( 1) 00:15:16.722 5.167 - 5.191: 98.6689% ( 1) 00:15:16.722 5.239 - 5.262: 98.6842% ( 2) 00:15:16.722 5.310 - 5.333: 98.6995% ( 2) 00:15:16.722 5.428 - 5.452: 98.7148% ( 2) 00:15:16.722 5.476 - 5.499: 98.7225% ( 1) 00:15:16.722 5.902 - 5.926: 98.7378% ( 2) 00:15:16.722 6.353 - 6.400: 98.7454% ( 1) 00:15:16.722 6.447 - 6.495: 98.7531% ( 1) 00:15:16.722 6.542 - 6.590: 98.7607% ( 1) 00:15:16.722 6.732 - 6.779: 98.7684% ( 1) 00:15:16.722 6.779 - 6.827: 98.7760% ( 1) 00:15:16.722 6.827 - 6.874: 98.7837% ( 1) 00:15:16.722 7.159 - 7.206: 98.7913% ( 1) 00:15:16.722 7.633 - 7.680: 98.7990% ( 1) 00:15:16.722 8.344 - 8.391: 98.8143% ( 2) 00:15:16.722 8.628 - 8.676: 98.8219% ( 1) 00:15:16.722 9.102 - 
9.150: 98.8296% ( 1) 00:15:16.722 9.434 - 9.481: 98.8372% ( 1) 00:15:16.722 15.455 - 15.550: 98.8525% ( 2) 00:15:16.722 15.739 - 15.834: 98.8602% ( 1) 00:15:16.722 15.929 - 16.024: 98.8984% ( 5) 00:15:16.722 16.024 - 16.119: 98.9290% ( 4) 00:15:16.722 16.119 - 16.213: 98.9596% ( 4) 00:15:16.722 16.213 - 16.308: 98.9902% ( 4) 00:15:16.722 16.308 - 16.403: 98.9979% ( 1) 00:15:16.722 16.403 - 16.498: 99.0285% ( 4) 00:15:16.722 16.498 - 16.593: 99.0820% ( 7) 00:15:16.722 16.593 - 16.687: 99.0897% ( 1) 00:15:16.722 16.687 - 16.782: 99.1203% ( 4) 00:15:16.722 16.782 - 16.877: 99.1585% ( 5) 00:15:16.722 16.877 - 16.972: 99.1738% ( 2) 00:15:16.722 16.972 - 17.067: 99.1891% ( 2) 00:15:16.722 17.067 - 17.161: 99.1968% ( 1) 00:15:16.722 17.161 - 17.256: 99.2121% ( 2) 00:15:16.722 17.351 - 17.446: 99.2197% ( 1) 00:15:16.722 17.446 - 17.541: 99.2427% ( 3) 00:15:16.722 17.636 - 17.730: 99.2580% ( 2) 00:15:16.722 17.730 - 17.825: 99.2656% ( 1) 00:15:16.722 17.825 - 17.920: 99.2733% ( 1) 00:15:16.722 18.015 - 18.110: 99.2809% ( 1) 00:15:16.722 18.110 - 18.204: 99.2886% ( 1) 00:15:16.722 18.489 - 18.584: 99.3039% ( 2) 00:15:16.722 18.584 - 18.679: 99.3115% ( 1) 00:15:16.722 27.117 - 27.307: 99.3192% ( 1) 00:15:16.722 3131.164 - 3155.437: 99.3268% ( 1) 00:15:16.722 3980.705 - 4004.978: 99.8241% ( 65) 00:15:16.722 4004.978 - 4029.250: 100.0000% ( 23) 00:15:16.722 00:15:16.722 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:16.722 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:16.722 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:16.722 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:16.722 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:17.291 [ 00:15:17.291 { 00:15:17.291 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:17.291 "subtype": "Discovery", 00:15:17.291 "listen_addresses": [], 00:15:17.291 "allow_any_host": true, 00:15:17.291 "hosts": [] 00:15:17.291 }, 00:15:17.291 { 00:15:17.291 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:17.291 "subtype": "NVMe", 00:15:17.291 "listen_addresses": [ 00:15:17.291 { 00:15:17.291 "trtype": "VFIOUSER", 00:15:17.291 "adrfam": "IPv4", 00:15:17.291 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:17.291 "trsvcid": "0" 00:15:17.291 } 00:15:17.291 ], 00:15:17.291 "allow_any_host": true, 00:15:17.291 "hosts": [], 00:15:17.291 "serial_number": "SPDK1", 00:15:17.291 "model_number": "SPDK bdev Controller", 00:15:17.291 "max_namespaces": 32, 00:15:17.291 "min_cntlid": 1, 00:15:17.291 "max_cntlid": 65519, 00:15:17.291 "namespaces": [ 00:15:17.291 { 00:15:17.291 "nsid": 1, 00:15:17.291 "bdev_name": "Malloc1", 00:15:17.291 "name": "Malloc1", 00:15:17.291 "nguid": "165DD091A25941EF9FCEF979697DA550", 00:15:17.291 "uuid": "165dd091-a259-41ef-9fce-f979697da550" 00:15:17.291 } 00:15:17.291 ] 00:15:17.291 }, 00:15:17.291 { 00:15:17.291 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:17.291 "subtype": "NVMe", 00:15:17.291 "listen_addresses": [ 00:15:17.291 { 00:15:17.291 "trtype": "VFIOUSER", 00:15:17.291 "adrfam": "IPv4", 00:15:17.291 "traddr": 
"/var/run/vfio-user/domain/vfio-user2/2", 00:15:17.291 "trsvcid": "0" 00:15:17.291 } 00:15:17.291 ], 00:15:17.291 "allow_any_host": true, 00:15:17.291 "hosts": [], 00:15:17.291 "serial_number": "SPDK2", 00:15:17.291 "model_number": "SPDK bdev Controller", 00:15:17.291 "max_namespaces": 32, 00:15:17.291 "min_cntlid": 1, 00:15:17.291 "max_cntlid": 65519, 00:15:17.291 "namespaces": [ 00:15:17.291 { 00:15:17.291 "nsid": 1, 00:15:17.291 "bdev_name": "Malloc2", 00:15:17.291 "name": "Malloc2", 00:15:17.291 "nguid": "AD119DCB988B4AA6A00A8A8822DB3B8D", 00:15:17.291 "uuid": "ad119dcb-988b-4aa6-a00a-8a8822db3b8d" 00:15:17.291 } 00:15:17.291 ] 00:15:17.291 } 00:15:17.291 ] 00:15:17.291 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:17.291 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1175607 00:15:17.291 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:17.291 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:17.291 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:15:17.291 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:17.291 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:17.291 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:15:17.291 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:17.291 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:17.291 [2024-10-08 18:25:45.725191] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:17.858 Malloc3 00:15:17.858 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:18.116 [2024-10-08 18:25:46.445527] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:18.116 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:18.116 Asynchronous Event Request test 00:15:18.116 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:18.116 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:18.116 Registering asynchronous event callbacks... 00:15:18.116 Starting namespace attribute notice tests for all controllers... 00:15:18.116 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:18.116 aer_cb - Changed Namespace 00:15:18.116 Cleaning up... 
00:15:18.374 [ 00:15:18.374 { 00:15:18.374 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:18.374 "subtype": "Discovery", 00:15:18.374 "listen_addresses": [], 00:15:18.374 "allow_any_host": true, 00:15:18.374 "hosts": [] 00:15:18.374 }, 00:15:18.374 { 00:15:18.374 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:18.374 "subtype": "NVMe", 00:15:18.374 "listen_addresses": [ 00:15:18.374 { 00:15:18.374 "trtype": "VFIOUSER", 00:15:18.374 "adrfam": "IPv4", 00:15:18.374 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:18.374 "trsvcid": "0" 00:15:18.374 } 00:15:18.374 ], 00:15:18.374 "allow_any_host": true, 00:15:18.374 "hosts": [], 00:15:18.374 "serial_number": "SPDK1", 00:15:18.374 "model_number": "SPDK bdev Controller", 00:15:18.374 "max_namespaces": 32, 00:15:18.374 "min_cntlid": 1, 00:15:18.374 "max_cntlid": 65519, 00:15:18.374 "namespaces": [ 00:15:18.374 { 00:15:18.374 "nsid": 1, 00:15:18.374 "bdev_name": "Malloc1", 00:15:18.374 "name": "Malloc1", 00:15:18.374 "nguid": "165DD091A25941EF9FCEF979697DA550", 00:15:18.374 "uuid": "165dd091-a259-41ef-9fce-f979697da550" 00:15:18.374 }, 00:15:18.374 { 00:15:18.374 "nsid": 2, 00:15:18.374 "bdev_name": "Malloc3", 00:15:18.374 "name": "Malloc3", 00:15:18.374 "nguid": "09A25FB809404B2B9B763594F411A01A", 00:15:18.374 "uuid": "09a25fb8-0940-4b2b-9b76-3594f411a01a" 00:15:18.374 } 00:15:18.374 ] 00:15:18.374 }, 00:15:18.374 { 00:15:18.374 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:18.374 "subtype": "NVMe", 00:15:18.374 "listen_addresses": [ 00:15:18.374 { 00:15:18.374 "trtype": "VFIOUSER", 00:15:18.374 "adrfam": "IPv4", 00:15:18.374 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:18.374 "trsvcid": "0" 00:15:18.374 } 00:15:18.374 ], 00:15:18.374 "allow_any_host": true, 00:15:18.374 "hosts": [], 00:15:18.374 "serial_number": "SPDK2", 00:15:18.374 "model_number": "SPDK bdev Controller", 00:15:18.374 "max_namespaces": 32, 00:15:18.374 "min_cntlid": 1, 00:15:18.374 "max_cntlid": 65519, 00:15:18.375 "namespaces": [ 00:15:18.375 { 00:15:18.375 "nsid": 1, 00:15:18.375 "bdev_name": "Malloc2", 00:15:18.375 "name": "Malloc2", 00:15:18.375 "nguid": "AD119DCB988B4AA6A00A8A8822DB3B8D", 00:15:18.375 "uuid": "ad119dcb-988b-4aa6-a00a-8a8822db3b8d" 00:15:18.375 } 00:15:18.375 ] 00:15:18.375 } 00:15:18.375 ] 00:15:18.375 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1175607 00:15:18.375 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:18.375 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:18.375 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:18.375 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:18.375 [2024-10-08 18:25:46.838361] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
00:15:18.375 [2024-10-08 18:25:46.838407] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1175746 ] 00:15:18.375 [2024-10-08 18:25:46.870579] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:18.375 [2024-10-08 18:25:46.884113] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:18.375 [2024-10-08 18:25:46.884148] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f4c83d91000 00:15:18.375 [2024-10-08 18:25:46.885110] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:18.375 [2024-10-08 18:25:46.886118] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:18.375 [2024-10-08 18:25:46.887125] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:18.375 [2024-10-08 18:25:46.888133] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:18.375 [2024-10-08 18:25:46.889137] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:18.375 [2024-10-08 18:25:46.890138] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:18.375 [2024-10-08 18:25:46.891150] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:18.375 [2024-10-08 18:25:46.892154] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:18.375 [2024-10-08 18:25:46.893171] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:18.375 [2024-10-08 18:25:46.893193] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f4c83d86000 00:15:18.375 [2024-10-08 18:25:46.894309] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:18.375 [2024-10-08 18:25:46.909077] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:18.375 [2024-10-08 18:25:46.909115] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:15:18.375 [2024-10-08 18:25:46.911198] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:18.375 [2024-10-08 18:25:46.911254] nvme_pcie_common.c: 149:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:18.375 [2024-10-08 18:25:46.911347] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:15:18.375 [2024-10-08 
18:25:46.911376] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:15:18.375 [2024-10-08 18:25:46.911387] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:15:18.635 [2024-10-08 18:25:46.912199] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:18.635 [2024-10-08 18:25:46.912222] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:15:18.635 [2024-10-08 18:25:46.912235] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:15:18.635 [2024-10-08 18:25:46.913201] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:18.636 [2024-10-08 18:25:46.913221] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:15:18.636 [2024-10-08 18:25:46.913234] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:15:18.636 [2024-10-08 18:25:46.914210] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:18.636 [2024-10-08 18:25:46.914233] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:18.636 [2024-10-08 18:25:46.915219] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:18.636 [2024-10-08 18:25:46.915240] nvme_ctrlr.c:3924:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:15:18.636 [2024-10-08 18:25:46.915249] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:15:18.636 [2024-10-08 18:25:46.915261] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:18.636 [2024-10-08 18:25:46.915370] nvme_ctrlr.c:4122:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:15:18.636 [2024-10-08 18:25:46.915378] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:18.636 [2024-10-08 18:25:46.915386] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:18.636 [2024-10-08 18:25:46.916229] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:18.636 [2024-10-08 18:25:46.917233] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:18.636 [2024-10-08 18:25:46.918245] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: 
offset 0x14, value 0x460001 00:15:18.636 [2024-10-08 18:25:46.919245] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:18.636 [2024-10-08 18:25:46.919326] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:18.636 [2024-10-08 18:25:46.920259] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:18.636 [2024-10-08 18:25:46.920279] nvme_ctrlr.c:3959:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:18.636 [2024-10-08 18:25:46.920293] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:15:18.636 [2024-10-08 18:25:46.920317] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:15:18.636 [2024-10-08 18:25:46.920333] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:15:18.636 [2024-10-08 18:25:46.920353] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:18.636 [2024-10-08 18:25:46.920363] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:18.636 [2024-10-08 18:25:46.920369] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:18.636 [2024-10-08 18:25:46.920385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:18.636 [2024-10-08 18:25:46.926678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:18.636 [2024-10-08 18:25:46.926701] nvme_ctrlr.c:2097:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:15:18.636 [2024-10-08 18:25:46.926710] nvme_ctrlr.c:2101:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:15:18.636 [2024-10-08 18:25:46.926717] nvme_ctrlr.c:2104:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:15:18.636 [2024-10-08 18:25:46.926724] nvme_ctrlr.c:2115:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:18.636 [2024-10-08 18:25:46.926732] nvme_ctrlr.c:2128:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:15:18.636 [2024-10-08 18:25:46.926740] nvme_ctrlr.c:2143:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:15:18.636 [2024-10-08 18:25:46.926748] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:15:18.636 [2024-10-08 18:25:46.926764] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:15:18.636 [2024-10-08 18:25:46.926781] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT 
CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:18.636 [2024-10-08 18:25:46.934661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:18.636 [2024-10-08 18:25:46.934685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:18.636 [2024-10-08 18:25:46.934699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:18.636 [2024-10-08 18:25:46.934710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:18.636 [2024-10-08 18:25:46.934722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:18.636 [2024-10-08 18:25:46.934731] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:15:18.636 [2024-10-08 18:25:46.934747] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:18.636 [2024-10-08 18:25:46.934762] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:18.636 [2024-10-08 18:25:46.942661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:18.636 [2024-10-08 18:25:46.942679] nvme_ctrlr.c:3065:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:15:18.636 [2024-10-08 18:25:46.942688] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:18.636 [2024-10-08 18:25:46.942700] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:15:18.636 [2024-10-08 18:25:46.942714] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:15:18.636 [2024-10-08 18:25:46.942730] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:18.636 [2024-10-08 18:25:46.950659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:18.636 [2024-10-08 18:25:46.950730] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:15:18.636 [2024-10-08 18:25:46.950746] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:15:18.636 [2024-10-08 18:25:46.950759] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:18.636 [2024-10-08 18:25:46.950768] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:18.636 [2024-10-08 18:25:46.950774] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number 
of PRP entries: 1 00:15:18.636 [2024-10-08 18:25:46.950784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:18.636 [2024-10-08 18:25:46.958661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:18.636 [2024-10-08 18:25:46.958684] nvme_ctrlr.c:4753:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:15:18.636 [2024-10-08 18:25:46.958701] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:15:18.636 [2024-10-08 18:25:46.958716] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:15:18.636 [2024-10-08 18:25:46.958729] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:18.636 [2024-10-08 18:25:46.958737] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:18.636 [2024-10-08 18:25:46.958743] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:18.636 [2024-10-08 18:25:46.958753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:18.636 [2024-10-08 18:25:46.966663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:18.636 [2024-10-08 18:25:46.966691] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:18.636 [2024-10-08 18:25:46.966707] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:18.636 [2024-10-08 18:25:46.966721] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:18.636 [2024-10-08 18:25:46.966729] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:18.636 [2024-10-08 18:25:46.966739] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:18.636 [2024-10-08 18:25:46.966750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:18.636 [2024-10-08 18:25:46.974660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:18.636 [2024-10-08 18:25:46.974682] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:18.636 [2024-10-08 18:25:46.974695] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:15:18.636 [2024-10-08 18:25:46.974709] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:15:18.636 [2024-10-08 18:25:46.974719] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:15:18.636 [2024-10-08 18:25:46.974727] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:18.636 [2024-10-08 18:25:46.974735] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:15:18.636 [2024-10-08 18:25:46.974743] nvme_ctrlr.c:3165:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:15:18.636 [2024-10-08 18:25:46.974751] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:15:18.636 [2024-10-08 18:25:46.974759] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:15:18.636 [2024-10-08 18:25:46.974783] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:18.636 [2024-10-08 18:25:46.982663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:18.637 [2024-10-08 18:25:46.982689] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:18.637 [2024-10-08 18:25:46.990660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:18.637 [2024-10-08 18:25:46.990686] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:18.637 [2024-10-08 18:25:46.998675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:18.637 [2024-10-08 18:25:46.998701] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:18.637 [2024-10-08 18:25:47.006662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:18.637 [2024-10-08 18:25:47.006694] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:18.637 [2024-10-08 18:25:47.006705] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:18.637 [2024-10-08 18:25:47.006712] nvme_pcie_common.c:1265:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:18.637 [2024-10-08 18:25:47.006718] nvme_pcie_common.c:1281:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:18.637 [2024-10-08 18:25:47.006724] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:18.637 [2024-10-08 18:25:47.006734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:18.637 [2024-10-08 18:25:47.006752] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:18.637 [2024-10-08 18:25:47.006762] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:18.637 [2024-10-08 18:25:47.006768] 
nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:18.637 [2024-10-08 18:25:47.006777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:18.637 [2024-10-08 18:25:47.006789] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:18.637 [2024-10-08 18:25:47.006797] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:18.637 [2024-10-08 18:25:47.006803] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:18.637 [2024-10-08 18:25:47.006812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:18.637 [2024-10-08 18:25:47.006824] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:18.637 [2024-10-08 18:25:47.006832] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:18.637 [2024-10-08 18:25:47.006838] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:18.637 [2024-10-08 18:25:47.006847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:18.637 [2024-10-08 18:25:47.014663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:18.637 [2024-10-08 18:25:47.014701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:18.637 [2024-10-08 18:25:47.014719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:18.637 [2024-10-08 18:25:47.014732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:18.637 ===================================================== 00:15:18.637 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:18.637 ===================================================== 00:15:18.637 Controller Capabilities/Features 00:15:18.637 ================================ 00:15:18.637 Vendor ID: 4e58 00:15:18.637 Subsystem Vendor ID: 4e58 00:15:18.637 Serial Number: SPDK2 00:15:18.637 Model Number: SPDK bdev Controller 00:15:18.637 Firmware Version: 25.01 00:15:18.637 Recommended Arb Burst: 6 00:15:18.637 IEEE OUI Identifier: 8d 6b 50 00:15:18.637 Multi-path I/O 00:15:18.637 May have multiple subsystem ports: Yes 00:15:18.637 May have multiple controllers: Yes 00:15:18.637 Associated with SR-IOV VF: No 00:15:18.637 Max Data Transfer Size: 131072 00:15:18.637 Max Number of Namespaces: 32 00:15:18.637 Max Number of I/O Queues: 127 00:15:18.637 NVMe Specification Version (VS): 1.3 00:15:18.637 NVMe Specification Version (Identify): 1.3 00:15:18.637 Maximum Queue Entries: 256 00:15:18.637 Contiguous Queues Required: Yes 00:15:18.637 Arbitration Mechanisms Supported 00:15:18.637 Weighted Round Robin: Not Supported 00:15:18.637 Vendor Specific: Not Supported 00:15:18.637 Reset Timeout: 15000 ms 00:15:18.637 Doorbell Stride: 4 bytes 00:15:18.637 NVM Subsystem Reset: Not Supported 00:15:18.637 Command 
Sets Supported 00:15:18.637 NVM Command Set: Supported 00:15:18.637 Boot Partition: Not Supported 00:15:18.637 Memory Page Size Minimum: 4096 bytes 00:15:18.637 Memory Page Size Maximum: 4096 bytes 00:15:18.637 Persistent Memory Region: Not Supported 00:15:18.637 Optional Asynchronous Events Supported 00:15:18.637 Namespace Attribute Notices: Supported 00:15:18.637 Firmware Activation Notices: Not Supported 00:15:18.637 ANA Change Notices: Not Supported 00:15:18.637 PLE Aggregate Log Change Notices: Not Supported 00:15:18.637 LBA Status Info Alert Notices: Not Supported 00:15:18.637 EGE Aggregate Log Change Notices: Not Supported 00:15:18.637 Normal NVM Subsystem Shutdown event: Not Supported 00:15:18.637 Zone Descriptor Change Notices: Not Supported 00:15:18.637 Discovery Log Change Notices: Not Supported 00:15:18.637 Controller Attributes 00:15:18.637 128-bit Host Identifier: Supported 00:15:18.637 Non-Operational Permissive Mode: Not Supported 00:15:18.637 NVM Sets: Not Supported 00:15:18.637 Read Recovery Levels: Not Supported 00:15:18.637 Endurance Groups: Not Supported 00:15:18.637 Predictable Latency Mode: Not Supported 00:15:18.637 Traffic Based Keep ALive: Not Supported 00:15:18.637 Namespace Granularity: Not Supported 00:15:18.637 SQ Associations: Not Supported 00:15:18.637 UUID List: Not Supported 00:15:18.637 Multi-Domain Subsystem: Not Supported 00:15:18.637 Fixed Capacity Management: Not Supported 00:15:18.637 Variable Capacity Management: Not Supported 00:15:18.637 Delete Endurance Group: Not Supported 00:15:18.637 Delete NVM Set: Not Supported 00:15:18.637 Extended LBA Formats Supported: Not Supported 00:15:18.637 Flexible Data Placement Supported: Not Supported 00:15:18.637 00:15:18.637 Controller Memory Buffer Support 00:15:18.637 ================================ 00:15:18.637 Supported: No 00:15:18.637 00:15:18.637 Persistent Memory Region Support 00:15:18.637 ================================ 00:15:18.637 Supported: No 00:15:18.637 00:15:18.637 Admin Command Set Attributes 00:15:18.637 ============================ 00:15:18.637 Security Send/Receive: Not Supported 00:15:18.637 Format NVM: Not Supported 00:15:18.637 Firmware Activate/Download: Not Supported 00:15:18.637 Namespace Management: Not Supported 00:15:18.637 Device Self-Test: Not Supported 00:15:18.637 Directives: Not Supported 00:15:18.637 NVMe-MI: Not Supported 00:15:18.637 Virtualization Management: Not Supported 00:15:18.637 Doorbell Buffer Config: Not Supported 00:15:18.637 Get LBA Status Capability: Not Supported 00:15:18.637 Command & Feature Lockdown Capability: Not Supported 00:15:18.637 Abort Command Limit: 4 00:15:18.637 Async Event Request Limit: 4 00:15:18.637 Number of Firmware Slots: N/A 00:15:18.637 Firmware Slot 1 Read-Only: N/A 00:15:18.637 Firmware Activation Without Reset: N/A 00:15:18.637 Multiple Update Detection Support: N/A 00:15:18.637 Firmware Update Granularity: No Information Provided 00:15:18.637 Per-Namespace SMART Log: No 00:15:18.637 Asymmetric Namespace Access Log Page: Not Supported 00:15:18.637 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:18.637 Command Effects Log Page: Supported 00:15:18.637 Get Log Page Extended Data: Supported 00:15:18.637 Telemetry Log Pages: Not Supported 00:15:18.637 Persistent Event Log Pages: Not Supported 00:15:18.637 Supported Log Pages Log Page: May Support 00:15:18.637 Commands Supported & Effects Log Page: Not Supported 00:15:18.637 Feature Identifiers & Effects Log Page:May Support 00:15:18.637 NVMe-MI Commands & Effects Log Page: May Support 
00:15:18.637 Data Area 4 for Telemetry Log: Not Supported 00:15:18.637 Error Log Page Entries Supported: 128 00:15:18.637 Keep Alive: Supported 00:15:18.637 Keep Alive Granularity: 10000 ms 00:15:18.637 00:15:18.637 NVM Command Set Attributes 00:15:18.637 ========================== 00:15:18.637 Submission Queue Entry Size 00:15:18.637 Max: 64 00:15:18.637 Min: 64 00:15:18.637 Completion Queue Entry Size 00:15:18.637 Max: 16 00:15:18.637 Min: 16 00:15:18.637 Number of Namespaces: 32 00:15:18.637 Compare Command: Supported 00:15:18.637 Write Uncorrectable Command: Not Supported 00:15:18.637 Dataset Management Command: Supported 00:15:18.637 Write Zeroes Command: Supported 00:15:18.637 Set Features Save Field: Not Supported 00:15:18.637 Reservations: Not Supported 00:15:18.637 Timestamp: Not Supported 00:15:18.637 Copy: Supported 00:15:18.637 Volatile Write Cache: Present 00:15:18.637 Atomic Write Unit (Normal): 1 00:15:18.637 Atomic Write Unit (PFail): 1 00:15:18.637 Atomic Compare & Write Unit: 1 00:15:18.637 Fused Compare & Write: Supported 00:15:18.637 Scatter-Gather List 00:15:18.637 SGL Command Set: Supported (Dword aligned) 00:15:18.637 SGL Keyed: Not Supported 00:15:18.637 SGL Bit Bucket Descriptor: Not Supported 00:15:18.637 SGL Metadata Pointer: Not Supported 00:15:18.637 Oversized SGL: Not Supported 00:15:18.637 SGL Metadata Address: Not Supported 00:15:18.637 SGL Offset: Not Supported 00:15:18.637 Transport SGL Data Block: Not Supported 00:15:18.637 Replay Protected Memory Block: Not Supported 00:15:18.637 00:15:18.637 Firmware Slot Information 00:15:18.637 ========================= 00:15:18.638 Active slot: 1 00:15:18.638 Slot 1 Firmware Revision: 25.01 00:15:18.638 00:15:18.638 00:15:18.638 Commands Supported and Effects 00:15:18.638 ============================== 00:15:18.638 Admin Commands 00:15:18.638 -------------- 00:15:18.638 Get Log Page (02h): Supported 00:15:18.638 Identify (06h): Supported 00:15:18.638 Abort (08h): Supported 00:15:18.638 Set Features (09h): Supported 00:15:18.638 Get Features (0Ah): Supported 00:15:18.638 Asynchronous Event Request (0Ch): Supported 00:15:18.638 Keep Alive (18h): Supported 00:15:18.638 I/O Commands 00:15:18.638 ------------ 00:15:18.638 Flush (00h): Supported LBA-Change 00:15:18.638 Write (01h): Supported LBA-Change 00:15:18.638 Read (02h): Supported 00:15:18.638 Compare (05h): Supported 00:15:18.638 Write Zeroes (08h): Supported LBA-Change 00:15:18.638 Dataset Management (09h): Supported LBA-Change 00:15:18.638 Copy (19h): Supported LBA-Change 00:15:18.638 00:15:18.638 Error Log 00:15:18.638 ========= 00:15:18.638 00:15:18.638 Arbitration 00:15:18.638 =========== 00:15:18.638 Arbitration Burst: 1 00:15:18.638 00:15:18.638 Power Management 00:15:18.638 ================ 00:15:18.638 Number of Power States: 1 00:15:18.638 Current Power State: Power State #0 00:15:18.638 Power State #0: 00:15:18.638 Max Power: 0.00 W 00:15:18.638 Non-Operational State: Operational 00:15:18.638 Entry Latency: Not Reported 00:15:18.638 Exit Latency: Not Reported 00:15:18.638 Relative Read Throughput: 0 00:15:18.638 Relative Read Latency: 0 00:15:18.638 Relative Write Throughput: 0 00:15:18.638 Relative Write Latency: 0 00:15:18.638 Idle Power: Not Reported 00:15:18.638 Active Power: Not Reported 00:15:18.638 Non-Operational Permissive Mode: Not Supported 00:15:18.638 00:15:18.638 Health Information 00:15:18.638 ================== 00:15:18.638 Critical Warnings: 00:15:18.638 Available Spare Space: OK 00:15:18.638 Temperature: OK 00:15:18.638 Device 
Reliability: OK 00:15:18.638 Read Only: No 00:15:18.638 Volatile Memory Backup: OK 00:15:18.638 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:18.638 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:18.638 Available Spare: 0% 00:15:18.638 Available Sp[2024-10-08 18:25:47.014852] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:18.638 [2024-10-08 18:25:47.022663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:18.638 [2024-10-08 18:25:47.022714] nvme_ctrlr.c:4417:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:15:18.638 [2024-10-08 18:25:47.022732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.638 [2024-10-08 18:25:47.022743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.638 [2024-10-08 18:25:47.022753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.638 [2024-10-08 18:25:47.022762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.638 [2024-10-08 18:25:47.022844] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:18.638 [2024-10-08 18:25:47.022865] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:18.638 [2024-10-08 18:25:47.023848] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:18.638 [2024-10-08 18:25:47.023921] nvme_ctrlr.c:1167:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:15:18.638 [2024-10-08 18:25:47.023936] nvme_ctrlr.c:1170:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:15:18.638 [2024-10-08 18:25:47.026660] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:18.638 [2024-10-08 18:25:47.026685] nvme_ctrlr.c:1289:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 2 milliseconds 00:15:18.638 [2024-10-08 18:25:47.026743] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:18.638 [2024-10-08 18:25:47.027970] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:18.638 are Threshold: 0% 00:15:18.638 Life Percentage Used: 0% 00:15:18.638 Data Units Read: 0 00:15:18.638 Data Units Written: 0 00:15:18.638 Host Read Commands: 0 00:15:18.638 Host Write Commands: 0 00:15:18.638 Controller Busy Time: 0 minutes 00:15:18.638 Power Cycles: 0 00:15:18.638 Power On Hours: 0 hours 00:15:18.638 Unsafe Shutdowns: 0 00:15:18.638 Unrecoverable Media Errors: 0 00:15:18.638 Lifetime Error Log Entries: 0 00:15:18.638 Warning Temperature Time: 0 minutes 00:15:18.638 Critical Temperature Time: 0 minutes 00:15:18.638 00:15:18.638 Number of Queues 00:15:18.638 ================ 00:15:18.638 Number of 
I/O Submission Queues: 127 00:15:18.638 Number of I/O Completion Queues: 127 00:15:18.638 00:15:18.638 Active Namespaces 00:15:18.638 ================= 00:15:18.638 Namespace ID:1 00:15:18.638 Error Recovery Timeout: Unlimited 00:15:18.638 Command Set Identifier: NVM (00h) 00:15:18.638 Deallocate: Supported 00:15:18.638 Deallocated/Unwritten Error: Not Supported 00:15:18.638 Deallocated Read Value: Unknown 00:15:18.638 Deallocate in Write Zeroes: Not Supported 00:15:18.638 Deallocated Guard Field: 0xFFFF 00:15:18.638 Flush: Supported 00:15:18.638 Reservation: Supported 00:15:18.638 Namespace Sharing Capabilities: Multiple Controllers 00:15:18.638 Size (in LBAs): 131072 (0GiB) 00:15:18.638 Capacity (in LBAs): 131072 (0GiB) 00:15:18.638 Utilization (in LBAs): 131072 (0GiB) 00:15:18.638 NGUID: AD119DCB988B4AA6A00A8A8822DB3B8D 00:15:18.638 UUID: ad119dcb-988b-4aa6-a00a-8a8822db3b8d 00:15:18.638 Thin Provisioning: Not Supported 00:15:18.638 Per-NS Atomic Units: Yes 00:15:18.638 Atomic Boundary Size (Normal): 0 00:15:18.638 Atomic Boundary Size (PFail): 0 00:15:18.638 Atomic Boundary Offset: 0 00:15:18.638 Maximum Single Source Range Length: 65535 00:15:18.638 Maximum Copy Length: 65535 00:15:18.638 Maximum Source Range Count: 1 00:15:18.638 NGUID/EUI64 Never Reused: No 00:15:18.638 Namespace Write Protected: No 00:15:18.638 Number of LBA Formats: 1 00:15:18.638 Current LBA Format: LBA Format #00 00:15:18.638 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:18.638 00:15:18.638 18:25:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:18.899 [2024-10-08 18:25:47.270724] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:24.174 Initializing NVMe Controllers 00:15:24.174 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:24.174 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:24.174 Initialization complete. Launching workers. 
00:15:24.174 ======================================================== 00:15:24.174 Latency(us) 00:15:24.174 Device Information : IOPS MiB/s Average min max 00:15:24.174 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31842.39 124.38 4021.33 1219.79 9275.60 00:15:24.174 ======================================================== 00:15:24.174 Total : 31842.39 124.38 4021.33 1219.79 9275.60 00:15:24.174 00:15:24.174 [2024-10-08 18:25:52.381028] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:24.175 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:24.175 [2024-10-08 18:25:52.679857] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:29.513 Initializing NVMe Controllers 00:15:29.513 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:29.513 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:29.513 Initialization complete. Launching workers. 00:15:29.513 ======================================================== 00:15:29.513 Latency(us) 00:15:29.513 Device Information : IOPS MiB/s Average min max 00:15:29.513 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 30351.92 118.56 4216.66 1246.12 7591.37 00:15:29.513 ======================================================== 00:15:29.513 Total : 30351.92 118.56 4216.66 1246.12 7591.37 00:15:29.513 00:15:29.513 [2024-10-08 18:25:57.703972] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:29.513 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:29.513 [2024-10-08 18:25:57.937573] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:34.903 [2024-10-08 18:26:03.074818] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:34.903 Initializing NVMe Controllers 00:15:34.903 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:34.903 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:34.903 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:34.903 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:34.903 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:34.903 Initialization complete. Launching workers. 
00:15:34.903 Starting thread on core 2 00:15:34.903 Starting thread on core 3 00:15:34.903 Starting thread on core 1 00:15:34.903 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:35.164 [2024-10-08 18:26:03.455567] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:38.454 [2024-10-08 18:26:06.516172] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:38.454 Initializing NVMe Controllers 00:15:38.454 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:38.454 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:38.454 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:38.454 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:38.454 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:38.454 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:38.454 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:38.454 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:38.454 Initialization complete. Launching workers. 00:15:38.454 Starting thread on core 1 with urgent priority queue 00:15:38.454 Starting thread on core 2 with urgent priority queue 00:15:38.454 Starting thread on core 3 with urgent priority queue 00:15:38.454 Starting thread on core 0 with urgent priority queue 00:15:38.454 SPDK bdev Controller (SPDK2 ) core 0: 5626.67 IO/s 17.77 secs/100000 ios 00:15:38.454 SPDK bdev Controller (SPDK2 ) core 1: 4991.33 IO/s 20.03 secs/100000 ios 00:15:38.454 SPDK bdev Controller (SPDK2 ) core 2: 5658.00 IO/s 17.67 secs/100000 ios 00:15:38.454 SPDK bdev Controller (SPDK2 ) core 3: 5257.00 IO/s 19.02 secs/100000 ios 00:15:38.455 ======================================================== 00:15:38.455 00:15:38.455 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:38.455 [2024-10-08 18:26:06.889201] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:38.455 Initializing NVMe Controllers 00:15:38.455 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:38.455 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:38.455 Namespace ID: 1 size: 0GB 00:15:38.455 Initialization complete. 00:15:38.455 INFO: using host memory buffer for IO 00:15:38.455 Hello world! 
00:15:38.455 [2024-10-08 18:26:06.898414] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:38.455 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:38.714 [2024-10-08 18:26:07.203052] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:40.095 Initializing NVMe Controllers 00:15:40.095 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:40.095 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:40.095 Initialization complete. Launching workers. 00:15:40.095 submit (in ns) avg, min, max = 8208.7, 3502.2, 4029887.8 00:15:40.095 complete (in ns) avg, min, max = 26891.9, 2090.0, 4015468.9 00:15:40.095 00:15:40.095 Submit histogram 00:15:40.095 ================ 00:15:40.095 Range in us Cumulative Count 00:15:40.095 3.484 - 3.508: 0.0380% ( 5) 00:15:40.095 3.508 - 3.532: 0.4106% ( 49) 00:15:40.095 3.532 - 3.556: 1.7868% ( 181) 00:15:40.095 3.556 - 3.579: 5.4060% ( 476) 00:15:40.095 3.579 - 3.603: 11.0021% ( 736) 00:15:40.095 3.603 - 3.627: 20.0122% ( 1185) 00:15:40.095 3.627 - 3.650: 30.7558% ( 1413) 00:15:40.095 3.650 - 3.674: 40.4577% ( 1276) 00:15:40.095 3.674 - 3.698: 48.6238% ( 1074) 00:15:40.095 3.698 - 3.721: 56.0599% ( 978) 00:15:40.095 3.721 - 3.745: 60.6676% ( 606) 00:15:40.095 3.745 - 3.769: 65.1232% ( 586) 00:15:40.095 3.769 - 3.793: 68.3622% ( 426) 00:15:40.095 3.793 - 3.816: 71.7838% ( 450) 00:15:40.095 3.816 - 3.840: 74.5894% ( 369) 00:15:40.095 3.840 - 3.864: 78.1782% ( 472) 00:15:40.095 3.864 - 3.887: 81.7670% ( 472) 00:15:40.095 3.887 - 3.911: 84.5727% ( 369) 00:15:40.095 3.911 - 3.935: 87.0590% ( 327) 00:15:40.095 3.935 - 3.959: 88.9294% ( 246) 00:15:40.095 3.959 - 3.982: 90.3665% ( 189) 00:15:40.095 3.982 - 4.006: 91.7807% ( 186) 00:15:40.095 4.006 - 4.030: 92.9592% ( 155) 00:15:40.095 4.030 - 4.053: 93.9629% ( 132) 00:15:40.095 4.053 - 4.077: 94.6244% ( 87) 00:15:40.095 4.077 - 4.101: 95.1794% ( 73) 00:15:40.095 4.101 - 4.124: 95.5140% ( 44) 00:15:40.095 4.124 - 4.148: 95.7573% ( 32) 00:15:40.095 4.148 - 4.172: 95.9322% ( 23) 00:15:40.095 4.172 - 4.196: 96.0766% ( 19) 00:15:40.095 4.196 - 4.219: 96.1603% ( 11) 00:15:40.095 4.219 - 4.243: 96.2971% ( 18) 00:15:40.095 4.243 - 4.267: 96.3884% ( 12) 00:15:40.095 4.267 - 4.290: 96.4796% ( 12) 00:15:40.095 4.290 - 4.314: 96.5633% ( 11) 00:15:40.095 4.314 - 4.338: 96.6545% ( 12) 00:15:40.095 4.338 - 4.361: 96.7153% ( 8) 00:15:40.095 4.361 - 4.385: 96.7305% ( 2) 00:15:40.095 4.385 - 4.409: 96.7457% ( 2) 00:15:40.095 4.409 - 4.433: 96.7609% ( 2) 00:15:40.095 4.433 - 4.456: 96.7838% ( 3) 00:15:40.095 4.456 - 4.480: 96.8218% ( 5) 00:15:40.095 4.480 - 4.504: 96.8370% ( 2) 00:15:40.095 4.504 - 4.527: 96.8446% ( 1) 00:15:40.095 4.527 - 4.551: 96.8522% ( 1) 00:15:40.095 4.575 - 4.599: 96.8598% ( 1) 00:15:40.095 4.599 - 4.622: 96.8826% ( 3) 00:15:40.095 4.622 - 4.646: 96.8978% ( 2) 00:15:40.095 4.646 - 4.670: 96.9206% ( 3) 00:15:40.095 4.670 - 4.693: 96.9282% ( 1) 00:15:40.095 4.693 - 4.717: 96.9662% ( 5) 00:15:40.095 4.717 - 4.741: 97.0119% ( 6) 00:15:40.095 4.741 - 4.764: 97.0423% ( 4) 00:15:40.095 4.764 - 4.788: 97.0499% ( 1) 00:15:40.095 4.788 - 4.812: 97.0803% ( 4) 00:15:40.095 4.812 - 4.836: 97.1335% ( 7) 00:15:40.095 4.836 - 4.859: 97.1791% ( 6) 00:15:40.095 4.859 - 
4.883: 97.2324% ( 7) 00:15:40.095 4.883 - 4.907: 97.2628% ( 4) 00:15:40.095 4.907 - 4.930: 97.3084% ( 6) 00:15:40.095 4.930 - 4.954: 97.3996% ( 12) 00:15:40.095 4.954 - 4.978: 97.4453% ( 6) 00:15:40.095 4.978 - 5.001: 97.4909% ( 6) 00:15:40.095 5.001 - 5.025: 97.5365% ( 6) 00:15:40.095 5.025 - 5.049: 97.5669% ( 4) 00:15:40.095 5.049 - 5.073: 97.6125% ( 6) 00:15:40.095 5.073 - 5.096: 97.6353% ( 3) 00:15:40.095 5.096 - 5.120: 97.6505% ( 2) 00:15:40.095 5.120 - 5.144: 97.6734% ( 3) 00:15:40.095 5.144 - 5.167: 97.7190% ( 6) 00:15:40.095 5.167 - 5.191: 97.7266% ( 1) 00:15:40.095 5.191 - 5.215: 97.7342% ( 1) 00:15:40.095 5.215 - 5.239: 97.7418% ( 1) 00:15:40.095 5.239 - 5.262: 97.7570% ( 2) 00:15:40.095 5.262 - 5.286: 97.7722% ( 2) 00:15:40.095 5.286 - 5.310: 97.7874% ( 2) 00:15:40.095 5.404 - 5.428: 97.7950% ( 1) 00:15:40.095 5.428 - 5.452: 97.8026% ( 1) 00:15:40.095 5.523 - 5.547: 97.8102% ( 1) 00:15:40.095 5.618 - 5.641: 97.8178% ( 1) 00:15:40.095 5.665 - 5.689: 97.8254% ( 1) 00:15:40.095 5.713 - 5.736: 97.8330% ( 1) 00:15:40.095 5.760 - 5.784: 97.8406% ( 1) 00:15:40.095 5.784 - 5.807: 97.8558% ( 2) 00:15:40.095 5.879 - 5.902: 97.8634% ( 1) 00:15:40.095 5.902 - 5.926: 97.8786% ( 2) 00:15:40.095 5.950 - 5.973: 97.8863% ( 1) 00:15:40.095 5.997 - 6.021: 97.8939% ( 1) 00:15:40.095 6.044 - 6.068: 97.9015% ( 1) 00:15:40.095 6.116 - 6.163: 97.9091% ( 1) 00:15:40.095 6.163 - 6.210: 97.9167% ( 1) 00:15:40.095 6.210 - 6.258: 97.9243% ( 1) 00:15:40.095 6.258 - 6.305: 97.9395% ( 2) 00:15:40.095 6.305 - 6.353: 97.9471% ( 1) 00:15:40.095 6.353 - 6.400: 97.9547% ( 1) 00:15:40.095 6.400 - 6.447: 97.9623% ( 1) 00:15:40.095 6.495 - 6.542: 97.9775% ( 2) 00:15:40.095 6.542 - 6.590: 97.9851% ( 1) 00:15:40.095 6.590 - 6.637: 97.9927% ( 1) 00:15:40.095 6.637 - 6.684: 98.0155% ( 3) 00:15:40.095 6.779 - 6.827: 98.0231% ( 1) 00:15:40.095 6.827 - 6.874: 98.0383% ( 2) 00:15:40.095 6.874 - 6.921: 98.0459% ( 1) 00:15:40.095 7.016 - 7.064: 98.0535% ( 1) 00:15:40.095 7.064 - 7.111: 98.0763% ( 3) 00:15:40.095 7.111 - 7.159: 98.0915% ( 2) 00:15:40.095 7.159 - 7.206: 98.1068% ( 2) 00:15:40.095 7.206 - 7.253: 98.1296% ( 3) 00:15:40.095 7.253 - 7.301: 98.1372% ( 1) 00:15:40.095 7.301 - 7.348: 98.1524% ( 2) 00:15:40.095 7.348 - 7.396: 98.1600% ( 1) 00:15:40.095 7.396 - 7.443: 98.1752% ( 2) 00:15:40.095 7.443 - 7.490: 98.1904% ( 2) 00:15:40.095 7.490 - 7.538: 98.2132% ( 3) 00:15:40.095 7.538 - 7.585: 98.2208% ( 1) 00:15:40.095 7.585 - 7.633: 98.2284% ( 1) 00:15:40.095 7.633 - 7.680: 98.2436% ( 2) 00:15:40.095 7.680 - 7.727: 98.2588% ( 2) 00:15:40.095 7.727 - 7.775: 98.2664% ( 1) 00:15:40.095 7.822 - 7.870: 98.2740% ( 1) 00:15:40.095 7.917 - 7.964: 98.2816% ( 1) 00:15:40.095 7.964 - 8.012: 98.3120% ( 4) 00:15:40.095 8.012 - 8.059: 98.3196% ( 1) 00:15:40.095 8.107 - 8.154: 98.3273% ( 1) 00:15:40.095 8.201 - 8.249: 98.3501% ( 3) 00:15:40.095 8.249 - 8.296: 98.3653% ( 2) 00:15:40.095 8.296 - 8.344: 98.3729% ( 1) 00:15:40.095 8.391 - 8.439: 98.3805% ( 1) 00:15:40.095 8.439 - 8.486: 98.3957% ( 2) 00:15:40.096 8.533 - 8.581: 98.4109% ( 2) 00:15:40.096 8.628 - 8.676: 98.4185% ( 1) 00:15:40.096 8.676 - 8.723: 98.4261% ( 1) 00:15:40.096 8.723 - 8.770: 98.4413% ( 2) 00:15:40.096 8.818 - 8.865: 98.4565% ( 2) 00:15:40.096 8.865 - 8.913: 98.4641% ( 1) 00:15:40.096 9.007 - 9.055: 98.4717% ( 1) 00:15:40.096 9.102 - 9.150: 98.4793% ( 1) 00:15:40.096 9.244 - 9.292: 98.4869% ( 1) 00:15:40.096 9.292 - 9.339: 98.5097% ( 3) 00:15:40.096 9.387 - 9.434: 98.5173% ( 1) 00:15:40.096 9.481 - 9.529: 98.5325% ( 2) 00:15:40.096 9.576 - 9.624: 98.5477% ( 2) 
00:15:40.096 9.624 - 9.671: 98.5554% ( 1) 00:15:40.096 9.719 - 9.766: 98.5630% ( 1) 00:15:40.096 9.766 - 9.813: 98.5782% ( 2) 00:15:40.096 9.861 - 9.908: 98.5858% ( 1) 00:15:40.096 9.956 - 10.003: 98.5934% ( 1) 00:15:40.096 10.524 - 10.572: 98.6010% ( 1) 00:15:40.096 10.619 - 10.667: 98.6086% ( 1) 00:15:40.096 10.714 - 10.761: 98.6162% ( 1) 00:15:40.096 10.761 - 10.809: 98.6314% ( 2) 00:15:40.096 10.809 - 10.856: 98.6466% ( 2) 00:15:40.096 10.856 - 10.904: 98.6618% ( 2) 00:15:40.096 10.951 - 10.999: 98.6694% ( 1) 00:15:40.096 11.188 - 11.236: 98.6922% ( 3) 00:15:40.096 11.330 - 11.378: 98.7074% ( 2) 00:15:40.096 11.520 - 11.567: 98.7226% ( 2) 00:15:40.096 11.615 - 11.662: 98.7302% ( 1) 00:15:40.096 11.662 - 11.710: 98.7454% ( 2) 00:15:40.096 11.804 - 11.852: 98.7530% ( 1) 00:15:40.096 11.852 - 11.899: 98.7682% ( 2) 00:15:40.096 11.899 - 11.947: 98.7759% ( 1) 00:15:40.096 12.089 - 12.136: 98.7835% ( 1) 00:15:40.096 12.231 - 12.326: 98.7911% ( 1) 00:15:40.096 12.326 - 12.421: 98.7987% ( 1) 00:15:40.096 12.516 - 12.610: 98.8215% ( 3) 00:15:40.096 12.610 - 12.705: 98.8367% ( 2) 00:15:40.096 12.705 - 12.800: 98.8443% ( 1) 00:15:40.096 12.895 - 12.990: 98.8519% ( 1) 00:15:40.096 12.990 - 13.084: 98.8823% ( 4) 00:15:40.096 13.084 - 13.179: 98.9051% ( 3) 00:15:40.096 13.274 - 13.369: 98.9127% ( 1) 00:15:40.096 13.369 - 13.464: 98.9203% ( 1) 00:15:40.096 13.559 - 13.653: 98.9279% ( 1) 00:15:40.096 13.748 - 13.843: 98.9355% ( 1) 00:15:40.096 13.843 - 13.938: 98.9431% ( 1) 00:15:40.096 13.938 - 14.033: 98.9583% ( 2) 00:15:40.096 14.033 - 14.127: 98.9735% ( 2) 00:15:40.096 14.412 - 14.507: 98.9811% ( 1) 00:15:40.096 14.507 - 14.601: 98.9887% ( 1) 00:15:40.096 14.601 - 14.696: 98.9964% ( 1) 00:15:40.096 14.981 - 15.076: 99.0040% ( 1) 00:15:40.096 15.265 - 15.360: 99.0116% ( 1) 00:15:40.096 15.360 - 15.455: 99.0268% ( 2) 00:15:40.096 15.550 - 15.644: 99.0344% ( 1) 00:15:40.096 16.972 - 17.067: 99.0496% ( 2) 00:15:40.096 17.161 - 17.256: 99.0800% ( 4) 00:15:40.096 17.256 - 17.351: 99.0952% ( 2) 00:15:40.096 17.351 - 17.446: 99.1180% ( 3) 00:15:40.096 17.446 - 17.541: 99.1484% ( 4) 00:15:40.096 17.541 - 17.636: 99.1864% ( 5) 00:15:40.096 17.636 - 17.730: 99.2168% ( 4) 00:15:40.096 17.730 - 17.825: 99.2777% ( 8) 00:15:40.096 17.825 - 17.920: 99.3765% ( 13) 00:15:40.096 17.920 - 18.015: 99.4297% ( 7) 00:15:40.096 18.015 - 18.110: 99.4602% ( 4) 00:15:40.096 18.110 - 18.204: 99.4906% ( 4) 00:15:40.096 18.204 - 18.299: 99.5058% ( 2) 00:15:40.096 18.299 - 18.394: 99.5362% ( 4) 00:15:40.096 18.394 - 18.489: 99.6274% ( 12) 00:15:40.096 18.489 - 18.584: 99.7111% ( 11) 00:15:40.096 18.584 - 18.679: 99.7415% ( 4) 00:15:40.096 18.679 - 18.773: 99.7719% ( 4) 00:15:40.096 18.868 - 18.963: 99.7871% ( 2) 00:15:40.096 18.963 - 19.058: 99.7947% ( 1) 00:15:40.096 19.153 - 19.247: 99.8099% ( 2) 00:15:40.096 19.247 - 19.342: 99.8175% ( 1) 00:15:40.096 19.342 - 19.437: 99.8327% ( 2) 00:15:40.096 19.816 - 19.911: 99.8403% ( 1) 00:15:40.096 20.385 - 20.480: 99.8479% ( 1) 00:15:40.096 22.661 - 22.756: 99.8555% ( 1) 00:15:40.096 23.514 - 23.609: 99.8631% ( 1) 00:15:40.096 25.410 - 25.600: 99.8707% ( 1) 00:15:40.096 27.496 - 27.686: 99.8783% ( 1) 00:15:40.096 28.065 - 28.255: 99.8859% ( 1) 00:15:40.096 28.824 - 29.013: 99.8936% ( 1) 00:15:40.096 3980.705 - 4004.978: 99.9696% ( 10) 00:15:40.096 4004.978 - 4029.250: 99.9924% ( 3) 00:15:40.096 4029.250 - 4053.523: 100.0000% ( 1) 00:15:40.096 00:15:40.096 Complete histogram 00:15:40.096 ================== 00:15:40.096 Range in us Cumulative Count 00:15:40.096 2.086 - 2.098: 0.1977% ( 
26) 00:15:40.096 2.098 - 2.110: 2.5852% ( 314) 00:15:40.096 2.110 - 2.121: 11.5572% ( 1180) 00:15:40.096 2.121 - 2.133: 22.8026% ( 1479) 00:15:40.096 2.133 - 2.145: 39.4237% ( 2186) 00:15:40.096 2.145 - 2.157: 48.9659% ( 1255) 00:15:40.096 2.157 - 2.169: 58.5386% ( 1259) 00:15:40.096 2.169 - 2.181: 66.0964% ( 994) 00:15:40.096 2.181 - 2.193: 69.2898% ( 420) 00:15:40.096 2.193 - 2.204: 73.4641% ( 549) 00:15:40.096 2.204 - 2.216: 78.8473% ( 708) 00:15:40.096 2.216 - 2.228: 82.3297% ( 458) 00:15:40.096 2.228 - 2.240: 86.3975% ( 535) 00:15:40.096 2.240 - 2.252: 89.7050% ( 435) 00:15:40.096 2.252 - 2.264: 90.4349% ( 96) 00:15:40.096 2.264 - 2.276: 91.0888% ( 86) 00:15:40.096 2.276 - 2.287: 92.4726% ( 182) 00:15:40.096 2.287 - 2.299: 93.7348% ( 166) 00:15:40.096 2.299 - 2.311: 94.3887% ( 86) 00:15:40.096 2.311 - 2.323: 94.8981% ( 67) 00:15:40.096 2.323 - 2.335: 95.1262% ( 30) 00:15:40.096 2.335 - 2.347: 95.2707% ( 19) 00:15:40.096 2.347 - 2.359: 95.4532% ( 24) 00:15:40.096 2.359 - 2.370: 95.6813% ( 30) 00:15:40.096 2.370 - 2.382: 95.7953% ( 15) 00:15:40.096 2.382 - 2.394: 95.8333% ( 5) 00:15:40.096 2.394 - 2.406: 95.9246% ( 12) 00:15:40.096 2.406 - 2.418: 96.0614% ( 18) 00:15:40.096 2.418 - 2.430: 96.3960% ( 44) 00:15:40.096 2.430 - 2.441: 96.7001% ( 40) 00:15:40.096 2.441 - 2.453: 96.9738% ( 36) 00:15:40.096 2.453 - 2.465: 97.2400% ( 35) 00:15:40.096 2.465 - 2.477: 97.4377% ( 26) 00:15:40.097 2.477 - 2.489: 97.6353% ( 26) 00:15:40.097 2.489 - 2.501: 97.8178% ( 24) 00:15:40.097 2.501 - 2.513: 97.8863% ( 9) 00:15:40.097 2.513 - 2.524: 98.0003% ( 15) 00:15:40.097 2.524 - 2.536: 98.0459% ( 6) 00:15:40.097 2.536 - 2.548: 98.0763% ( 4) 00:15:40.097 2.548 - 2.560: 98.0991% ( 3) 00:15:40.097 2.560 - 2.572: 98.1600% ( 8) 00:15:40.097 2.572 - 2.584: 98.1980% ( 5) 00:15:40.097 2.584 - 2.596: 98.2360% ( 5) 00:15:40.097 2.596 - 2.607: 98.2968% ( 8) 00:15:40.097 2.607 - 2.619: 98.3273% ( 4) 00:15:40.097 2.619 - 2.631: 98.3425% ( 2) 00:15:40.097 2.631 - 2.643: 98.3501% ( 1) 00:15:40.097 2.655 - 2.667: 98.3729% ( 3) 00:15:40.097 2.667 - 2.679: 98.3805% ( 1) 00:15:40.097 2.679 - 2.690: 98.4033% ( 3) 00:15:40.097 2.702 - 2.714: 98.4109% ( 1) 00:15:40.097 2.714 - 2.726: 98.4185% ( 1) 00:15:40.097 2.738 - 2.750: 98.4337% ( 2) 00:15:40.097 2.761 - 2.773: 98.4413% ( 1) 00:15:40.097 2.785 - 2.797: 98.4489% ( 1) 00:15:40.097 2.797 - 2.809: 98.4565% ( 1) 00:15:40.097 2.844 - 2.856: 98.4717% ( 2) 00:15:40.097 2.880 - 2.892: 98.4793% ( 1) 00:15:40.097 2.904 - 2.916: 98.4869% ( 1) 00:15:40.097 2.916 - 2.927: 98.4945% ( 1) 00:15:40.097 2.951 - 2.963: 98.5021% ( 1) 00:15:40.097 2.987 - 2.999: 98.5097% ( 1) 00:15:40.097 3.224 - 3.247: 98.5173% ( 1) 00:15:40.097 3.603 - 3.627: 98.5249% ( 1) 00:15:40.097 3.627 - 3.650: 98.5477% ( 3) 00:15:40.097 3.650 - 3.674: 98.5554% ( 1) 00:15:40.097 3.698 - 3.721: 98.5782% ( 3) 00:15:40.097 3.721 - 3.745: 98.5934% ( 2) 00:15:40.097 3.745 - 3.769: 98.6086% ( 2) 00:15:40.097 3.769 - 3.793: 98.6162% ( 1) 00:15:40.097 3.793 - 3.816: 98.6314% ( 2) 00:15:40.097 3.816 - 3.840: 98.6466% ( 2) 00:15:40.097 3.840 - 3.864: 98.6618% ( 2) 00:15:40.097 3.887 - 3.911: 98.6694% ( 1) 00:15:40.097 3.935 - 3.959: 98.6770% ( 1) 00:15:40.097 4.006 - 4.030: 98.6846% ( 1) 00:15:40.097 4.219 - 4.243: 98.6922% ( 1) 00:15:40.097 5.001 - 5.025: 98.6998% ( 1) 00:15:40.097 5.144 - 5.167: 98.7074% ( 1) 00:15:40.097 5.167 - 5.191: 98.7150% ( 1) 00:15:40.097 5.357 - 5.381: 98.7226% ( 1) 00:15:40.097 6.021 - 6.044: 98.7302% ( 1) 00:15:40.097 6.116 - 6.163: 98.7378% ( 1) 00:15:40.097 6.163 - 6.210: 98.7454% ( 1) 
00:15:40.097 6.305 - 6.353: 98.7530% ( 1) 00:15:40.097 6.447 - 6.495: 98.7682% ( 2) 00:15:40.097 6.542 - 6.590: 98.7759% ( 1) 00:15:40.097 6.590 - 6.637: 98.7835% ( 1) 00:15:40.097 6.637 - 6.684: 98.7987% ( 2) 00:15:40.097 6.779 - 6.827: 98.8063% ( 1) 00:15:40.097 7.064 - 7.111: 98.8215% ( 2) 00:15:40.097 7.633 - 7.680: 98.8367% ( 2) 00:15:40.097 7.870 - 7.917: 98.8443% ( 1) 00:15:40.097 8.296 - 8.344: 98.8519% ( 1) 00:15:40.097 9.150 - 9.197: 98.8595% ( 1) 00:15:40.097 13.179 - 13.274: 98.8671% ( 1) 00:15:40.097 13.843 - 13.938: 98.8747% ( 1) 00:15:40.097 15.455 - 15.550: 98.8823% ( 1) 00:15:40.097 15.550 - 15.644: 98.8899% ( 1) 00:15:40.097 15.644 - 15.739: 98.8975% ( 1) 00:15:40.097 15.739 - 15.834: 98.9051% ( 1) 00:15:40.097 15.834 - 15.929: 98.9279% ( 3) 00:15:40.097 15.929 - 16.024: 98.9583% ( 4) 00:15:40.097 16.024 - 16.119: 98.9887% ( 4) 00:15:40.097 16.119 - 16.213: 99.0116% ( 3) 00:15:40.097 16.213 - 16.308: 99.0496% ( 5) 00:15:40.097 16.308 - 16.403: 99.0876% ( 5) 00:15:40.097 16.403 - 16.498: 99.1028% ( 2) 00:15:40.097 16.498 - 16.593: 99.1408% ( 5) 00:15:40.097 16.593 - 16.687: 99.1712% ( 4) 00:15:40.097 16.687 - 16.782: 99.2092% ( 5) 00:15:40.097 16.782 - 16.877: 99.2397% ( 4) 00:15:40.097 16.877 - 16.972: 99.2777% ( 5) 00:15:40.097 16.972 - 17.067: 99.3005% ( 3) 00:15:40.097 17.161 - 17.256: 99.3081% ( 1) 00:15:40.097 17.256 - 17.351: 99.3309% ( 3) 00:15:40.097 17.541 - 17.636: 99.3385% ( 1) 00:15:40.097 17.636 - 17.730: 99.3461% ( 1) 00:15:40.097 17.730 - 17.825: 99.3537% ( 1) 00:15:40.097 17.825 - 17.920: 99.3613%[2024-10-08 18:26:08.297474] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:40.097 ( 1) 00:15:40.097 18.110 - 18.204: 99.3689% ( 1) 00:15:40.097 18.394 - 18.489: 99.3765% ( 1) 00:15:40.097 20.670 - 20.764: 99.3841% ( 1) 00:15:40.097 3883.615 - 3907.887: 99.3917% ( 1) 00:15:40.097 3980.705 - 4004.978: 99.8859% ( 65) 00:15:40.097 4004.978 - 4029.250: 100.0000% ( 15) 00:15:40.097 00:15:40.097 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:40.097 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:40.097 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:40.097 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:40.097 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:40.667 [ 00:15:40.667 { 00:15:40.667 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:40.667 "subtype": "Discovery", 00:15:40.667 "listen_addresses": [], 00:15:40.668 "allow_any_host": true, 00:15:40.668 "hosts": [] 00:15:40.668 }, 00:15:40.668 { 00:15:40.668 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:40.668 "subtype": "NVMe", 00:15:40.668 "listen_addresses": [ 00:15:40.668 { 00:15:40.668 "trtype": "VFIOUSER", 00:15:40.668 "adrfam": "IPv4", 00:15:40.668 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:40.668 "trsvcid": "0" 00:15:40.668 } 00:15:40.668 ], 00:15:40.668 "allow_any_host": true, 00:15:40.668 "hosts": [], 00:15:40.668 "serial_number": "SPDK1", 00:15:40.668 "model_number": "SPDK bdev Controller", 00:15:40.668 "max_namespaces": 32, 
00:15:40.668 "min_cntlid": 1, 00:15:40.668 "max_cntlid": 65519, 00:15:40.668 "namespaces": [ 00:15:40.668 { 00:15:40.668 "nsid": 1, 00:15:40.668 "bdev_name": "Malloc1", 00:15:40.668 "name": "Malloc1", 00:15:40.668 "nguid": "165DD091A25941EF9FCEF979697DA550", 00:15:40.668 "uuid": "165dd091-a259-41ef-9fce-f979697da550" 00:15:40.668 }, 00:15:40.668 { 00:15:40.668 "nsid": 2, 00:15:40.668 "bdev_name": "Malloc3", 00:15:40.668 "name": "Malloc3", 00:15:40.668 "nguid": "09A25FB809404B2B9B763594F411A01A", 00:15:40.668 "uuid": "09a25fb8-0940-4b2b-9b76-3594f411a01a" 00:15:40.668 } 00:15:40.668 ] 00:15:40.668 }, 00:15:40.668 { 00:15:40.668 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:40.668 "subtype": "NVMe", 00:15:40.668 "listen_addresses": [ 00:15:40.668 { 00:15:40.668 "trtype": "VFIOUSER", 00:15:40.668 "adrfam": "IPv4", 00:15:40.668 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:40.668 "trsvcid": "0" 00:15:40.668 } 00:15:40.668 ], 00:15:40.668 "allow_any_host": true, 00:15:40.668 "hosts": [], 00:15:40.668 "serial_number": "SPDK2", 00:15:40.668 "model_number": "SPDK bdev Controller", 00:15:40.668 "max_namespaces": 32, 00:15:40.668 "min_cntlid": 1, 00:15:40.668 "max_cntlid": 65519, 00:15:40.668 "namespaces": [ 00:15:40.668 { 00:15:40.668 "nsid": 1, 00:15:40.668 "bdev_name": "Malloc2", 00:15:40.668 "name": "Malloc2", 00:15:40.668 "nguid": "AD119DCB988B4AA6A00A8A8822DB3B8D", 00:15:40.668 "uuid": "ad119dcb-988b-4aa6-a00a-8a8822db3b8d" 00:15:40.668 } 00:15:40.668 ] 00:15:40.668 } 00:15:40.668 ] 00:15:40.668 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:40.668 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1178265 00:15:40.668 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:40.668 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:40.668 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:15:40.668 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:40.668 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:40.668 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:15:40.668 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:40.668 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:40.926 [2024-10-08 18:26:09.229166] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:41.186 Malloc4 00:15:41.186 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:41.756 [2024-10-08 18:26:10.173426] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:41.756 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:41.756 Asynchronous Event Request test 00:15:41.756 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:41.756 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:41.757 Registering asynchronous event callbacks... 00:15:41.757 Starting namespace attribute notice tests for all controllers... 00:15:41.757 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:41.757 aer_cb - Changed Namespace 00:15:41.757 Cleaning up... 00:15:42.328 [ 00:15:42.328 { 00:15:42.328 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:42.328 "subtype": "Discovery", 00:15:42.328 "listen_addresses": [], 00:15:42.328 "allow_any_host": true, 00:15:42.328 "hosts": [] 00:15:42.328 }, 00:15:42.328 { 00:15:42.328 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:42.328 "subtype": "NVMe", 00:15:42.328 "listen_addresses": [ 00:15:42.328 { 00:15:42.328 "trtype": "VFIOUSER", 00:15:42.328 "adrfam": "IPv4", 00:15:42.328 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:42.328 "trsvcid": "0" 00:15:42.328 } 00:15:42.328 ], 00:15:42.328 "allow_any_host": true, 00:15:42.328 "hosts": [], 00:15:42.328 "serial_number": "SPDK1", 00:15:42.328 "model_number": "SPDK bdev Controller", 00:15:42.328 "max_namespaces": 32, 00:15:42.328 "min_cntlid": 1, 00:15:42.328 "max_cntlid": 65519, 00:15:42.328 "namespaces": [ 00:15:42.328 { 00:15:42.328 "nsid": 1, 00:15:42.328 "bdev_name": "Malloc1", 00:15:42.328 "name": "Malloc1", 00:15:42.328 "nguid": "165DD091A25941EF9FCEF979697DA550", 00:15:42.328 "uuid": "165dd091-a259-41ef-9fce-f979697da550" 00:15:42.328 }, 00:15:42.328 { 00:15:42.328 "nsid": 2, 00:15:42.328 "bdev_name": "Malloc3", 00:15:42.328 "name": "Malloc3", 00:15:42.328 "nguid": "09A25FB809404B2B9B763594F411A01A", 00:15:42.328 "uuid": "09a25fb8-0940-4b2b-9b76-3594f411a01a" 00:15:42.328 } 00:15:42.328 ] 00:15:42.328 }, 00:15:42.328 { 00:15:42.328 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:42.328 "subtype": "NVMe", 00:15:42.328 "listen_addresses": [ 00:15:42.328 { 00:15:42.328 "trtype": "VFIOUSER", 00:15:42.328 "adrfam": "IPv4", 00:15:42.328 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:42.328 "trsvcid": "0" 00:15:42.328 } 00:15:42.328 ], 00:15:42.328 "allow_any_host": true, 00:15:42.329 "hosts": [], 00:15:42.329 "serial_number": "SPDK2", 00:15:42.329 "model_number": "SPDK bdev 
Controller", 00:15:42.329 "max_namespaces": 32, 00:15:42.329 "min_cntlid": 1, 00:15:42.329 "max_cntlid": 65519, 00:15:42.329 "namespaces": [ 00:15:42.329 { 00:15:42.329 "nsid": 1, 00:15:42.329 "bdev_name": "Malloc2", 00:15:42.329 "name": "Malloc2", 00:15:42.329 "nguid": "AD119DCB988B4AA6A00A8A8822DB3B8D", 00:15:42.329 "uuid": "ad119dcb-988b-4aa6-a00a-8a8822db3b8d" 00:15:42.329 }, 00:15:42.329 { 00:15:42.329 "nsid": 2, 00:15:42.329 "bdev_name": "Malloc4", 00:15:42.329 "name": "Malloc4", 00:15:42.329 "nguid": "BEE6CF5A24FC4049A79FFB36C672EB27", 00:15:42.329 "uuid": "bee6cf5a-24fc-4049-a79f-fb36c672eb27" 00:15:42.329 } 00:15:42.329 ] 00:15:42.329 } 00:15:42.329 ] 00:15:42.329 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1178265 00:15:42.329 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:42.329 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1172405 00:15:42.329 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 1172405 ']' 00:15:42.329 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 1172405 00:15:42.329 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:15:42.329 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:42.329 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1172405 00:15:42.329 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:42.329 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:42.329 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1172405' 00:15:42.329 killing process with pid 1172405 00:15:42.329 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 1172405 00:15:42.329 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 1172405 00:15:42.900 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:42.900 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:42.900 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:42.900 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:42.900 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:42.900 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1178535 00:15:42.900 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:42.900 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1178535' 00:15:42.900 Process pid: 1178535 00:15:42.900 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 
-- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:42.900 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1178535 00:15:42.900 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 1178535 ']' 00:15:42.900 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:42.900 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:42.901 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:42.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:42.901 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:42.901 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:42.901 [2024-10-08 18:26:11.325759] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:42.901 [2024-10-08 18:26:11.327000] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:15:42.901 [2024-10-08 18:26:11.327080] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:42.901 [2024-10-08 18:26:11.427936] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:43.160 [2024-10-08 18:26:11.619837] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:43.160 [2024-10-08 18:26:11.619892] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:43.160 [2024-10-08 18:26:11.619908] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:43.160 [2024-10-08 18:26:11.619922] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:43.160 [2024-10-08 18:26:11.619934] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:43.160 [2024-10-08 18:26:11.622793] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:15:43.160 [2024-10-08 18:26:11.622834] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:15:43.161 [2024-10-08 18:26:11.622867] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:15:43.161 [2024-10-08 18:26:11.622872] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:43.422 [2024-10-08 18:26:11.797606] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:43.422 [2024-10-08 18:26:11.797952] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:43.422 [2024-10-08 18:26:11.798300] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:15:43.422 [2024-10-08 18:26:11.799258] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:15:43.422 [2024-10-08 18:26:11.799780] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:15:43.422 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:43.422 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:15:43.422 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:44.365 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:44.933 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:44.933 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:44.933 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:44.933 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:44.933 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:45.193 Malloc1 00:15:45.193 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:45.764 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:46.704 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:46.964 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:46.964 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:46.964 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:47.536 Malloc2 00:15:47.536 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:48.105 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:49.044 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:49.615 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:49.615 18:26:17 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1178535 00:15:49.615 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 1178535 ']' 00:15:49.615 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 1178535 00:15:49.615 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:15:49.615 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:49.615 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1178535 00:15:49.615 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:49.615 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:49.615 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1178535' 00:15:49.615 killing process with pid 1178535 00:15:49.615 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 1178535 00:15:49.615 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 1178535 00:15:50.185 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:50.185 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:50.185 00:15:50.185 real 1m1.024s 00:15:50.185 user 3m54.709s 00:15:50.185 sys 0m5.925s 00:15:50.185 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:50.185 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:50.185 ************************************ 00:15:50.185 END TEST nvmf_vfio_user 00:15:50.185 ************************************ 00:15:50.185 18:26:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:50.185 18:26:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:50.185 18:26:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:50.185 18:26:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:50.185 ************************************ 00:15:50.185 START TEST nvmf_vfio_user_nvme_compliance 00:15:50.185 ************************************ 00:15:50.185 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:50.185 * Looking for test storage... 
00:15:50.185 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:50.185 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:50.185 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # lcov --version 00:15:50.185 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:50.185 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:50.185 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:50.185 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:50.185 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:50.185 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:15:50.185 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:15:50.185 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:15:50.185 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:15:50.185 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:15:50.185 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:15:50.185 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:15:50.185 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:50.185 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:15:50.185 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:15:50.185 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:50.185 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:50.185 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:15:50.185 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:15:50.185 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:50.185 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:15:50.185 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:15:50.185 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:15:50.185 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:15:50.185 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:50.185 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:15:50.185 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:15:50.185 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:50.185 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:50.185 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:15:50.185 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:50.185 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:50.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:50.186 --rc genhtml_branch_coverage=1 00:15:50.186 --rc genhtml_function_coverage=1 00:15:50.186 --rc genhtml_legend=1 00:15:50.186 --rc geninfo_all_blocks=1 00:15:50.186 --rc geninfo_unexecuted_blocks=1 00:15:50.186 00:15:50.186 ' 00:15:50.186 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:50.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:50.186 --rc genhtml_branch_coverage=1 00:15:50.186 --rc genhtml_function_coverage=1 00:15:50.186 --rc genhtml_legend=1 00:15:50.186 --rc geninfo_all_blocks=1 00:15:50.186 --rc geninfo_unexecuted_blocks=1 00:15:50.186 00:15:50.186 ' 00:15:50.186 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:50.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:50.186 --rc genhtml_branch_coverage=1 00:15:50.186 --rc genhtml_function_coverage=1 00:15:50.186 --rc genhtml_legend=1 00:15:50.186 --rc geninfo_all_blocks=1 00:15:50.186 --rc geninfo_unexecuted_blocks=1 00:15:50.186 00:15:50.186 ' 00:15:50.186 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:50.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:50.186 --rc genhtml_branch_coverage=1 00:15:50.186 --rc genhtml_function_coverage=1 00:15:50.186 --rc genhtml_legend=1 00:15:50.186 --rc geninfo_all_blocks=1 00:15:50.186 --rc 
geninfo_unexecuted_blocks=1 00:15:50.186 00:15:50.186 ' 00:15:50.186 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:50.186 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:50.186 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:50.186 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:50.186 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:50.186 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:50.186 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:50.186 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:50.186 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:50.186 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:50.186 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:50.186 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:50.186 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:50.186 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:15:50.445 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:50.445 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:50.445 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:50.445 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:50.445 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:50.445 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:15:50.445 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:50.445 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:50.445 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:50.445 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.445 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.445 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.445 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:50.445 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.445 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:15:50.445 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:50.445 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:50.445 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:50.445 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:50.445 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:15:50.445 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:50.445 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:50.445 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:50.445 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:50.445 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:50.445 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:50.445 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:50.445 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:50.445 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:50.445 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:50.445 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1179504 00:15:50.445 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:50.445 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1179504' 00:15:50.445 Process pid: 1179504 00:15:50.445 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:50.445 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1179504 00:15:50.445 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 1179504 ']' 00:15:50.445 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:50.445 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:50.445 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:50.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:50.445 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:50.445 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:50.445 [2024-10-08 18:26:18.785081] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
00:15:50.445 [2024-10-08 18:26:18.785173] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:50.445 [2024-10-08 18:26:18.851922] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:50.705 [2024-10-08 18:26:19.031730] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:50.705 [2024-10-08 18:26:19.031850] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:50.705 [2024-10-08 18:26:19.031886] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:50.705 [2024-10-08 18:26:19.031915] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:50.705 [2024-10-08 18:26:19.031943] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:50.705 [2024-10-08 18:26:19.034011] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:15:50.705 [2024-10-08 18:26:19.034110] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:15:50.705 [2024-10-08 18:26:19.034121] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.642 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:51.642 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:15:51.642 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:52.581 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:52.581 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:52.581 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:52.581 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.581 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:52.581 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.581 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:52.581 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:52.581 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.581 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:52.581 malloc0 00:15:52.581 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.581 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:52.581 18:26:20 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.581 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:52.581 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.581 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:52.581 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.581 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:52.582 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.582 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:52.582 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.582 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:52.582 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.582 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:52.840 00:15:52.840 00:15:52.840 CUnit - A unit testing framework for C - Version 2.1-3 00:15:52.840 http://cunit.sourceforge.net/ 00:15:52.840 00:15:52.840 00:15:52.840 Suite: nvme_compliance 00:15:52.841 Test: admin_identify_ctrlr_verify_dptr ...[2024-10-08 18:26:21.243619] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:52.841 [2024-10-08 18:26:21.245231] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:52.841 [2024-10-08 18:26:21.245299] vfio_user.c:5507:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:52.841 [2024-10-08 18:26:21.245331] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:52.841 [2024-10-08 18:26:21.246695] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:52.841 passed 00:15:53.100 Test: admin_identify_ctrlr_verify_fused ...[2024-10-08 18:26:21.378967] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:53.100 [2024-10-08 18:26:21.381984] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:53.100 passed 00:15:53.100 Test: admin_identify_ns ...[2024-10-08 18:26:21.518862] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:53.100 [2024-10-08 18:26:21.579707] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:53.100 [2024-10-08 18:26:21.587704] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:53.100 [2024-10-08 18:26:21.608854] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:15:53.361 passed 00:15:53.361 Test: admin_get_features_mandatory_features ...[2024-10-08 18:26:21.747895] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:53.361 [2024-10-08 18:26:21.750953] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:53.361 passed 00:15:53.361 Test: admin_get_features_optional_features ...[2024-10-08 18:26:21.886292] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:53.361 [2024-10-08 18:26:21.889338] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:53.620 passed 00:15:53.620 Test: admin_set_features_number_of_queues ...[2024-10-08 18:26:22.022425] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:53.620 [2024-10-08 18:26:22.124843] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:53.879 passed 00:15:53.879 Test: admin_get_log_page_mandatory_logs ...[2024-10-08 18:26:22.261182] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:53.879 [2024-10-08 18:26:22.264229] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:53.879 passed 00:15:53.879 Test: admin_get_log_page_with_lpo ...[2024-10-08 18:26:22.398667] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:54.140 [2024-10-08 18:26:22.468700] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:54.140 [2024-10-08 18:26:22.481796] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:54.140 passed 00:15:54.140 Test: fabric_property_get ...[2024-10-08 18:26:22.613755] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:54.140 [2024-10-08 18:26:22.615299] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:54.140 [2024-10-08 18:26:22.616783] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:54.399 passed 00:15:54.399 Test: admin_delete_io_sq_use_admin_qid ...[2024-10-08 18:26:22.751070] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:54.399 [2024-10-08 18:26:22.752781] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:54.399 [2024-10-08 18:26:22.754116] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:54.399 passed 00:15:54.399 Test: admin_delete_io_sq_delete_sq_twice ...[2024-10-08 18:26:22.887411] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:54.658 [2024-10-08 18:26:22.970672] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:54.658 [2024-10-08 18:26:22.986670] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:54.658 [2024-10-08 18:26:22.991833] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:54.658 passed 00:15:54.658 Test: admin_delete_io_cq_use_admin_qid ...[2024-10-08 18:26:23.129721] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:54.658 [2024-10-08 18:26:23.131238] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:54.658 [2024-10-08 18:26:23.132736] vfio_user.c:2798:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:15:54.658 passed 00:15:54.918 Test: admin_delete_io_cq_delete_cq_first ...[2024-10-08 18:26:23.266274] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:54.918 [2024-10-08 18:26:23.342678] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:54.918 [2024-10-08 18:26:23.366673] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:54.918 [2024-10-08 18:26:23.371823] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:54.918 passed 00:15:55.179 Test: admin_create_io_cq_verify_iv_pc ...[2024-10-08 18:26:23.511723] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:55.179 [2024-10-08 18:26:23.513236] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:55.179 [2024-10-08 18:26:23.513333] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:55.179 [2024-10-08 18:26:23.514750] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:55.179 passed 00:15:55.179 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-10-08 18:26:23.647467] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:55.438 [2024-10-08 18:26:23.740722] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:55.438 [2024-10-08 18:26:23.748676] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:55.438 [2024-10-08 18:26:23.756676] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:55.438 [2024-10-08 18:26:23.764697] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:55.438 [2024-10-08 18:26:23.793812] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:55.438 passed 00:15:55.439 Test: admin_create_io_sq_verify_pc ...[2024-10-08 18:26:23.925779] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:55.439 [2024-10-08 18:26:23.943713] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:55.439 [2024-10-08 18:26:23.961469] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:55.699 passed 00:15:55.699 Test: admin_create_io_qp_max_qps ...[2024-10-08 18:26:24.099823] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:57.078 [2024-10-08 18:26:25.197701] nvme_ctrlr.c:5535:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:15:57.078 [2024-10-08 18:26:25.595698] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:57.338 passed 00:15:57.338 Test: admin_create_io_sq_shared_cq ...[2024-10-08 18:26:25.734235] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:57.338 [2024-10-08 18:26:25.865684] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:57.598 [2024-10-08 18:26:25.902804] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:57.598 passed 00:15:57.598 00:15:57.598 Run Summary: Type Total Ran Passed Failed Inactive 00:15:57.598 suites 1 1 n/a 0 0 00:15:57.598 tests 18 18 18 0 0 00:15:57.598 asserts 360 
360 360 0 n/a 00:15:57.598 00:15:57.598 Elapsed time = 2.027 seconds 00:15:57.598 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1179504 00:15:57.598 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 1179504 ']' 00:15:57.598 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 1179504 00:15:57.598 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:15:57.599 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:57.599 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1179504 00:15:57.599 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:57.599 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:57.599 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1179504' 00:15:57.599 killing process with pid 1179504 00:15:57.599 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 1179504 00:15:57.599 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 1179504 00:15:58.169 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:58.169 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:58.169 00:15:58.169 real 0m7.967s 00:15:58.169 user 0m22.164s 00:15:58.169 sys 0m0.773s 00:15:58.169 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:58.169 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:58.169 ************************************ 00:15:58.169 END TEST nvmf_vfio_user_nvme_compliance 00:15:58.169 ************************************ 00:15:58.169 18:26:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:58.169 18:26:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:58.169 18:26:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:58.169 18:26:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:58.169 ************************************ 00:15:58.169 START TEST nvmf_vfio_user_fuzz 00:15:58.169 ************************************ 00:15:58.169 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:58.169 * Looking for test storage... 
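The shutdown sequence above (kill -0 probe, comm check against sudo, kill, wait) is the generic killprocess pattern used throughout these tests. A standalone sketch of that pattern, not the autotest_common.sh implementation itself:
    stop_pid() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0                       # nothing to do if it already exited
        [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1  # refuse to signal a bare sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true                              # reap it when it is a child of this shell
    }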
00:15:58.169 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:58.169 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:58.169 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:15:58.169 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:58.429 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:58.429 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:58.429 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:58.429 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:58.429 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:15:58.429 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:15:58.429 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:15:58.429 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:15:58.429 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:15:58.429 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:15:58.429 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:15:58.429 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:58.429 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:15:58.429 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:15:58.429 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:58.429 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:58.429 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:15:58.429 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:15:58.429 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:58.429 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:15:58.429 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:15:58.429 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:15:58.429 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:15:58.429 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:58.429 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:15:58.429 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:15:58.429 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:58.429 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:58.429 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:15:58.429 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:58.429 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:58.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:58.429 --rc genhtml_branch_coverage=1 00:15:58.429 --rc genhtml_function_coverage=1 00:15:58.429 --rc genhtml_legend=1 00:15:58.429 --rc geninfo_all_blocks=1 00:15:58.429 --rc geninfo_unexecuted_blocks=1 00:15:58.429 00:15:58.429 ' 00:15:58.429 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:58.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:58.429 --rc genhtml_branch_coverage=1 00:15:58.429 --rc genhtml_function_coverage=1 00:15:58.429 --rc genhtml_legend=1 00:15:58.429 --rc geninfo_all_blocks=1 00:15:58.429 --rc geninfo_unexecuted_blocks=1 00:15:58.429 00:15:58.429 ' 00:15:58.429 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:58.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:58.429 --rc genhtml_branch_coverage=1 00:15:58.429 --rc genhtml_function_coverage=1 00:15:58.429 --rc genhtml_legend=1 00:15:58.429 --rc geninfo_all_blocks=1 00:15:58.429 --rc geninfo_unexecuted_blocks=1 00:15:58.429 00:15:58.429 ' 00:15:58.429 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:58.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:58.429 --rc genhtml_branch_coverage=1 00:15:58.429 --rc genhtml_function_coverage=1 00:15:58.429 --rc genhtml_legend=1 00:15:58.429 --rc geninfo_all_blocks=1 00:15:58.429 --rc geninfo_unexecuted_blocks=1 00:15:58.429 00:15:58.429 ' 00:15:58.429 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:58.429 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:58.429 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:58.429 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:58.429 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:58.429 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:58.429 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:58.429 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:58.429 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:58.429 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:58.429 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:58.429 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:58.429 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:58.429 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:15:58.429 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:58.429 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:58.429 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:58.429 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:58.429 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:58.429 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:15:58.430 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:58.430 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:58.430 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:58.430 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.430 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.430 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.430 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:58.430 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.430 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:15:58.430 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:58.430 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:58.430 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:58.430 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:58.430 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:58.430 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:15:58.430 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:58.430 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:58.430 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:58.430 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:58.430 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:58.430 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:58.430 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:58.430 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:58.430 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:58.430 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:58.430 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:58.430 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1180496 00:15:58.430 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:58.430 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1180496' 00:15:58.430 Process pid: 1180496 00:15:58.430 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:58.430 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1180496 00:15:58.430 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 1180496 ']' 00:15:58.430 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:58.430 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:58.430 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:58.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
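As with the compliance run, the fuzz target is brought up by launching nvmf_tgt in the background, trapping cleanup, and waiting until its RPC socket answers. A rough sketch of that startup pattern, assuming an SPDK checkout in $SPDK_DIR (placeholder) and using rpc_get_methods purely as a liveness probe:
    "$SPDK_DIR"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    trap 'kill "$nvmfpid"; exit 1' SIGINT SIGTERM EXIT
    # poll /var/tmp/spdk.sock until the app accepts RPCs (waitforlisten does this with bounded retries)
    until "$SPDK_DIR"/scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods &>/dev/null; do
        sleep 0.5
    done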
00:15:58.430 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:58.430 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:00.339 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:00.339 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:16:00.339 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:16:00.908 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:00.908 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.908 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:00.908 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.908 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:16:00.908 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:00.908 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.908 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:01.169 malloc0 00:16:01.169 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.169 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:16:01.169 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.169 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:01.169 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.169 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:01.169 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.169 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:01.169 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.169 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:01.169 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.169 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:01.169 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.169 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
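Condensed, the provisioning traced above is a short RPC sequence; shown here with scripts/rpc.py instead of the rpc_cmd wrapper, using the same names and values as this run:
    mkdir -p /var/run/vfio-user
    scripts/rpc.py nvmf_create_transport -t VFIOUSER
    scripts/rpc.py bdev_malloc_create 64 512 -b malloc0                    # 64 MB ram-backed bdev, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0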
00:16:01.169 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:33.351 Fuzzing completed. Shutting down the fuzz application 00:16:33.351 00:16:33.351 Dumping successful admin opcodes: 00:16:33.351 8, 9, 10, 24, 00:16:33.351 Dumping successful io opcodes: 00:16:33.351 0, 00:16:33.351 NS: 0x200003a1ef00 I/O qp, Total commands completed: 233810, total successful commands: 902, random_seed: 220521152 00:16:33.351 NS: 0x200003a1ef00 admin qp, Total commands completed: 29856, total successful commands: 253, random_seed: 1601165568 00:16:33.351 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:33.351 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.351 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:33.351 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.351 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1180496 00:16:33.351 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 1180496 ']' 00:16:33.351 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 1180496 00:16:33.351 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:16:33.351 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:33.351 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1180496 00:16:33.351 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:33.351 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:33.351 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1180496' 00:16:33.351 killing process with pid 1180496 00:16:33.351 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 1180496 00:16:33.351 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 1180496 00:16:33.351 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:33.351 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:33.351 00:16:33.351 real 0m34.102s 00:16:33.351 user 0m34.280s 00:16:33.351 sys 0m24.621s 00:16:33.351 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:33.351 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:33.351 
************************************ 00:16:33.351 END TEST nvmf_vfio_user_fuzz 00:16:33.351 ************************************ 00:16:33.351 18:27:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:33.351 18:27:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:33.351 18:27:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:33.351 18:27:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:33.351 ************************************ 00:16:33.351 START TEST nvmf_auth_target 00:16:33.351 ************************************ 00:16:33.351 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:33.351 * Looking for test storage... 00:16:33.351 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:33.351 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:33.351 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lcov --version 00:16:33.351 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:33.351 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:33.351 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:33.351 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:33.351 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:33.351 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:16:33.351 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:16:33.351 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:16:33.351 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:16:33.351 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:16:33.351 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:33.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:33.352 --rc genhtml_branch_coverage=1 00:16:33.352 --rc genhtml_function_coverage=1 00:16:33.352 --rc genhtml_legend=1 00:16:33.352 --rc geninfo_all_blocks=1 00:16:33.352 --rc geninfo_unexecuted_blocks=1 00:16:33.352 00:16:33.352 ' 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:33.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:33.352 --rc genhtml_branch_coverage=1 00:16:33.352 --rc genhtml_function_coverage=1 00:16:33.352 --rc genhtml_legend=1 00:16:33.352 --rc geninfo_all_blocks=1 00:16:33.352 --rc geninfo_unexecuted_blocks=1 00:16:33.352 00:16:33.352 ' 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:33.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:33.352 --rc genhtml_branch_coverage=1 00:16:33.352 --rc genhtml_function_coverage=1 00:16:33.352 --rc genhtml_legend=1 00:16:33.352 --rc geninfo_all_blocks=1 00:16:33.352 --rc geninfo_unexecuted_blocks=1 00:16:33.352 00:16:33.352 ' 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:33.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:33.352 --rc genhtml_branch_coverage=1 00:16:33.352 --rc genhtml_function_coverage=1 00:16:33.352 --rc genhtml_legend=1 00:16:33.352 --rc geninfo_all_blocks=1 00:16:33.352 --rc geninfo_unexecuted_blocks=1 00:16:33.352 00:16:33.352 ' 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:33.352 18:27:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:33.352 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:16:33.352 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:33.353 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:33.353 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:33.353 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:16:33.353 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:16:33.353 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:16:33.353 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.258 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:35.258 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:16:35.258 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:35.258 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:35.258 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:16:35.259 
18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:16:35.259 Found 0000:84:00.0 (0x8086 - 0x159b) 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:35.259 18:27:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:16:35.259 Found 0000:84:00.1 (0x8086 - 0x159b) 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:16:35.259 Found net devices under 0000:84:00.0: cvl_0_0 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:16:35.259 Found net devices under 0000:84:00.1: cvl_0_1 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # 
net_devs+=("${pci_net_devs[@]}") 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # is_hw=yes 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:35.259 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:35.519 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:35.519 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:35.519 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:35.519 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:35.519 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:35.519 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:35.519 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:35.519 18:27:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:35.519 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:35.519 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.326 ms 00:16:35.519 00:16:35.519 --- 10.0.0.2 ping statistics --- 00:16:35.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.519 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:16:35.519 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:35.519 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:35.519 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:16:35.519 00:16:35.519 --- 10.0.0.1 ping statistics --- 00:16:35.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.519 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:16:35.519 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:35.519 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # return 0 00:16:35.519 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:16:35.519 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:35.519 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:16:35.519 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:16:35.519 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:35.519 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:16:35.519 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:16:35.519 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:16:35.519 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:16:35.519 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:35.519 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.519 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=1186094 00:16:35.519 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:35.519 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 1186094 00:16:35.519 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1186094 ']' 00:16:35.519 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:35.519 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:35.519 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
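
The block above is nvmf_tcp_init followed by nvmfappstart: the two cvl_0_* ports of the e810 NIC are split across network namespaces so one machine can play both initiator and target, 10.0.0.1 is the initiator address and 10.0.0.2 the target address, an iptables rule plus two pings confirm the path, nvme-tcp is loaded, and nvmf_tgt is started inside the namespace with auth debug logging. A condensed sketch of that sequence; interface names, addresses, and flags are taken from this run, not defaults:

    # condensed from the nvmf_tcp_init / nvmfappstart trace above
    TARGET_NS=cvl_0_0_ns_spdk                        # namespace that hosts the target-side port
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$TARGET_NS"
    ip link set cvl_0_0 netns "$TARGET_NS"           # first e810 port becomes the target interface
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator IP stays in the default namespace
    ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP inside the namespace
    ip link set cvl_0_1 up
    ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
    ip netns exec "$TARGET_NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # let NVMe/TCP traffic through
    ping -c 1 10.0.0.2                               # reachability checks in both directions
    ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1
    modprobe nvme-tcp
    # the nvmf target is then started inside the namespace with nvmf_auth debug logging enabled
    ip netns exec "$TARGET_NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &
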
00:16:35.519 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:35.519 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.087 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:36.087 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:36.087 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:16:36.087 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:36.087 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.087 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:36.087 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=1186231 00:16:36.088 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:36.088 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:36.088 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:16:36.088 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:36.088 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:36.088 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:36.088 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=null 00:16:36.088 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:16:36.088 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:36.088 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=96d97e272446ddbc7cfd0eaf93c41088616371919487e461 00:16:36.088 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:16:36.088 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.nln 00:16:36.088 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 96d97e272446ddbc7cfd0eaf93c41088616371919487e461 0 00:16:36.088 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 96d97e272446ddbc7cfd0eaf93c41088616371919487e461 0 00:16:36.088 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:36.088 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:36.088 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=96d97e272446ddbc7cfd0eaf93c41088616371919487e461 00:16:36.088 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=0 00:16:36.088 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 
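
The host application is now up on /var/tmp/host.sock and gen_dhchap_key starts producing the secrets: each call draws len/2 random bytes from /dev/urandom as a hex string and hands it to format_dhchap_key, whose inline Python helper (not expanded in this trace) wraps it into the DHHC-1 container that the later nvme connect calls consume. A rough sketch of one call, with the wrapping left as a labelled placeholder since the helper body is not shown here:

    # sketch of gen_dhchap_key <digest> <len>, traced above for (null,48), (sha512,64), (sha256,32), (sha384,48)
    digest=null; len=48
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex characters of random key material
    file=$(mktemp -t "spdk.key-${digest}.XXX")
    # format_dhchap_key/format_key (inline Python, not shown in the trace) turn the hex into
    #   DHHC-1:<id>:<base64 payload>:
    # where <id> is 00/01/02/03 for null/sha256/sha384/sha512 -- matching the DHHC-1:00:/01:/02:/03:
    # prefixes visible in the nvme connect commands further down
    printf 'DHHC-1:00:<base64 payload derived from $key>:\n' > "$file"   # placeholder for the helper's output
    chmod 0600 "$file"                               # restrict permissions, as in the trace
    echo "$file"                                     # this path becomes keys[i] / ckeys[i]
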
00:16:36.088 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.nln 00:16:36.088 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.nln 00:16:36.088 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.nln 00:16:36.088 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:16:36.088 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:36.088 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:36.088 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:36.088 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:16:36.088 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:16:36.088 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:36.088 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=b5c5f1cab2c6a4af3d7727a2621bbe45797a20a4c35cf90fca96ef65abd864c4 00:16:36.088 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:16:36.088 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.0i2 00:16:36.088 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key b5c5f1cab2c6a4af3d7727a2621bbe45797a20a4c35cf90fca96ef65abd864c4 3 00:16:36.088 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 b5c5f1cab2c6a4af3d7727a2621bbe45797a20a4c35cf90fca96ef65abd864c4 3 00:16:36.088 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:36.088 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:36.088 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=b5c5f1cab2c6a4af3d7727a2621bbe45797a20a4c35cf90fca96ef65abd864c4 00:16:36.088 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:16:36.088 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:36.347 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.0i2 00:16:36.347 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.0i2 00:16:36.347 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.0i2 00:16:36.347 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:16:36.347 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:36.347 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:36.347 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:36.347 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 
00:16:36.347 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:16:36.347 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:36.347 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=54667f6cb4c5fc32fd640479694f7aa2 00:16:36.347 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:16:36.347 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.dhl 00:16:36.347 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 54667f6cb4c5fc32fd640479694f7aa2 1 00:16:36.347 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 54667f6cb4c5fc32fd640479694f7aa2 1 00:16:36.347 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:36.347 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:36.347 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=54667f6cb4c5fc32fd640479694f7aa2 00:16:36.347 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:16:36.347 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:36.347 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.dhl 00:16:36.347 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.dhl 00:16:36.347 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.dhl 00:16:36.347 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:16:36.347 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:36.347 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:36.347 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:36.347 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:16:36.347 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:16:36.347 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:36.347 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=ede18d8a52e747483a4b474af12ce0ad73c32b927906b0f5 00:16:36.347 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:16:36.347 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.v2g 00:16:36.347 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key ede18d8a52e747483a4b474af12ce0ad73c32b927906b0f5 2 00:16:36.347 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 ede18d8a52e747483a4b474af12ce0ad73c32b927906b0f5 2 00:16:36.347 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:36.347 18:27:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:36.347 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=ede18d8a52e747483a4b474af12ce0ad73c32b927906b0f5 00:16:36.347 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:16:36.347 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:36.347 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.v2g 00:16:36.347 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.v2g 00:16:36.347 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.v2g 00:16:36.347 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:16:36.347 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:36.347 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:36.347 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:36.347 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:16:36.347 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:16:36.347 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:36.347 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=ae6b817ca42db01529e9480d90064e31d2b790623520b337 00:16:36.347 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:16:36.347 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.7n0 00:16:36.347 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key ae6b817ca42db01529e9480d90064e31d2b790623520b337 2 00:16:36.347 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 ae6b817ca42db01529e9480d90064e31d2b790623520b337 2 00:16:36.347 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:36.347 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:36.347 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=ae6b817ca42db01529e9480d90064e31d2b790623520b337 00:16:36.347 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:16:36.347 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:36.607 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.7n0 00:16:36.607 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.7n0 00:16:36.607 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.7n0 00:16:36.607 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:16:36.607 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 
00:16:36.607 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:36.607 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:36.607 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 00:16:36.607 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:16:36.607 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:36.607 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=df77e04c4e84e9cb35bdf9606847cc53 00:16:36.607 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:16:36.607 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.NQ9 00:16:36.607 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key df77e04c4e84e9cb35bdf9606847cc53 1 00:16:36.607 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 df77e04c4e84e9cb35bdf9606847cc53 1 00:16:36.607 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:36.607 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:36.607 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=df77e04c4e84e9cb35bdf9606847cc53 00:16:36.607 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:16:36.607 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:36.607 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.NQ9 00:16:36.607 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.NQ9 00:16:36.607 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.NQ9 00:16:36.608 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:16:36.608 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:36.608 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:36.608 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:36.608 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:16:36.608 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:16:36.608 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:36.608 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=4f2725dd3b3d5b8d771528e1cbfab9f0c5eb419522b19594cee2113a125fed5d 00:16:36.608 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:16:36.608 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.lW9 00:16:36.608 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # 
format_dhchap_key 4f2725dd3b3d5b8d771528e1cbfab9f0c5eb419522b19594cee2113a125fed5d 3 00:16:36.608 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 4f2725dd3b3d5b8d771528e1cbfab9f0c5eb419522b19594cee2113a125fed5d 3 00:16:36.608 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:36.608 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:36.608 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=4f2725dd3b3d5b8d771528e1cbfab9f0c5eb419522b19594cee2113a125fed5d 00:16:36.608 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:16:36.608 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:36.608 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.lW9 00:16:36.608 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.lW9 00:16:36.608 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.lW9 00:16:36.608 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:16:36.608 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 1186094 00:16:36.608 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1186094 ']' 00:16:36.608 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:36.608 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:36.608 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:36.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:36.608 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:36.608 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.177 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:37.177 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:37.177 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 1186231 /var/tmp/host.sock 00:16:37.177 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1186231 ']' 00:16:37.177 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:16:37.177 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:37.177 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:37.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
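
At this point all four key slots have been generated and both applications are up: the target's nvmf_tgt answers RPCs on the default /var/tmp/spdk.sock and the host-side spdk_tgt on /var/tmp/host.sock. Each key file is then registered on both sides through keyring_file_add_key so that later options can refer to the keys by name. A condensed sketch of that registration loop (the rpc_cmd / hostrpc wrappers in the trace expand to these rpc.py calls), with the file names from this run:

    # key material generated above (file names are specific to this run):
    #   keys[0]=/tmp/spdk.key-null.nln     ckeys[0]=/tmp/spdk.key-sha512.0i2
    #   keys[1]=/tmp/spdk.key-sha256.dhl   ckeys[1]=/tmp/spdk.key-sha384.v2g
    #   keys[2]=/tmp/spdk.key-sha384.7n0   ckeys[2]=/tmp/spdk.key-sha256.NQ9
    #   keys[3]=/tmp/spdk.key-sha512.lW9   ckeys[3]=          (slot 3 is tested without a controller key)
    for i in "${!keys[@]}"; do
        # register the host key as "key$i" on the target (default /var/tmp/spdk.sock) and on the host app
        scripts/rpc.py keyring_file_add_key "key$i" "${keys[$i]}"
        scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key "key$i" "${keys[$i]}"
        if [[ -n ${ckeys[$i]} ]]; then
            # and the optional controller (bidirectional) key as "ckey$i"
            scripts/rpc.py keyring_file_add_key "ckey$i" "${ckeys[$i]}"
            scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key "ckey$i" "${ckeys[$i]}"
        fi
    done
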
00:16:37.177 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:37.177 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.743 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:37.743 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:37.743 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:16:37.743 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.743 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.743 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.743 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:37.743 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.nln 00:16:37.743 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.743 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.743 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.743 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.nln 00:16:37.743 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.nln 00:16:38.003 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.0i2 ]] 00:16:38.003 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.0i2 00:16:38.003 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.003 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.003 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.003 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.0i2 00:16:38.003 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.0i2 00:16:38.572 18:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:38.572 18:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.dhl 00:16:38.572 18:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.572 18:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.572 18:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.572 18:27:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.dhl 00:16:38.572 18:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.dhl 00:16:39.143 18:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.v2g ]] 00:16:39.143 18:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.v2g 00:16:39.143 18:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.143 18:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.143 18:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.143 18:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.v2g 00:16:39.143 18:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.v2g 00:16:40.080 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:40.080 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.7n0 00:16:40.080 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.080 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.080 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.080 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.7n0 00:16:40.080 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.7n0 00:16:40.338 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.NQ9 ]] 00:16:40.338 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.NQ9 00:16:40.338 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.338 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.338 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.338 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.NQ9 00:16:40.338 18:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.NQ9 00:16:40.904 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:40.904 18:27:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.lW9 00:16:40.904 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.904 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.904 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.904 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.lW9 00:16:40.904 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.lW9 00:16:41.163 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:16:41.163 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:41.163 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:41.163 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:41.163 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:41.163 18:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:41.730 18:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:16:41.730 18:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:41.730 18:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:41.730 18:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:41.730 18:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:41.730 18:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.730 18:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.730 18:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.730 18:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.730 18:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.730 18:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.730 18:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.730 
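
One connect_authenticate iteration (here sha256 digest, null DH group, key slot 0) then wires the pieces together: the host-side bdev_nvme layer is limited to the digest and DH group under test, the target subsystem is told which named keys to require from this host NQN, and a controller is attached over TCP with the matching keys. A sketch of those three RPCs, with the NQNs, address, and key names taken from the trace:

    HOSTRPC="scripts/rpc.py -s /var/tmp/host.sock"   # host-side spdk_tgt RPC socket
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02

    # initiator: only offer the digest and DH group under test
    $HOSTRPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
    # target: require DH-HMAC-CHAP from this host, using the keyring entries registered earlier
    scripts/rpc.py nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # initiator: attach a controller over TCP with the matching keys (bidirectional authentication)
    $HOSTRPC bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
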
18:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:42.298 00:16:42.298 18:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:42.298 18:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.298 18:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:42.557 18:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.557 18:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.557 18:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.557 18:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.557 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.557 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:42.557 { 00:16:42.557 "cntlid": 1, 00:16:42.557 "qid": 0, 00:16:42.557 "state": "enabled", 00:16:42.557 "thread": "nvmf_tgt_poll_group_000", 00:16:42.557 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:16:42.557 "listen_address": { 00:16:42.557 "trtype": "TCP", 00:16:42.557 "adrfam": "IPv4", 00:16:42.557 "traddr": "10.0.0.2", 00:16:42.557 "trsvcid": "4420" 00:16:42.557 }, 00:16:42.557 "peer_address": { 00:16:42.557 "trtype": "TCP", 00:16:42.557 "adrfam": "IPv4", 00:16:42.557 "traddr": "10.0.0.1", 00:16:42.557 "trsvcid": "45054" 00:16:42.557 }, 00:16:42.557 "auth": { 00:16:42.557 "state": "completed", 00:16:42.557 "digest": "sha256", 00:16:42.557 "dhgroup": "null" 00:16:42.557 } 00:16:42.557 } 00:16:42.557 ]' 00:16:42.557 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:42.557 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:42.557 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:42.816 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:42.816 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:42.816 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.816 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.816 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.385 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:OTZkOTdlMjcyNDQ2ZGRiYzdjZmQwZWFmOTNjNDEwODg2MTYzNzE5MTk0ODdlNDYxV+mirQ==: --dhchap-ctrl-secret DHHC-1:03:YjVjNWYxY2FiMmM2YTRhZjNkNzcyN2EyNjIxYmJlNDU3OTdhMjBhNGMzNWNmOTBmY2E5NmVmNjVhYmQ4NjRjNBNaOBo=: 00:16:43.385 18:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:OTZkOTdlMjcyNDQ2ZGRiYzdjZmQwZWFmOTNjNDEwODg2MTYzNzE5MTk0ODdlNDYxV+mirQ==: --dhchap-ctrl-secret DHHC-1:03:YjVjNWYxY2FiMmM2YTRhZjNkNzcyN2EyNjIxYmJlNDU3OTdhMjBhNGMzNWNmOTBmY2E5NmVmNjVhYmQ4NjRjNBNaOBo=: 00:16:45.289 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.289 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.289 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:45.289 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.289 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.289 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.289 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:45.289 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:45.289 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:45.289 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:16:45.289 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:45.289 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:45.289 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:45.289 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:45.289 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.289 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.289 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.289 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.289 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.289 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.289 18:27:13 
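
Before moving on, the same credentials are also pushed through the kernel initiator: the RPC-attached controller is detached, nvme-cli connects with the raw DHHC-1 secrets (the blobs printed in the trace are the contents of keys[0] and ckeys[0]), the connection is torn down again, and the host entry is removed so the next digest/dhgroup/key combination starts clean. Roughly, per iteration:

    HOSTRPC="scripts/rpc.py -s /var/tmp/host.sock"
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
    HOST_SECRET='DHHC-1:00:...'      # contents of keys[0]; the full blob appears in the trace above
    CTRL_SECRET='DHHC-1:03:...'      # contents of ckeys[0], the controller (bidirectional) key

    $HOSTRPC bdev_nvme_detach_controller nvme0
    nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -l 0 \
        -q "$HOSTNQN" --hostid cd6acfbe-4794-e311-a299-001e67a97b02 \
        --dhchap-secret "$HOST_SECRET" --dhchap-ctrl-secret "$CTRL_SECRET"
    nvme disconnect -n "$SUBNQN"
    # drop the host entry so the next combination starts from a clean subsystem state
    scripts/rpc.py nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"
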
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.289 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.856 00:16:45.856 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:45.856 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:45.856 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.114 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.114 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.114 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.114 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.114 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.114 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:46.114 { 00:16:46.114 "cntlid": 3, 00:16:46.114 "qid": 0, 00:16:46.114 "state": "enabled", 00:16:46.114 "thread": "nvmf_tgt_poll_group_000", 00:16:46.114 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:16:46.114 "listen_address": { 00:16:46.114 "trtype": "TCP", 00:16:46.114 "adrfam": "IPv4", 00:16:46.114 "traddr": "10.0.0.2", 00:16:46.114 "trsvcid": "4420" 00:16:46.115 }, 00:16:46.115 "peer_address": { 00:16:46.115 "trtype": "TCP", 00:16:46.115 "adrfam": "IPv4", 00:16:46.115 "traddr": "10.0.0.1", 00:16:46.115 "trsvcid": "45074" 00:16:46.115 }, 00:16:46.115 "auth": { 00:16:46.115 "state": "completed", 00:16:46.115 "digest": "sha256", 00:16:46.115 "dhgroup": "null" 00:16:46.115 } 00:16:46.115 } 00:16:46.115 ]' 00:16:46.115 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:46.115 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:46.115 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:46.375 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:46.375 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:46.375 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.375 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.375 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.943 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQ2NjdmNmNiNGM1ZmMzMmZkNjQwNDc5Njk0ZjdhYTJcdtvw: --dhchap-ctrl-secret DHHC-1:02:ZWRlMThkOGE1MmU3NDc0ODNhNGI0NzRhZjEyY2UwYWQ3M2MzMmI5Mjc5MDZiMGY1VEdBOQ==: 00:16:46.943 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:NTQ2NjdmNmNiNGM1ZmMzMmZkNjQwNDc5Njk0ZjdhYTJcdtvw: --dhchap-ctrl-secret DHHC-1:02:ZWRlMThkOGE1MmU3NDc0ODNhNGI0NzRhZjEyY2UwYWQ3M2MzMmI5Mjc5MDZiMGY1VEdBOQ==: 00:16:48.843 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.843 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.843 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:48.843 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.843 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.843 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.843 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:48.843 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:48.843 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:49.410 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:16:49.410 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:49.410 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:49.410 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:49.410 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:49.410 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.410 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.410 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.410 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.410 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.410 18:27:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.410 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.410 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.979 00:16:49.979 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:49.980 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.980 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:50.238 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.238 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.238 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.238 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.238 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.238 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:50.238 { 00:16:50.238 "cntlid": 5, 00:16:50.238 "qid": 0, 00:16:50.238 "state": "enabled", 00:16:50.238 "thread": "nvmf_tgt_poll_group_000", 00:16:50.238 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:16:50.238 "listen_address": { 00:16:50.238 "trtype": "TCP", 00:16:50.238 "adrfam": "IPv4", 00:16:50.238 "traddr": "10.0.0.2", 00:16:50.238 "trsvcid": "4420" 00:16:50.238 }, 00:16:50.238 "peer_address": { 00:16:50.238 "trtype": "TCP", 00:16:50.238 "adrfam": "IPv4", 00:16:50.238 "traddr": "10.0.0.1", 00:16:50.238 "trsvcid": "51212" 00:16:50.238 }, 00:16:50.238 "auth": { 00:16:50.238 "state": "completed", 00:16:50.238 "digest": "sha256", 00:16:50.238 "dhgroup": "null" 00:16:50.238 } 00:16:50.238 } 00:16:50.238 ]' 00:16:50.238 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:50.238 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:50.238 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:50.496 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:50.496 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:50.496 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.496 18:27:18 
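
Every attach is followed by the same verification: the host-side controller list is checked for nvme0, and the target's nvmf_subsystem_get_qpairs output is filtered with jq to confirm that the qpair negotiated the digest and DH group under test and that authentication reached the completed state. A sketch of that check, using the sockets and subsystem NQN from this run:

    SUBNQN=nqn.2024-03.io.spdk:cnode0
    # host side: confirm the attached bdev_nvme controller exists
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expected: nvme0
    # target side: confirm the qpair negotiated the parameters under test
    qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs "$SUBNQN")
    jq -r '.[0].auth.digest'  <<< "$qpairs"   # expected: sha256 (the digest under test)
    jq -r '.[0].auth.dhgroup' <<< "$qpairs"   # expected: null
    jq -r '.[0].auth.state'   <<< "$qpairs"   # expected: completed
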
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.496 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.065 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWU2YjgxN2NhNDJkYjAxNTI5ZTk0ODBkOTAwNjRlMzFkMmI3OTA2MjM1MjBiMzM3qySyQg==: --dhchap-ctrl-secret DHHC-1:01:ZGY3N2UwNGM0ZTg0ZTljYjM1YmRmOTYwNjg0N2NjNTOjL2kW: 00:16:51.065 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YWU2YjgxN2NhNDJkYjAxNTI5ZTk0ODBkOTAwNjRlMzFkMmI3OTA2MjM1MjBiMzM3qySyQg==: --dhchap-ctrl-secret DHHC-1:01:ZGY3N2UwNGM0ZTg0ZTljYjM1YmRmOTYwNjg0N2NjNTOjL2kW: 00:16:52.977 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.977 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.977 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:52.977 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.977 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.977 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.977 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:52.977 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:52.977 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:53.548 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:16:53.548 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:53.548 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:53.548 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:53.548 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:53.548 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.548 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:16:53.548 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.548 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:53.548 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.548 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:53.548 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:53.548 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:54.118 00:16:54.118 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:54.118 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:54.118 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.689 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.689 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.689 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.689 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.689 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.689 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:54.689 { 00:16:54.689 "cntlid": 7, 00:16:54.689 "qid": 0, 00:16:54.689 "state": "enabled", 00:16:54.689 "thread": "nvmf_tgt_poll_group_000", 00:16:54.689 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:16:54.689 "listen_address": { 00:16:54.689 "trtype": "TCP", 00:16:54.689 "adrfam": "IPv4", 00:16:54.689 "traddr": "10.0.0.2", 00:16:54.689 "trsvcid": "4420" 00:16:54.689 }, 00:16:54.689 "peer_address": { 00:16:54.689 "trtype": "TCP", 00:16:54.689 "adrfam": "IPv4", 00:16:54.689 "traddr": "10.0.0.1", 00:16:54.689 "trsvcid": "51238" 00:16:54.689 }, 00:16:54.689 "auth": { 00:16:54.689 "state": "completed", 00:16:54.689 "digest": "sha256", 00:16:54.689 "dhgroup": "null" 00:16:54.689 } 00:16:54.689 } 00:16:54.689 ]' 00:16:54.689 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:54.689 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:54.689 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:54.949 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:54.949 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:54.949 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.949 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.949 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.517 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGYyNzI1ZGQzYjNkNWI4ZDc3MTUyOGUxY2JmYWI5ZjBjNWViNDE5NTIyYjE5NTk0Y2VlMjExM2ExMjVmZWQ1ZCxH6WQ=: 00:16:55.517 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:NGYyNzI1ZGQzYjNkNWI4ZDc3MTUyOGUxY2JmYWI5ZjBjNWViNDE5NTIyYjE5NTk0Y2VlMjExM2ExMjVmZWQ1ZCxH6WQ=: 00:16:57.424 18:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.424 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.424 18:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:57.424 18:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.424 18:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.424 18:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.424 18:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:57.424 18:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:57.424 18:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:57.424 18:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:57.690 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:16:57.690 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:57.690 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:57.690 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:57.690 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:57.690 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.690 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.690 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.690 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.690 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.690 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.690 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.690 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.638 00:16:58.638 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:58.638 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.638 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:58.899 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.899 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.899 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.899 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.899 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.899 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:58.899 { 00:16:58.899 "cntlid": 9, 00:16:58.899 "qid": 0, 00:16:58.899 "state": "enabled", 00:16:58.899 "thread": "nvmf_tgt_poll_group_000", 00:16:58.899 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:16:58.899 "listen_address": { 00:16:58.899 "trtype": "TCP", 00:16:58.899 "adrfam": "IPv4", 00:16:58.899 "traddr": "10.0.0.2", 00:16:58.899 "trsvcid": "4420" 00:16:58.899 }, 00:16:58.899 "peer_address": { 00:16:58.899 "trtype": "TCP", 00:16:58.899 "adrfam": "IPv4", 00:16:58.899 "traddr": "10.0.0.1", 00:16:58.899 "trsvcid": "54240" 00:16:58.899 }, 00:16:58.899 "auth": { 00:16:58.899 "state": "completed", 00:16:58.899 "digest": "sha256", 00:16:58.899 "dhgroup": "ffdhe2048" 00:16:58.899 } 00:16:58.899 } 00:16:58.899 ]' 00:16:58.899 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:58.899 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:58.899 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:58.899 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:16:58.899 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:59.160 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.160 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.160 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.420 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTZkOTdlMjcyNDQ2ZGRiYzdjZmQwZWFmOTNjNDEwODg2MTYzNzE5MTk0ODdlNDYxV+mirQ==: --dhchap-ctrl-secret DHHC-1:03:YjVjNWYxY2FiMmM2YTRhZjNkNzcyN2EyNjIxYmJlNDU3OTdhMjBhNGMzNWNmOTBmY2E5NmVmNjVhYmQ4NjRjNBNaOBo=: 00:16:59.420 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:OTZkOTdlMjcyNDQ2ZGRiYzdjZmQwZWFmOTNjNDEwODg2MTYzNzE5MTk0ODdlNDYxV+mirQ==: --dhchap-ctrl-secret DHHC-1:03:YjVjNWYxY2FiMmM2YTRhZjNkNzcyN2EyNjIxYmJlNDU3OTdhMjBhNGMzNWNmOTBmY2E5NmVmNjVhYmQ4NjRjNBNaOBo=: 00:17:01.340 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.340 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.340 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:01.340 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.340 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.340 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.340 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:01.340 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:01.340 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:01.601 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:17:01.601 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:01.601 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:01.601 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:01.601 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:01.601 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.601 18:27:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.601 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.601 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.601 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.601 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.601 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.601 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.203 00:17:02.203 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:02.203 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:02.203 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.483 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.483 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.483 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.483 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.483 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.483 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:02.483 { 00:17:02.483 "cntlid": 11, 00:17:02.483 "qid": 0, 00:17:02.483 "state": "enabled", 00:17:02.483 "thread": "nvmf_tgt_poll_group_000", 00:17:02.483 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:02.483 "listen_address": { 00:17:02.483 "trtype": "TCP", 00:17:02.483 "adrfam": "IPv4", 00:17:02.483 "traddr": "10.0.0.2", 00:17:02.483 "trsvcid": "4420" 00:17:02.483 }, 00:17:02.483 "peer_address": { 00:17:02.483 "trtype": "TCP", 00:17:02.483 "adrfam": "IPv4", 00:17:02.483 "traddr": "10.0.0.1", 00:17:02.483 "trsvcid": "54256" 00:17:02.483 }, 00:17:02.483 "auth": { 00:17:02.483 "state": "completed", 00:17:02.483 "digest": "sha256", 00:17:02.483 "dhgroup": "ffdhe2048" 00:17:02.483 } 00:17:02.483 } 00:17:02.483 ]' 00:17:02.483 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:02.743 18:27:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:02.743 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:02.743 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:02.743 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:02.743 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.743 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.743 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.682 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQ2NjdmNmNiNGM1ZmMzMmZkNjQwNDc5Njk0ZjdhYTJcdtvw: --dhchap-ctrl-secret DHHC-1:02:ZWRlMThkOGE1MmU3NDc0ODNhNGI0NzRhZjEyY2UwYWQ3M2MzMmI5Mjc5MDZiMGY1VEdBOQ==: 00:17:03.682 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:NTQ2NjdmNmNiNGM1ZmMzMmZkNjQwNDc5Njk0ZjdhYTJcdtvw: --dhchap-ctrl-secret DHHC-1:02:ZWRlMThkOGE1MmU3NDc0ODNhNGI0NzRhZjEyY2UwYWQ3M2MzMmI5Mjc5MDZiMGY1VEdBOQ==: 00:17:05.592 18:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.592 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.592 18:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:05.592 18:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.592 18:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.592 18:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.592 18:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:05.592 18:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:05.592 18:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:05.853 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:17:05.853 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:05.853 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:05.853 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:05.853 18:27:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:05.853 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.853 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.853 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.853 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.853 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.853 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.853 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.853 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.795 00:17:06.795 18:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:06.795 18:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:06.795 18:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.365 18:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.366 18:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.366 18:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.366 18:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.366 18:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.366 18:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:07.366 { 00:17:07.366 "cntlid": 13, 00:17:07.366 "qid": 0, 00:17:07.366 "state": "enabled", 00:17:07.366 "thread": "nvmf_tgt_poll_group_000", 00:17:07.366 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:07.366 "listen_address": { 00:17:07.366 "trtype": "TCP", 00:17:07.366 "adrfam": "IPv4", 00:17:07.366 "traddr": "10.0.0.2", 00:17:07.366 "trsvcid": "4420" 00:17:07.366 }, 00:17:07.366 "peer_address": { 00:17:07.366 "trtype": "TCP", 00:17:07.366 "adrfam": "IPv4", 00:17:07.366 "traddr": "10.0.0.1", 00:17:07.366 "trsvcid": "41438" 00:17:07.366 }, 00:17:07.366 "auth": { 00:17:07.366 "state": "completed", 00:17:07.366 "digest": 
"sha256", 00:17:07.366 "dhgroup": "ffdhe2048" 00:17:07.366 } 00:17:07.366 } 00:17:07.366 ]' 00:17:07.366 18:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:07.366 18:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:07.366 18:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:07.366 18:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:07.366 18:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:07.626 18:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.626 18:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.626 18:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.886 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWU2YjgxN2NhNDJkYjAxNTI5ZTk0ODBkOTAwNjRlMzFkMmI3OTA2MjM1MjBiMzM3qySyQg==: --dhchap-ctrl-secret DHHC-1:01:ZGY3N2UwNGM0ZTg0ZTljYjM1YmRmOTYwNjg0N2NjNTOjL2kW: 00:17:07.886 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YWU2YjgxN2NhNDJkYjAxNTI5ZTk0ODBkOTAwNjRlMzFkMmI3OTA2MjM1MjBiMzM3qySyQg==: --dhchap-ctrl-secret DHHC-1:01:ZGY3N2UwNGM0ZTg0ZTljYjM1YmRmOTYwNjg0N2NjNTOjL2kW: 00:17:09.793 18:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.793 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.793 18:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:09.793 18:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.793 18:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.793 18:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.793 18:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:09.793 18:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:09.793 18:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:10.361 18:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:17:10.361 18:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:10.361 18:27:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:10.361 18:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:10.361 18:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:10.361 18:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.361 18:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:17:10.361 18:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.361 18:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.361 18:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.361 18:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:10.361 18:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:10.361 18:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:10.620 00:17:10.620 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:10.620 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:10.621 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.191 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.191 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.191 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.191 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.191 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.191 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:11.191 { 00:17:11.191 "cntlid": 15, 00:17:11.191 "qid": 0, 00:17:11.191 "state": "enabled", 00:17:11.191 "thread": "nvmf_tgt_poll_group_000", 00:17:11.191 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:11.191 "listen_address": { 00:17:11.191 "trtype": "TCP", 00:17:11.191 "adrfam": "IPv4", 00:17:11.191 "traddr": "10.0.0.2", 00:17:11.191 "trsvcid": "4420" 00:17:11.191 }, 00:17:11.191 "peer_address": { 00:17:11.191 "trtype": "TCP", 00:17:11.191 "adrfam": "IPv4", 00:17:11.191 "traddr": "10.0.0.1", 00:17:11.191 
"trsvcid": "41460" 00:17:11.191 }, 00:17:11.191 "auth": { 00:17:11.191 "state": "completed", 00:17:11.191 "digest": "sha256", 00:17:11.191 "dhgroup": "ffdhe2048" 00:17:11.191 } 00:17:11.191 } 00:17:11.191 ]' 00:17:11.191 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:11.191 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:11.191 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:11.191 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:11.191 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:11.191 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.191 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.191 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.132 18:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGYyNzI1ZGQzYjNkNWI4ZDc3MTUyOGUxY2JmYWI5ZjBjNWViNDE5NTIyYjE5NTk0Y2VlMjExM2ExMjVmZWQ1ZCxH6WQ=: 00:17:12.132 18:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:NGYyNzI1ZGQzYjNkNWI4ZDc3MTUyOGUxY2JmYWI5ZjBjNWViNDE5NTIyYjE5NTk0Y2VlMjExM2ExMjVmZWQ1ZCxH6WQ=: 00:17:13.509 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.509 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.509 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:13.509 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.509 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.769 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.769 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:13.769 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:13.769 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:13.769 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:14.337 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:17:14.337 18:27:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:14.337 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:14.337 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:14.337 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:14.337 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.337 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.337 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.337 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.337 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.337 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.337 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.338 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.906 00:17:14.906 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:14.906 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:14.906 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.476 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.476 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.476 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.476 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.476 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.476 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:15.476 { 00:17:15.476 "cntlid": 17, 00:17:15.476 "qid": 0, 00:17:15.476 "state": "enabled", 00:17:15.476 "thread": "nvmf_tgt_poll_group_000", 00:17:15.476 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:15.476 "listen_address": { 00:17:15.476 "trtype": "TCP", 00:17:15.476 "adrfam": "IPv4", 
00:17:15.476 "traddr": "10.0.0.2", 00:17:15.476 "trsvcid": "4420" 00:17:15.476 }, 00:17:15.476 "peer_address": { 00:17:15.476 "trtype": "TCP", 00:17:15.476 "adrfam": "IPv4", 00:17:15.476 "traddr": "10.0.0.1", 00:17:15.476 "trsvcid": "41478" 00:17:15.476 }, 00:17:15.476 "auth": { 00:17:15.476 "state": "completed", 00:17:15.476 "digest": "sha256", 00:17:15.476 "dhgroup": "ffdhe3072" 00:17:15.476 } 00:17:15.476 } 00:17:15.476 ]' 00:17:15.476 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:15.476 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:15.476 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:15.476 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:15.476 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:15.736 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.736 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.736 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.305 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTZkOTdlMjcyNDQ2ZGRiYzdjZmQwZWFmOTNjNDEwODg2MTYzNzE5MTk0ODdlNDYxV+mirQ==: --dhchap-ctrl-secret DHHC-1:03:YjVjNWYxY2FiMmM2YTRhZjNkNzcyN2EyNjIxYmJlNDU3OTdhMjBhNGMzNWNmOTBmY2E5NmVmNjVhYmQ4NjRjNBNaOBo=: 00:17:16.305 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:OTZkOTdlMjcyNDQ2ZGRiYzdjZmQwZWFmOTNjNDEwODg2MTYzNzE5MTk0ODdlNDYxV+mirQ==: --dhchap-ctrl-secret DHHC-1:03:YjVjNWYxY2FiMmM2YTRhZjNkNzcyN2EyNjIxYmJlNDU3OTdhMjBhNGMzNWNmOTBmY2E5NmVmNjVhYmQ4NjRjNBNaOBo=: 00:17:18.215 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.215 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.215 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:18.215 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.215 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.215 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.215 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:18.215 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:18.215 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:19.154 18:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:17:19.154 18:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:19.154 18:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:19.154 18:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:19.154 18:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:19.154 18:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.154 18:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.154 18:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.154 18:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.154 18:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.154 18:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.154 18:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.154 18:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.413 00:17:19.672 18:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:19.672 18:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.672 18:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:19.933 18:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.933 18:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.933 18:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.933 18:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.933 18:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.933 18:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:19.933 { 
00:17:19.933 "cntlid": 19, 00:17:19.933 "qid": 0, 00:17:19.933 "state": "enabled", 00:17:19.933 "thread": "nvmf_tgt_poll_group_000", 00:17:19.933 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:19.933 "listen_address": { 00:17:19.933 "trtype": "TCP", 00:17:19.933 "adrfam": "IPv4", 00:17:19.933 "traddr": "10.0.0.2", 00:17:19.933 "trsvcid": "4420" 00:17:19.933 }, 00:17:19.933 "peer_address": { 00:17:19.933 "trtype": "TCP", 00:17:19.933 "adrfam": "IPv4", 00:17:19.933 "traddr": "10.0.0.1", 00:17:19.933 "trsvcid": "35086" 00:17:19.933 }, 00:17:19.933 "auth": { 00:17:19.933 "state": "completed", 00:17:19.933 "digest": "sha256", 00:17:19.933 "dhgroup": "ffdhe3072" 00:17:19.933 } 00:17:19.933 } 00:17:19.933 ]' 00:17:19.933 18:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:19.933 18:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:19.933 18:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:19.933 18:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:19.933 18:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:20.192 18:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.192 18:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.192 18:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.760 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQ2NjdmNmNiNGM1ZmMzMmZkNjQwNDc5Njk0ZjdhYTJcdtvw: --dhchap-ctrl-secret DHHC-1:02:ZWRlMThkOGE1MmU3NDc0ODNhNGI0NzRhZjEyY2UwYWQ3M2MzMmI5Mjc5MDZiMGY1VEdBOQ==: 00:17:20.760 18:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:NTQ2NjdmNmNiNGM1ZmMzMmZkNjQwNDc5Njk0ZjdhYTJcdtvw: --dhchap-ctrl-secret DHHC-1:02:ZWRlMThkOGE1MmU3NDc0ODNhNGI0NzRhZjEyY2UwYWQ3M2MzMmI5Mjc5MDZiMGY1VEdBOQ==: 00:17:22.669 18:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.669 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.669 18:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:22.669 18:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.669 18:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.669 18:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.669 18:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:22.669 18:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:22.669 18:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:22.928 18:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:17:22.928 18:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:22.928 18:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:22.928 18:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:22.928 18:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:22.928 18:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.928 18:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.928 18:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.928 18:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.928 18:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.928 18:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.928 18:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.928 18:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.869 00:17:23.869 18:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:23.869 18:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:23.869 18:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.128 18:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.128 18:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.128 18:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.128 18:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.128 18:27:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.128 18:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:24.128 { 00:17:24.128 "cntlid": 21, 00:17:24.128 "qid": 0, 00:17:24.128 "state": "enabled", 00:17:24.128 "thread": "nvmf_tgt_poll_group_000", 00:17:24.128 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:24.128 "listen_address": { 00:17:24.128 "trtype": "TCP", 00:17:24.128 "adrfam": "IPv4", 00:17:24.128 "traddr": "10.0.0.2", 00:17:24.128 "trsvcid": "4420" 00:17:24.128 }, 00:17:24.128 "peer_address": { 00:17:24.128 "trtype": "TCP", 00:17:24.128 "adrfam": "IPv4", 00:17:24.128 "traddr": "10.0.0.1", 00:17:24.128 "trsvcid": "35102" 00:17:24.128 }, 00:17:24.128 "auth": { 00:17:24.128 "state": "completed", 00:17:24.128 "digest": "sha256", 00:17:24.128 "dhgroup": "ffdhe3072" 00:17:24.128 } 00:17:24.128 } 00:17:24.128 ]' 00:17:24.128 18:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:24.128 18:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:24.128 18:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:24.388 18:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:24.388 18:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:24.388 18:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.388 18:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.388 18:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.648 18:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWU2YjgxN2NhNDJkYjAxNTI5ZTk0ODBkOTAwNjRlMzFkMmI3OTA2MjM1MjBiMzM3qySyQg==: --dhchap-ctrl-secret DHHC-1:01:ZGY3N2UwNGM0ZTg0ZTljYjM1YmRmOTYwNjg0N2NjNTOjL2kW: 00:17:24.648 18:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YWU2YjgxN2NhNDJkYjAxNTI5ZTk0ODBkOTAwNjRlMzFkMmI3OTA2MjM1MjBiMzM3qySyQg==: --dhchap-ctrl-secret DHHC-1:01:ZGY3N2UwNGM0ZTg0ZTljYjM1YmRmOTYwNjg0N2NjNTOjL2kW: 00:17:26.553 18:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.553 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.554 18:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:26.554 18:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.554 18:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.554 18:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:17:26.554 18:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:26.554 18:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:26.554 18:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:27.123 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:17:27.123 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:27.123 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:27.123 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:27.123 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:27.123 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.123 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:17:27.123 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.123 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.123 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.123 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:27.123 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:27.123 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:28.060 00:17:28.060 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:28.060 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:28.060 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.334 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.334 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.334 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.334 18:27:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.334 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.334 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:28.334 { 00:17:28.334 "cntlid": 23, 00:17:28.334 "qid": 0, 00:17:28.334 "state": "enabled", 00:17:28.334 "thread": "nvmf_tgt_poll_group_000", 00:17:28.334 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:28.334 "listen_address": { 00:17:28.334 "trtype": "TCP", 00:17:28.334 "adrfam": "IPv4", 00:17:28.334 "traddr": "10.0.0.2", 00:17:28.334 "trsvcid": "4420" 00:17:28.334 }, 00:17:28.334 "peer_address": { 00:17:28.334 "trtype": "TCP", 00:17:28.334 "adrfam": "IPv4", 00:17:28.334 "traddr": "10.0.0.1", 00:17:28.334 "trsvcid": "42124" 00:17:28.334 }, 00:17:28.334 "auth": { 00:17:28.334 "state": "completed", 00:17:28.334 "digest": "sha256", 00:17:28.334 "dhgroup": "ffdhe3072" 00:17:28.334 } 00:17:28.334 } 00:17:28.334 ]' 00:17:28.334 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:28.334 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:28.334 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:28.334 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:28.334 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:28.605 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.605 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.605 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.175 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGYyNzI1ZGQzYjNkNWI4ZDc3MTUyOGUxY2JmYWI5ZjBjNWViNDE5NTIyYjE5NTk0Y2VlMjExM2ExMjVmZWQ1ZCxH6WQ=: 00:17:29.175 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:NGYyNzI1ZGQzYjNkNWI4ZDc3MTUyOGUxY2JmYWI5ZjBjNWViNDE5NTIyYjE5NTk0Y2VlMjExM2ExMjVmZWQ1ZCxH6WQ=: 00:17:31.083 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.083 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.083 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:31.083 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.083 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.083 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:17:31.083 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:31.083 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:31.083 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:31.083 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:31.343 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:17:31.343 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:31.343 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:31.343 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:31.343 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:31.343 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.343 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.343 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.343 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.343 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.343 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.343 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.343 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.294 00:17:32.294 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:32.294 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.294 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:32.866 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.866 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.866 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.866 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.866 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.866 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:32.866 { 00:17:32.866 "cntlid": 25, 00:17:32.866 "qid": 0, 00:17:32.866 "state": "enabled", 00:17:32.866 "thread": "nvmf_tgt_poll_group_000", 00:17:32.866 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:32.866 "listen_address": { 00:17:32.866 "trtype": "TCP", 00:17:32.866 "adrfam": "IPv4", 00:17:32.866 "traddr": "10.0.0.2", 00:17:32.866 "trsvcid": "4420" 00:17:32.866 }, 00:17:32.866 "peer_address": { 00:17:32.866 "trtype": "TCP", 00:17:32.866 "adrfam": "IPv4", 00:17:32.866 "traddr": "10.0.0.1", 00:17:32.866 "trsvcid": "42154" 00:17:32.866 }, 00:17:32.866 "auth": { 00:17:32.866 "state": "completed", 00:17:32.866 "digest": "sha256", 00:17:32.866 "dhgroup": "ffdhe4096" 00:17:32.866 } 00:17:32.866 } 00:17:32.866 ]' 00:17:32.866 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:32.866 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:32.866 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:32.866 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:32.866 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:32.866 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.866 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.866 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.810 18:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTZkOTdlMjcyNDQ2ZGRiYzdjZmQwZWFmOTNjNDEwODg2MTYzNzE5MTk0ODdlNDYxV+mirQ==: --dhchap-ctrl-secret DHHC-1:03:YjVjNWYxY2FiMmM2YTRhZjNkNzcyN2EyNjIxYmJlNDU3OTdhMjBhNGMzNWNmOTBmY2E5NmVmNjVhYmQ4NjRjNBNaOBo=: 00:17:33.810 18:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:OTZkOTdlMjcyNDQ2ZGRiYzdjZmQwZWFmOTNjNDEwODg2MTYzNzE5MTk0ODdlNDYxV+mirQ==: --dhchap-ctrl-secret DHHC-1:03:YjVjNWYxY2FiMmM2YTRhZjNkNzcyN2EyNjIxYmJlNDU3OTdhMjBhNGMzNWNmOTBmY2E5NmVmNjVhYmQ4NjRjNBNaOBo=: 00:17:35.720 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.720 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.720 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:35.720 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.720 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.720 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.720 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:35.720 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:35.720 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:35.720 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:17:35.720 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:35.720 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:35.720 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:35.720 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:35.720 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.720 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.720 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.720 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.720 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.720 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.720 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.720 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:36.292 00:17:36.292 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:36.292 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:36.292 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.863 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.863 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.863 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.863 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.863 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.863 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:36.863 { 00:17:36.863 "cntlid": 27, 00:17:36.863 "qid": 0, 00:17:36.863 "state": "enabled", 00:17:36.863 "thread": "nvmf_tgt_poll_group_000", 00:17:36.863 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:36.863 "listen_address": { 00:17:36.863 "trtype": "TCP", 00:17:36.863 "adrfam": "IPv4", 00:17:36.863 "traddr": "10.0.0.2", 00:17:36.863 "trsvcid": "4420" 00:17:36.863 }, 00:17:36.863 "peer_address": { 00:17:36.863 "trtype": "TCP", 00:17:36.863 "adrfam": "IPv4", 00:17:36.863 "traddr": "10.0.0.1", 00:17:36.863 "trsvcid": "60950" 00:17:36.863 }, 00:17:36.863 "auth": { 00:17:36.863 "state": "completed", 00:17:36.863 "digest": "sha256", 00:17:36.863 "dhgroup": "ffdhe4096" 00:17:36.863 } 00:17:36.863 } 00:17:36.863 ]' 00:17:36.863 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:36.863 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:36.863 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:37.123 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:37.123 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:37.123 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.123 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.123 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.383 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQ2NjdmNmNiNGM1ZmMzMmZkNjQwNDc5Njk0ZjdhYTJcdtvw: --dhchap-ctrl-secret DHHC-1:02:ZWRlMThkOGE1MmU3NDc0ODNhNGI0NzRhZjEyY2UwYWQ3M2MzMmI5Mjc5MDZiMGY1VEdBOQ==: 00:17:37.383 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:NTQ2NjdmNmNiNGM1ZmMzMmZkNjQwNDc5Njk0ZjdhYTJcdtvw: --dhchap-ctrl-secret DHHC-1:02:ZWRlMThkOGE1MmU3NDc0ODNhNGI0NzRhZjEyY2UwYWQ3M2MzMmI5Mjc5MDZiMGY1VEdBOQ==: 00:17:39.290 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:17:39.290 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.290 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:39.290 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.290 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.290 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.290 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:39.290 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:39.290 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:39.551 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:17:39.551 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:39.551 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:39.551 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:39.551 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:39.551 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.551 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.551 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.551 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.551 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.551 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.551 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.551 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.489 00:17:40.490 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
00:17:40.490 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:40.490 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.056 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.056 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.056 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.056 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.056 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.056 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:41.056 { 00:17:41.056 "cntlid": 29, 00:17:41.056 "qid": 0, 00:17:41.056 "state": "enabled", 00:17:41.056 "thread": "nvmf_tgt_poll_group_000", 00:17:41.056 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:41.056 "listen_address": { 00:17:41.057 "trtype": "TCP", 00:17:41.057 "adrfam": "IPv4", 00:17:41.057 "traddr": "10.0.0.2", 00:17:41.057 "trsvcid": "4420" 00:17:41.057 }, 00:17:41.057 "peer_address": { 00:17:41.057 "trtype": "TCP", 00:17:41.057 "adrfam": "IPv4", 00:17:41.057 "traddr": "10.0.0.1", 00:17:41.057 "trsvcid": "60982" 00:17:41.057 }, 00:17:41.057 "auth": { 00:17:41.057 "state": "completed", 00:17:41.057 "digest": "sha256", 00:17:41.057 "dhgroup": "ffdhe4096" 00:17:41.057 } 00:17:41.057 } 00:17:41.057 ]' 00:17:41.057 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:41.057 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:41.057 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:41.057 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:41.057 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:41.057 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.057 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.057 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.625 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWU2YjgxN2NhNDJkYjAxNTI5ZTk0ODBkOTAwNjRlMzFkMmI3OTA2MjM1MjBiMzM3qySyQg==: --dhchap-ctrl-secret DHHC-1:01:ZGY3N2UwNGM0ZTg0ZTljYjM1YmRmOTYwNjg0N2NjNTOjL2kW: 00:17:41.625 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YWU2YjgxN2NhNDJkYjAxNTI5ZTk0ODBkOTAwNjRlMzFkMmI3OTA2MjM1MjBiMzM3qySyQg==: 
--dhchap-ctrl-secret DHHC-1:01:ZGY3N2UwNGM0ZTg0ZTljYjM1YmRmOTYwNjg0N2NjNTOjL2kW: 00:17:43.529 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.529 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.529 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:43.530 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.530 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.530 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.530 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:43.530 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:43.530 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:43.788 18:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:17:43.788 18:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:43.788 18:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:43.788 18:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:43.788 18:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:43.788 18:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.788 18:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:17:43.788 18:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.788 18:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.788 18:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.788 18:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:43.788 18:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:43.789 18:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:44.726 00:17:44.726 18:28:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:44.726 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.726 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:44.985 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.985 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.985 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.985 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.985 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.985 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:44.985 { 00:17:44.985 "cntlid": 31, 00:17:44.985 "qid": 0, 00:17:44.985 "state": "enabled", 00:17:44.985 "thread": "nvmf_tgt_poll_group_000", 00:17:44.985 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:44.985 "listen_address": { 00:17:44.985 "trtype": "TCP", 00:17:44.985 "adrfam": "IPv4", 00:17:44.985 "traddr": "10.0.0.2", 00:17:44.985 "trsvcid": "4420" 00:17:44.985 }, 00:17:44.985 "peer_address": { 00:17:44.985 "trtype": "TCP", 00:17:44.985 "adrfam": "IPv4", 00:17:44.985 "traddr": "10.0.0.1", 00:17:44.985 "trsvcid": "32790" 00:17:44.985 }, 00:17:44.985 "auth": { 00:17:44.985 "state": "completed", 00:17:44.985 "digest": "sha256", 00:17:44.985 "dhgroup": "ffdhe4096" 00:17:44.985 } 00:17:44.985 } 00:17:44.985 ]' 00:17:44.985 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:45.244 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:45.244 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:45.244 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:45.244 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:45.244 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:45.244 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.244 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.814 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGYyNzI1ZGQzYjNkNWI4ZDc3MTUyOGUxY2JmYWI5ZjBjNWViNDE5NTIyYjE5NTk0Y2VlMjExM2ExMjVmZWQ1ZCxH6WQ=: 00:17:45.815 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret 
DHHC-1:03:NGYyNzI1ZGQzYjNkNWI4ZDc3MTUyOGUxY2JmYWI5ZjBjNWViNDE5NTIyYjE5NTk0Y2VlMjExM2ExMjVmZWQ1ZCxH6WQ=: 00:17:47.726 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.726 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.726 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:47.726 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.726 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.726 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.726 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:47.726 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:47.726 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:47.726 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:48.296 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:17:48.296 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:48.296 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:48.296 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:48.296 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:48.296 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.296 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.296 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.296 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.296 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.296 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.296 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.296 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.678 00:17:49.678 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:49.678 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:49.678 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.248 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.248 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.248 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.248 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.248 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.248 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:50.249 { 00:17:50.249 "cntlid": 33, 00:17:50.249 "qid": 0, 00:17:50.249 "state": "enabled", 00:17:50.249 "thread": "nvmf_tgt_poll_group_000", 00:17:50.249 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:50.249 "listen_address": { 00:17:50.249 "trtype": "TCP", 00:17:50.249 "adrfam": "IPv4", 00:17:50.249 "traddr": "10.0.0.2", 00:17:50.249 "trsvcid": "4420" 00:17:50.249 }, 00:17:50.249 "peer_address": { 00:17:50.249 "trtype": "TCP", 00:17:50.249 "adrfam": "IPv4", 00:17:50.249 "traddr": "10.0.0.1", 00:17:50.249 "trsvcid": "38338" 00:17:50.249 }, 00:17:50.249 "auth": { 00:17:50.249 "state": "completed", 00:17:50.249 "digest": "sha256", 00:17:50.249 "dhgroup": "ffdhe6144" 00:17:50.249 } 00:17:50.249 } 00:17:50.249 ]' 00:17:50.249 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:50.249 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:50.249 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:50.249 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:50.249 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:50.249 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.249 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.249 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.191 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTZkOTdlMjcyNDQ2ZGRiYzdjZmQwZWFmOTNjNDEwODg2MTYzNzE5MTk0ODdlNDYxV+mirQ==: --dhchap-ctrl-secret 
DHHC-1:03:YjVjNWYxY2FiMmM2YTRhZjNkNzcyN2EyNjIxYmJlNDU3OTdhMjBhNGMzNWNmOTBmY2E5NmVmNjVhYmQ4NjRjNBNaOBo=: 00:17:51.191 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:OTZkOTdlMjcyNDQ2ZGRiYzdjZmQwZWFmOTNjNDEwODg2MTYzNzE5MTk0ODdlNDYxV+mirQ==: --dhchap-ctrl-secret DHHC-1:03:YjVjNWYxY2FiMmM2YTRhZjNkNzcyN2EyNjIxYmJlNDU3OTdhMjBhNGMzNWNmOTBmY2E5NmVmNjVhYmQ4NjRjNBNaOBo=: 00:17:53.127 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.127 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.127 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:53.127 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.127 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.127 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.127 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:53.127 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:53.127 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:53.388 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:17:53.388 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:53.388 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:53.388 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:53.388 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:53.388 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.388 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.388 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.388 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.388 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.388 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.388 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.388 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:54.770 00:17:54.770 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:54.770 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:54.770 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.338 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.338 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.338 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.338 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.338 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.338 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:55.338 { 00:17:55.338 "cntlid": 35, 00:17:55.338 "qid": 0, 00:17:55.338 "state": "enabled", 00:17:55.338 "thread": "nvmf_tgt_poll_group_000", 00:17:55.338 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:55.338 "listen_address": { 00:17:55.338 "trtype": "TCP", 00:17:55.338 "adrfam": "IPv4", 00:17:55.338 "traddr": "10.0.0.2", 00:17:55.338 "trsvcid": "4420" 00:17:55.338 }, 00:17:55.338 "peer_address": { 00:17:55.338 "trtype": "TCP", 00:17:55.338 "adrfam": "IPv4", 00:17:55.338 "traddr": "10.0.0.1", 00:17:55.338 "trsvcid": "38370" 00:17:55.338 }, 00:17:55.338 "auth": { 00:17:55.338 "state": "completed", 00:17:55.338 "digest": "sha256", 00:17:55.338 "dhgroup": "ffdhe6144" 00:17:55.338 } 00:17:55.338 } 00:17:55.338 ]' 00:17:55.338 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:55.338 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:55.338 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:55.338 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:55.338 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:55.338 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.338 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.338 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.906 18:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQ2NjdmNmNiNGM1ZmMzMmZkNjQwNDc5Njk0ZjdhYTJcdtvw: --dhchap-ctrl-secret DHHC-1:02:ZWRlMThkOGE1MmU3NDc0ODNhNGI0NzRhZjEyY2UwYWQ3M2MzMmI5Mjc5MDZiMGY1VEdBOQ==: 00:17:55.906 18:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:NTQ2NjdmNmNiNGM1ZmMzMmZkNjQwNDc5Njk0ZjdhYTJcdtvw: --dhchap-ctrl-secret DHHC-1:02:ZWRlMThkOGE1MmU3NDc0ODNhNGI0NzRhZjEyY2UwYWQ3M2MzMmI5Mjc5MDZiMGY1VEdBOQ==: 00:17:57.809 18:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.809 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.809 18:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:57.809 18:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.809 18:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.809 18:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.809 18:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:57.809 18:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:57.809 18:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:58.380 18:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:17:58.380 18:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:58.380 18:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:58.380 18:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:58.380 18:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:58.380 18:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.380 18:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.380 18:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.380 18:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.380 18:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.380 18:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.380 18:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.380 18:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.337 00:17:59.337 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:59.337 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.337 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:59.904 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.904 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.904 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.904 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.904 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.904 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:59.904 { 00:17:59.904 "cntlid": 37, 00:17:59.904 "qid": 0, 00:17:59.904 "state": "enabled", 00:17:59.904 "thread": "nvmf_tgt_poll_group_000", 00:17:59.904 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:59.904 "listen_address": { 00:17:59.904 "trtype": "TCP", 00:17:59.904 "adrfam": "IPv4", 00:17:59.904 "traddr": "10.0.0.2", 00:17:59.904 "trsvcid": "4420" 00:17:59.904 }, 00:17:59.904 "peer_address": { 00:17:59.904 "trtype": "TCP", 00:17:59.904 "adrfam": "IPv4", 00:17:59.904 "traddr": "10.0.0.1", 00:17:59.904 "trsvcid": "59356" 00:17:59.904 }, 00:17:59.904 "auth": { 00:17:59.904 "state": "completed", 00:17:59.904 "digest": "sha256", 00:17:59.904 "dhgroup": "ffdhe6144" 00:17:59.904 } 00:17:59.904 } 00:17:59.904 ]' 00:17:59.904 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:59.904 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:59.904 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:59.904 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:59.904 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:59.904 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.904 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:17:59.904 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.473 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWU2YjgxN2NhNDJkYjAxNTI5ZTk0ODBkOTAwNjRlMzFkMmI3OTA2MjM1MjBiMzM3qySyQg==: --dhchap-ctrl-secret DHHC-1:01:ZGY3N2UwNGM0ZTg0ZTljYjM1YmRmOTYwNjg0N2NjNTOjL2kW: 00:18:00.473 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YWU2YjgxN2NhNDJkYjAxNTI5ZTk0ODBkOTAwNjRlMzFkMmI3OTA2MjM1MjBiMzM3qySyQg==: --dhchap-ctrl-secret DHHC-1:01:ZGY3N2UwNGM0ZTg0ZTljYjM1YmRmOTYwNjg0N2NjNTOjL2kW: 00:18:02.380 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.380 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.380 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:02.380 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.380 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.380 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.380 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:02.380 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:02.380 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:02.947 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:18:02.947 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:02.947 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:02.947 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:02.947 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:02.947 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.947 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:18:02.947 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.947 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.947 18:28:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.947 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:02.947 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:02.947 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:04.323 00:18:04.323 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:04.323 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.323 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:04.889 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.889 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.889 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.889 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.889 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.889 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:04.889 { 00:18:04.889 "cntlid": 39, 00:18:04.889 "qid": 0, 00:18:04.889 "state": "enabled", 00:18:04.889 "thread": "nvmf_tgt_poll_group_000", 00:18:04.889 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:04.889 "listen_address": { 00:18:04.889 "trtype": "TCP", 00:18:04.889 "adrfam": "IPv4", 00:18:04.889 "traddr": "10.0.0.2", 00:18:04.889 "trsvcid": "4420" 00:18:04.889 }, 00:18:04.889 "peer_address": { 00:18:04.889 "trtype": "TCP", 00:18:04.889 "adrfam": "IPv4", 00:18:04.889 "traddr": "10.0.0.1", 00:18:04.889 "trsvcid": "59386" 00:18:04.889 }, 00:18:04.889 "auth": { 00:18:04.889 "state": "completed", 00:18:04.889 "digest": "sha256", 00:18:04.889 "dhgroup": "ffdhe6144" 00:18:04.889 } 00:18:04.889 } 00:18:04.889 ]' 00:18:04.889 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:04.889 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:04.889 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:04.889 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:04.889 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:04.889 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:18:04.889 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.889 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.456 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGYyNzI1ZGQzYjNkNWI4ZDc3MTUyOGUxY2JmYWI5ZjBjNWViNDE5NTIyYjE5NTk0Y2VlMjExM2ExMjVmZWQ1ZCxH6WQ=: 00:18:05.456 18:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:NGYyNzI1ZGQzYjNkNWI4ZDc3MTUyOGUxY2JmYWI5ZjBjNWViNDE5NTIyYjE5NTk0Y2VlMjExM2ExMjVmZWQ1ZCxH6WQ=: 00:18:07.356 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.356 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.356 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:07.356 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.356 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.356 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.356 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:07.356 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:07.356 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:07.356 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:07.615 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:18:07.615 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:07.615 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:07.615 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:07.615 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:07.615 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.615 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.615 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
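The trace keeps repeating one connect_authenticate iteration per digest/dhgroup/key combination. Condensed from the commands visible in the log (using the key2 round just completed above as the example), the cycle looks roughly like the sketch below. The RPC/HOSTNQN/SUBNQN variable names are illustrative only; the key names refer to keys registered earlier in auth.sh (not shown in this excerpt), and the socket used by the target-side rpc_cmd calls is likewise not visible here, so it is left implicit.

# One DH-HMAC-CHAP round-trip as exercised by target/auth.sh (condensed sketch).
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
SUBNQN=nqn.2024-03.io.spdk:cnode0

# 1. Restrict the host-side SPDK app (socket /var/tmp/host.sock) to one digest/dhgroup pair.
$RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144

# 2. Allow the host NQN on the subsystem with the key pair under test
#    (the trace issues this target-side via rpc_cmd; its socket is not shown in this excerpt).
$RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key2 --dhchap-ctrlr-key ckey2

# 3. Attach a controller from the host app with the same keys; this forces the auth handshake.
$RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q $HOSTNQN -n $SUBNQN -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2

# 4. Verify: the controller exists and the target reports a completed handshake
#    with the expected digest and dhgroup on the new qpair.
$RPC -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
$RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.state'          # expect "completed"
$RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.digest'         # expect sha256
$RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.dhgroup'        # expect ffdhe6144

# 5. Tear down before the next digest/dhgroup/key combination.
$RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0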
00:18:07.615 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.615 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.615 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.615 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.615 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.517 00:18:09.517 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:09.517 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:09.517 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.775 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.775 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.775 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.775 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.775 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.775 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:09.775 { 00:18:09.775 "cntlid": 41, 00:18:09.775 "qid": 0, 00:18:09.775 "state": "enabled", 00:18:09.775 "thread": "nvmf_tgt_poll_group_000", 00:18:09.775 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:09.775 "listen_address": { 00:18:09.775 "trtype": "TCP", 00:18:09.775 "adrfam": "IPv4", 00:18:09.775 "traddr": "10.0.0.2", 00:18:09.775 "trsvcid": "4420" 00:18:09.775 }, 00:18:09.775 "peer_address": { 00:18:09.775 "trtype": "TCP", 00:18:09.775 "adrfam": "IPv4", 00:18:09.775 "traddr": "10.0.0.1", 00:18:09.775 "trsvcid": "49576" 00:18:09.775 }, 00:18:09.775 "auth": { 00:18:09.775 "state": "completed", 00:18:09.775 "digest": "sha256", 00:18:09.775 "dhgroup": "ffdhe8192" 00:18:09.775 } 00:18:09.775 } 00:18:09.775 ]' 00:18:09.775 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:09.775 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:09.775 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:09.775 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:09.775 18:28:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:10.037 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.037 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.037 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.295 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTZkOTdlMjcyNDQ2ZGRiYzdjZmQwZWFmOTNjNDEwODg2MTYzNzE5MTk0ODdlNDYxV+mirQ==: --dhchap-ctrl-secret DHHC-1:03:YjVjNWYxY2FiMmM2YTRhZjNkNzcyN2EyNjIxYmJlNDU3OTdhMjBhNGMzNWNmOTBmY2E5NmVmNjVhYmQ4NjRjNBNaOBo=: 00:18:10.295 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:OTZkOTdlMjcyNDQ2ZGRiYzdjZmQwZWFmOTNjNDEwODg2MTYzNzE5MTk0ODdlNDYxV+mirQ==: --dhchap-ctrl-secret DHHC-1:03:YjVjNWYxY2FiMmM2YTRhZjNkNzcyN2EyNjIxYmJlNDU3OTdhMjBhNGMzNWNmOTBmY2E5NmVmNjVhYmQ4NjRjNBNaOBo=: 00:18:12.202 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.202 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.202 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:12.202 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.202 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.202 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.202 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:12.202 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:12.202 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:12.462 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:18:12.462 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:12.462 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:12.462 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:12.462 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:12.462 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.462 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.462 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.462 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.462 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.462 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.462 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.462 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.002 00:18:15.002 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:15.002 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:15.002 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.002 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.003 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.003 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.003 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.003 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.003 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:15.003 { 00:18:15.003 "cntlid": 43, 00:18:15.003 "qid": 0, 00:18:15.003 "state": "enabled", 00:18:15.003 "thread": "nvmf_tgt_poll_group_000", 00:18:15.003 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:15.003 "listen_address": { 00:18:15.003 "trtype": "TCP", 00:18:15.003 "adrfam": "IPv4", 00:18:15.003 "traddr": "10.0.0.2", 00:18:15.003 "trsvcid": "4420" 00:18:15.003 }, 00:18:15.003 "peer_address": { 00:18:15.003 "trtype": "TCP", 00:18:15.003 "adrfam": "IPv4", 00:18:15.003 "traddr": "10.0.0.1", 00:18:15.003 "trsvcid": "49622" 00:18:15.003 }, 00:18:15.003 "auth": { 00:18:15.003 "state": "completed", 00:18:15.003 "digest": "sha256", 00:18:15.003 "dhgroup": "ffdhe8192" 00:18:15.003 } 00:18:15.003 } 00:18:15.003 ]' 00:18:15.003 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:15.003 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:18:15.003 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:15.262 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:15.262 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:15.262 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.262 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.262 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.831 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQ2NjdmNmNiNGM1ZmMzMmZkNjQwNDc5Njk0ZjdhYTJcdtvw: --dhchap-ctrl-secret DHHC-1:02:ZWRlMThkOGE1MmU3NDc0ODNhNGI0NzRhZjEyY2UwYWQ3M2MzMmI5Mjc5MDZiMGY1VEdBOQ==: 00:18:15.831 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:NTQ2NjdmNmNiNGM1ZmMzMmZkNjQwNDc5Njk0ZjdhYTJcdtvw: --dhchap-ctrl-secret DHHC-1:02:ZWRlMThkOGE1MmU3NDc0ODNhNGI0NzRhZjEyY2UwYWQ3M2MzMmI5Mjc5MDZiMGY1VEdBOQ==: 00:18:17.737 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.737 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.737 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:17.737 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.737 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.737 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.737 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:17.737 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:17.737 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:18.307 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:18:18.307 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:18.307 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:18.307 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:18.307 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:18.307 18:28:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.307 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.307 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.307 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.307 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.307 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.307 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.307 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:20.217 00:18:20.217 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:20.217 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:20.217 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.785 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.785 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.785 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.785 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.785 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.785 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:20.785 { 00:18:20.785 "cntlid": 45, 00:18:20.785 "qid": 0, 00:18:20.785 "state": "enabled", 00:18:20.785 "thread": "nvmf_tgt_poll_group_000", 00:18:20.785 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:20.785 "listen_address": { 00:18:20.785 "trtype": "TCP", 00:18:20.785 "adrfam": "IPv4", 00:18:20.785 "traddr": "10.0.0.2", 00:18:20.785 "trsvcid": "4420" 00:18:20.785 }, 00:18:20.785 "peer_address": { 00:18:20.785 "trtype": "TCP", 00:18:20.785 "adrfam": "IPv4", 00:18:20.785 "traddr": "10.0.0.1", 00:18:20.785 "trsvcid": "60384" 00:18:20.785 }, 00:18:20.785 "auth": { 00:18:20.785 "state": "completed", 00:18:20.785 "digest": "sha256", 00:18:20.785 "dhgroup": "ffdhe8192" 00:18:20.785 } 00:18:20.785 } 00:18:20.785 ]' 00:18:20.786 
18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:20.786 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:20.786 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:20.786 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:20.786 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:21.044 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.044 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.044 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.302 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWU2YjgxN2NhNDJkYjAxNTI5ZTk0ODBkOTAwNjRlMzFkMmI3OTA2MjM1MjBiMzM3qySyQg==: --dhchap-ctrl-secret DHHC-1:01:ZGY3N2UwNGM0ZTg0ZTljYjM1YmRmOTYwNjg0N2NjNTOjL2kW: 00:18:21.302 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YWU2YjgxN2NhNDJkYjAxNTI5ZTk0ODBkOTAwNjRlMzFkMmI3OTA2MjM1MjBiMzM3qySyQg==: --dhchap-ctrl-secret DHHC-1:01:ZGY3N2UwNGM0ZTg0ZTljYjM1YmRmOTYwNjg0N2NjNTOjL2kW: 00:18:23.204 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.204 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.204 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:23.204 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.204 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.204 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.204 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:23.204 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:23.204 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:23.462 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:18:23.462 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:23.462 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:23.462 18:28:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:23.462 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:23.462 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.462 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:18:23.462 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.462 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.462 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.462 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:23.462 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:23.462 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:25.365 00:18:25.365 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:25.365 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:25.365 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.623 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.623 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.623 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.623 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.881 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.881 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:25.881 { 00:18:25.881 "cntlid": 47, 00:18:25.881 "qid": 0, 00:18:25.881 "state": "enabled", 00:18:25.881 "thread": "nvmf_tgt_poll_group_000", 00:18:25.881 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:25.881 "listen_address": { 00:18:25.881 "trtype": "TCP", 00:18:25.881 "adrfam": "IPv4", 00:18:25.881 "traddr": "10.0.0.2", 00:18:25.881 "trsvcid": "4420" 00:18:25.881 }, 00:18:25.881 "peer_address": { 00:18:25.881 "trtype": "TCP", 00:18:25.881 "adrfam": "IPv4", 00:18:25.881 "traddr": "10.0.0.1", 00:18:25.881 "trsvcid": "60412" 00:18:25.881 }, 00:18:25.881 "auth": { 00:18:25.881 "state": "completed", 00:18:25.882 
"digest": "sha256", 00:18:25.882 "dhgroup": "ffdhe8192" 00:18:25.882 } 00:18:25.882 } 00:18:25.882 ]' 00:18:25.882 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:25.882 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:25.882 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:25.882 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:25.882 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:25.882 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.882 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.882 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.817 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGYyNzI1ZGQzYjNkNWI4ZDc3MTUyOGUxY2JmYWI5ZjBjNWViNDE5NTIyYjE5NTk0Y2VlMjExM2ExMjVmZWQ1ZCxH6WQ=: 00:18:26.817 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:NGYyNzI1ZGQzYjNkNWI4ZDc3MTUyOGUxY2JmYWI5ZjBjNWViNDE5NTIyYjE5NTk0Y2VlMjExM2ExMjVmZWQ1ZCxH6WQ=: 00:18:28.721 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.721 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:28.721 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:28.721 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.721 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.721 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.722 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:28.722 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:28.722 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:28.722 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:28.722 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:28.985 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:18:28.985 18:28:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:28.985 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:28.985 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:28.985 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:28.985 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.985 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.985 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.985 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.256 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.256 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.256 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.256 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.854 00:18:29.854 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:29.854 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.854 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:30.421 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.421 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.421 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.421 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.421 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.421 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:30.421 { 00:18:30.421 "cntlid": 49, 00:18:30.421 "qid": 0, 00:18:30.421 "state": "enabled", 00:18:30.421 "thread": "nvmf_tgt_poll_group_000", 00:18:30.421 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:30.421 "listen_address": { 00:18:30.421 "trtype": "TCP", 00:18:30.421 "adrfam": "IPv4", 
00:18:30.421 "traddr": "10.0.0.2", 00:18:30.421 "trsvcid": "4420" 00:18:30.421 }, 00:18:30.421 "peer_address": { 00:18:30.421 "trtype": "TCP", 00:18:30.421 "adrfam": "IPv4", 00:18:30.421 "traddr": "10.0.0.1", 00:18:30.421 "trsvcid": "37984" 00:18:30.421 }, 00:18:30.421 "auth": { 00:18:30.421 "state": "completed", 00:18:30.421 "digest": "sha384", 00:18:30.421 "dhgroup": "null" 00:18:30.421 } 00:18:30.421 } 00:18:30.421 ]' 00:18:30.421 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:30.421 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:30.421 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:30.421 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:30.421 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:30.421 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.421 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.421 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.359 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTZkOTdlMjcyNDQ2ZGRiYzdjZmQwZWFmOTNjNDEwODg2MTYzNzE5MTk0ODdlNDYxV+mirQ==: --dhchap-ctrl-secret DHHC-1:03:YjVjNWYxY2FiMmM2YTRhZjNkNzcyN2EyNjIxYmJlNDU3OTdhMjBhNGMzNWNmOTBmY2E5NmVmNjVhYmQ4NjRjNBNaOBo=: 00:18:31.359 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:OTZkOTdlMjcyNDQ2ZGRiYzdjZmQwZWFmOTNjNDEwODg2MTYzNzE5MTk0ODdlNDYxV+mirQ==: --dhchap-ctrl-secret DHHC-1:03:YjVjNWYxY2FiMmM2YTRhZjNkNzcyN2EyNjIxYmJlNDU3OTdhMjBhNGMzNWNmOTBmY2E5NmVmNjVhYmQ4NjRjNBNaOBo=: 00:18:32.735 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.994 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:32.994 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:32.994 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.994 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.994 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.994 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:32.994 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:32.994 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:33.252 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:18:33.252 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:33.252 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:33.252 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:33.253 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:33.253 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:33.253 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:33.253 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.253 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.253 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.253 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:33.253 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:33.253 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:33.820 00:18:33.820 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:33.820 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:33.820 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.387 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.387 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:34.387 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.387 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.387 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.387 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:34.387 { 00:18:34.387 "cntlid": 51, 00:18:34.387 "qid": 0, 00:18:34.387 "state": "enabled", 
00:18:34.387 "thread": "nvmf_tgt_poll_group_000", 00:18:34.387 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:34.387 "listen_address": { 00:18:34.387 "trtype": "TCP", 00:18:34.387 "adrfam": "IPv4", 00:18:34.387 "traddr": "10.0.0.2", 00:18:34.387 "trsvcid": "4420" 00:18:34.387 }, 00:18:34.387 "peer_address": { 00:18:34.387 "trtype": "TCP", 00:18:34.387 "adrfam": "IPv4", 00:18:34.387 "traddr": "10.0.0.1", 00:18:34.387 "trsvcid": "38016" 00:18:34.387 }, 00:18:34.387 "auth": { 00:18:34.387 "state": "completed", 00:18:34.387 "digest": "sha384", 00:18:34.387 "dhgroup": "null" 00:18:34.387 } 00:18:34.387 } 00:18:34.387 ]' 00:18:34.387 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:34.387 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:34.387 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:34.387 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:34.387 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:34.387 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.387 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.387 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.329 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQ2NjdmNmNiNGM1ZmMzMmZkNjQwNDc5Njk0ZjdhYTJcdtvw: --dhchap-ctrl-secret DHHC-1:02:ZWRlMThkOGE1MmU3NDc0ODNhNGI0NzRhZjEyY2UwYWQ3M2MzMmI5Mjc5MDZiMGY1VEdBOQ==: 00:18:35.329 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:NTQ2NjdmNmNiNGM1ZmMzMmZkNjQwNDc5Njk0ZjdhYTJcdtvw: --dhchap-ctrl-secret DHHC-1:02:ZWRlMThkOGE1MmU3NDc0ODNhNGI0NzRhZjEyY2UwYWQ3M2MzMmI5Mjc5MDZiMGY1VEdBOQ==: 00:18:37.236 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.236 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.236 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:37.236 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.236 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.236 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.236 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:37.236 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:18:37.236 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:37.236 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:18:37.236 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:37.236 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:37.236 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:37.236 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:37.236 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:37.236 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:37.236 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.236 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.496 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.496 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:37.496 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:37.496 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:37.754 00:18:37.754 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:37.754 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:37.754 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.323 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.323 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:38.323 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.323 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.323 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.323 18:29:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:38.323 { 00:18:38.323 "cntlid": 53, 00:18:38.323 "qid": 0, 00:18:38.323 "state": "enabled", 00:18:38.323 "thread": "nvmf_tgt_poll_group_000", 00:18:38.323 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:38.323 "listen_address": { 00:18:38.323 "trtype": "TCP", 00:18:38.323 "adrfam": "IPv4", 00:18:38.323 "traddr": "10.0.0.2", 00:18:38.323 "trsvcid": "4420" 00:18:38.323 }, 00:18:38.323 "peer_address": { 00:18:38.323 "trtype": "TCP", 00:18:38.323 "adrfam": "IPv4", 00:18:38.323 "traddr": "10.0.0.1", 00:18:38.323 "trsvcid": "41178" 00:18:38.323 }, 00:18:38.323 "auth": { 00:18:38.323 "state": "completed", 00:18:38.323 "digest": "sha384", 00:18:38.323 "dhgroup": "null" 00:18:38.323 } 00:18:38.323 } 00:18:38.323 ]' 00:18:38.323 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:38.323 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:38.323 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:38.323 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:38.323 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:38.323 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.323 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.323 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:39.263 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWU2YjgxN2NhNDJkYjAxNTI5ZTk0ODBkOTAwNjRlMzFkMmI3OTA2MjM1MjBiMzM3qySyQg==: --dhchap-ctrl-secret DHHC-1:01:ZGY3N2UwNGM0ZTg0ZTljYjM1YmRmOTYwNjg0N2NjNTOjL2kW: 00:18:39.263 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YWU2YjgxN2NhNDJkYjAxNTI5ZTk0ODBkOTAwNjRlMzFkMmI3OTA2MjM1MjBiMzM3qySyQg==: --dhchap-ctrl-secret DHHC-1:01:ZGY3N2UwNGM0ZTg0ZTljYjM1YmRmOTYwNjg0N2NjNTOjL2kW: 00:18:41.168 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.168 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.168 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:41.168 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.168 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.168 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.168 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:18:41.168 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:41.169 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:41.428 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:18:41.428 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:41.428 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:41.428 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:41.428 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:41.428 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:41.428 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:18:41.428 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.428 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.428 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.428 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:41.428 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:41.428 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:41.688 00:18:41.947 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:41.947 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:41.947 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.206 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.206 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:42.206 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.206 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.206 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.206 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:42.206 { 00:18:42.206 "cntlid": 55, 00:18:42.206 "qid": 0, 00:18:42.206 "state": "enabled", 00:18:42.206 "thread": "nvmf_tgt_poll_group_000", 00:18:42.206 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:42.206 "listen_address": { 00:18:42.206 "trtype": "TCP", 00:18:42.206 "adrfam": "IPv4", 00:18:42.206 "traddr": "10.0.0.2", 00:18:42.206 "trsvcid": "4420" 00:18:42.206 }, 00:18:42.206 "peer_address": { 00:18:42.206 "trtype": "TCP", 00:18:42.206 "adrfam": "IPv4", 00:18:42.206 "traddr": "10.0.0.1", 00:18:42.206 "trsvcid": "41198" 00:18:42.206 }, 00:18:42.206 "auth": { 00:18:42.206 "state": "completed", 00:18:42.206 "digest": "sha384", 00:18:42.206 "dhgroup": "null" 00:18:42.206 } 00:18:42.206 } 00:18:42.206 ]' 00:18:42.206 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:42.206 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:42.206 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:42.206 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:42.206 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:42.466 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:42.466 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.466 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.043 18:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGYyNzI1ZGQzYjNkNWI4ZDc3MTUyOGUxY2JmYWI5ZjBjNWViNDE5NTIyYjE5NTk0Y2VlMjExM2ExMjVmZWQ1ZCxH6WQ=: 00:18:43.043 18:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:NGYyNzI1ZGQzYjNkNWI4ZDc3MTUyOGUxY2JmYWI5ZjBjNWViNDE5NTIyYjE5NTk0Y2VlMjExM2ExMjVmZWQ1ZCxH6WQ=: 00:18:44.953 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.953 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.953 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:44.953 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.953 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.953 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.953 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:44.953 18:29:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:44.953 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:44.953 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:45.213 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:18:45.213 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:45.213 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:45.213 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:45.213 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:45.213 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.213 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:45.213 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.213 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.471 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.472 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:45.472 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:45.472 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:45.729 00:18:45.729 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:45.729 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:45.729 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:46.299 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.299 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:46.299 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:46.299 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.299 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.299 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:46.299 { 00:18:46.299 "cntlid": 57, 00:18:46.299 "qid": 0, 00:18:46.299 "state": "enabled", 00:18:46.299 "thread": "nvmf_tgt_poll_group_000", 00:18:46.299 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:46.299 "listen_address": { 00:18:46.299 "trtype": "TCP", 00:18:46.299 "adrfam": "IPv4", 00:18:46.299 "traddr": "10.0.0.2", 00:18:46.299 "trsvcid": "4420" 00:18:46.299 }, 00:18:46.299 "peer_address": { 00:18:46.299 "trtype": "TCP", 00:18:46.299 "adrfam": "IPv4", 00:18:46.299 "traddr": "10.0.0.1", 00:18:46.299 "trsvcid": "41232" 00:18:46.299 }, 00:18:46.299 "auth": { 00:18:46.299 "state": "completed", 00:18:46.299 "digest": "sha384", 00:18:46.299 "dhgroup": "ffdhe2048" 00:18:46.299 } 00:18:46.299 } 00:18:46.299 ]' 00:18:46.299 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:46.299 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:46.299 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:46.299 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:46.299 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:46.299 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:46.299 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.299 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.868 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTZkOTdlMjcyNDQ2ZGRiYzdjZmQwZWFmOTNjNDEwODg2MTYzNzE5MTk0ODdlNDYxV+mirQ==: --dhchap-ctrl-secret DHHC-1:03:YjVjNWYxY2FiMmM2YTRhZjNkNzcyN2EyNjIxYmJlNDU3OTdhMjBhNGMzNWNmOTBmY2E5NmVmNjVhYmQ4NjRjNBNaOBo=: 00:18:46.868 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:OTZkOTdlMjcyNDQ2ZGRiYzdjZmQwZWFmOTNjNDEwODg2MTYzNzE5MTk0ODdlNDYxV+mirQ==: --dhchap-ctrl-secret DHHC-1:03:YjVjNWYxY2FiMmM2YTRhZjNkNzcyN2EyNjIxYmJlNDU3OTdhMjBhNGMzNWNmOTBmY2E5NmVmNjVhYmQ4NjRjNBNaOBo=: 00:18:48.776 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.776 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.776 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:48.776 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.776 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.776 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.776 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:48.776 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:48.776 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:49.035 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:18:49.035 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:49.035 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:49.035 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:49.035 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:49.035 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:49.035 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:49.035 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.035 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.295 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.295 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:49.295 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:49.295 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:49.554 00:18:49.554 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:49.554 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:49.554 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.124 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.124 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:50.124 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.124 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.124 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.124 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:50.124 { 00:18:50.124 "cntlid": 59, 00:18:50.124 "qid": 0, 00:18:50.124 "state": "enabled", 00:18:50.124 "thread": "nvmf_tgt_poll_group_000", 00:18:50.124 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:50.124 "listen_address": { 00:18:50.124 "trtype": "TCP", 00:18:50.124 "adrfam": "IPv4", 00:18:50.124 "traddr": "10.0.0.2", 00:18:50.124 "trsvcid": "4420" 00:18:50.124 }, 00:18:50.124 "peer_address": { 00:18:50.124 "trtype": "TCP", 00:18:50.124 "adrfam": "IPv4", 00:18:50.124 "traddr": "10.0.0.1", 00:18:50.124 "trsvcid": "56472" 00:18:50.124 }, 00:18:50.124 "auth": { 00:18:50.124 "state": "completed", 00:18:50.124 "digest": "sha384", 00:18:50.124 "dhgroup": "ffdhe2048" 00:18:50.124 } 00:18:50.124 } 00:18:50.124 ]' 00:18:50.124 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:50.124 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:50.124 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:50.124 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:50.124 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:50.124 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:50.124 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:50.124 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.063 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQ2NjdmNmNiNGM1ZmMzMmZkNjQwNDc5Njk0ZjdhYTJcdtvw: --dhchap-ctrl-secret DHHC-1:02:ZWRlMThkOGE1MmU3NDc0ODNhNGI0NzRhZjEyY2UwYWQ3M2MzMmI5Mjc5MDZiMGY1VEdBOQ==: 00:18:51.063 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:NTQ2NjdmNmNiNGM1ZmMzMmZkNjQwNDc5Njk0ZjdhYTJcdtvw: --dhchap-ctrl-secret DHHC-1:02:ZWRlMThkOGE1MmU3NDc0ODNhNGI0NzRhZjEyY2UwYWQ3M2MzMmI5Mjc5MDZiMGY1VEdBOQ==: 00:18:52.443 18:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.443 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.443 18:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:52.443 18:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.443 18:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.443 18:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.443 18:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:52.443 18:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:52.443 18:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:53.383 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:18:53.383 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:53.383 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:53.383 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:53.383 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:53.383 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:53.383 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:53.383 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.383 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.383 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.383 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:53.383 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:53.383 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:53.643 00:18:53.643 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:53.643 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:53.643 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.584 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.584 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.584 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.584 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.584 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.584 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:54.584 { 00:18:54.584 "cntlid": 61, 00:18:54.584 "qid": 0, 00:18:54.584 "state": "enabled", 00:18:54.584 "thread": "nvmf_tgt_poll_group_000", 00:18:54.584 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:54.584 "listen_address": { 00:18:54.584 "trtype": "TCP", 00:18:54.584 "adrfam": "IPv4", 00:18:54.584 "traddr": "10.0.0.2", 00:18:54.584 "trsvcid": "4420" 00:18:54.584 }, 00:18:54.584 "peer_address": { 00:18:54.584 "trtype": "TCP", 00:18:54.584 "adrfam": "IPv4", 00:18:54.584 "traddr": "10.0.0.1", 00:18:54.584 "trsvcid": "56502" 00:18:54.584 }, 00:18:54.584 "auth": { 00:18:54.584 "state": "completed", 00:18:54.584 "digest": "sha384", 00:18:54.584 "dhgroup": "ffdhe2048" 00:18:54.584 } 00:18:54.584 } 00:18:54.584 ]' 00:18:54.584 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:54.584 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:54.584 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:54.584 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:54.584 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:54.584 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.584 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.584 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.153 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWU2YjgxN2NhNDJkYjAxNTI5ZTk0ODBkOTAwNjRlMzFkMmI3OTA2MjM1MjBiMzM3qySyQg==: --dhchap-ctrl-secret DHHC-1:01:ZGY3N2UwNGM0ZTg0ZTljYjM1YmRmOTYwNjg0N2NjNTOjL2kW: 00:18:55.153 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YWU2YjgxN2NhNDJkYjAxNTI5ZTk0ODBkOTAwNjRlMzFkMmI3OTA2MjM1MjBiMzM3qySyQg==: --dhchap-ctrl-secret DHHC-1:01:ZGY3N2UwNGM0ZTg0ZTljYjM1YmRmOTYwNjg0N2NjNTOjL2kW: 00:18:56.531 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.531 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.531 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:56.531 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.531 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.531 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.531 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:56.531 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:56.531 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:57.128 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:18:57.129 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:57.129 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:57.129 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:57.129 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:57.129 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:57.129 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:18:57.129 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.129 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.129 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.129 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:57.129 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:57.129 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:57.724 00:18:57.724 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:57.724 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:18:57.724 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.293 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.293 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.293 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.293 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.293 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.293 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:58.293 { 00:18:58.293 "cntlid": 63, 00:18:58.293 "qid": 0, 00:18:58.293 "state": "enabled", 00:18:58.293 "thread": "nvmf_tgt_poll_group_000", 00:18:58.293 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:58.293 "listen_address": { 00:18:58.293 "trtype": "TCP", 00:18:58.293 "adrfam": "IPv4", 00:18:58.293 "traddr": "10.0.0.2", 00:18:58.293 "trsvcid": "4420" 00:18:58.293 }, 00:18:58.293 "peer_address": { 00:18:58.293 "trtype": "TCP", 00:18:58.293 "adrfam": "IPv4", 00:18:58.293 "traddr": "10.0.0.1", 00:18:58.293 "trsvcid": "35026" 00:18:58.293 }, 00:18:58.293 "auth": { 00:18:58.293 "state": "completed", 00:18:58.293 "digest": "sha384", 00:18:58.293 "dhgroup": "ffdhe2048" 00:18:58.293 } 00:18:58.293 } 00:18:58.293 ]' 00:18:58.293 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:58.293 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:58.294 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:58.294 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:58.554 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:58.554 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.554 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.554 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.124 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGYyNzI1ZGQzYjNkNWI4ZDc3MTUyOGUxY2JmYWI5ZjBjNWViNDE5NTIyYjE5NTk0Y2VlMjExM2ExMjVmZWQ1ZCxH6WQ=: 00:18:59.124 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:NGYyNzI1ZGQzYjNkNWI4ZDc3MTUyOGUxY2JmYWI5ZjBjNWViNDE5NTIyYjE5NTk0Y2VlMjExM2ExMjVmZWQ1ZCxH6WQ=: 00:19:01.029 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:19:01.029 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.029 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:01.029 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.029 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.029 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.029 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:01.029 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:01.030 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:01.030 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:01.290 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:19:01.290 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:01.290 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:01.290 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:01.290 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:01.290 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:01.290 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.290 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.290 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.290 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.290 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.290 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.290 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.230 
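Each pass of the loop above exercises one digest/dhgroup/key-id combination end to end: the host's DH-HMAC-CHAP options are pinned with bdev_nvme_set_options, the host NQN is added to the subsystem with the key under test, and a controller is attached through the SPDK host RPC before the negotiated parameters are verified. A minimal sketch of that host/target RPC sequence for the sha384/ffdhe3072/key0 pass shown above, assuming key0 and ckey0 were registered in the keyring earlier in the run and that rpc_cmd resolves to the same rpc.py against the target's default RPC socket:

# One iteration of the auth loop as driven by target/auth.sh (sketch).
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
SUBNQN=nqn.2024-03.io.spdk:cnode0

# Pin the host application to a single digest and DH group for this pass.
$RPC -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

# Allow the host NQN on the subsystem with the key under test (target-side RPC).
$RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Attach a controller through the SPDK host, authenticating with the same keys.
$RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0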
00:19:02.230 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:02.230 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.230 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:02.488 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.488 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.488 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.488 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.488 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.488 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:02.488 { 00:19:02.488 "cntlid": 65, 00:19:02.488 "qid": 0, 00:19:02.488 "state": "enabled", 00:19:02.488 "thread": "nvmf_tgt_poll_group_000", 00:19:02.488 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:19:02.488 "listen_address": { 00:19:02.488 "trtype": "TCP", 00:19:02.488 "adrfam": "IPv4", 00:19:02.488 "traddr": "10.0.0.2", 00:19:02.488 "trsvcid": "4420" 00:19:02.488 }, 00:19:02.488 "peer_address": { 00:19:02.488 "trtype": "TCP", 00:19:02.488 "adrfam": "IPv4", 00:19:02.488 "traddr": "10.0.0.1", 00:19:02.488 "trsvcid": "35064" 00:19:02.488 }, 00:19:02.488 "auth": { 00:19:02.488 "state": "completed", 00:19:02.488 "digest": "sha384", 00:19:02.488 "dhgroup": "ffdhe3072" 00:19:02.488 } 00:19:02.488 } 00:19:02.488 ]' 00:19:02.488 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:02.488 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:02.488 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:02.488 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:02.488 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:02.751 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.751 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.751 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.013 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTZkOTdlMjcyNDQ2ZGRiYzdjZmQwZWFmOTNjNDEwODg2MTYzNzE5MTk0ODdlNDYxV+mirQ==: --dhchap-ctrl-secret DHHC-1:03:YjVjNWYxY2FiMmM2YTRhZjNkNzcyN2EyNjIxYmJlNDU3OTdhMjBhNGMzNWNmOTBmY2E5NmVmNjVhYmQ4NjRjNBNaOBo=: 00:19:03.013 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:OTZkOTdlMjcyNDQ2ZGRiYzdjZmQwZWFmOTNjNDEwODg2MTYzNzE5MTk0ODdlNDYxV+mirQ==: --dhchap-ctrl-secret DHHC-1:03:YjVjNWYxY2FiMmM2YTRhZjNkNzcyN2EyNjIxYmJlNDU3OTdhMjBhNGMzNWNmOTBmY2E5NmVmNjVhYmQ4NjRjNBNaOBo=: 00:19:04.917 18:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.176 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.176 18:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:05.176 18:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.177 18:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.177 18:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.177 18:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:05.177 18:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:05.177 18:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:05.745 18:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:19:05.745 18:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:05.745 18:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:05.745 18:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:05.745 18:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:05.745 18:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:05.745 18:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.746 18:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.746 18:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.746 18:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.746 18:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.746 18:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.746 18:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:06.312 00:19:06.312 18:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:06.312 18:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:06.312 18:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.880 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.880 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.880 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.880 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.880 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.880 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:06.880 { 00:19:06.880 "cntlid": 67, 00:19:06.880 "qid": 0, 00:19:06.880 "state": "enabled", 00:19:06.880 "thread": "nvmf_tgt_poll_group_000", 00:19:06.880 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:19:06.880 "listen_address": { 00:19:06.880 "trtype": "TCP", 00:19:06.880 "adrfam": "IPv4", 00:19:06.880 "traddr": "10.0.0.2", 00:19:06.880 "trsvcid": "4420" 00:19:06.880 }, 00:19:06.880 "peer_address": { 00:19:06.880 "trtype": "TCP", 00:19:06.880 "adrfam": "IPv4", 00:19:06.880 "traddr": "10.0.0.1", 00:19:06.880 "trsvcid": "51476" 00:19:06.880 }, 00:19:06.880 "auth": { 00:19:06.880 "state": "completed", 00:19:06.880 "digest": "sha384", 00:19:06.880 "dhgroup": "ffdhe3072" 00:19:06.880 } 00:19:06.880 } 00:19:06.880 ]' 00:19:06.880 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:06.880 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:06.880 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:06.880 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:06.880 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:06.880 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:06.880 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.880 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.817 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQ2NjdmNmNiNGM1ZmMzMmZkNjQwNDc5Njk0ZjdhYTJcdtvw: --dhchap-ctrl-secret 
DHHC-1:02:ZWRlMThkOGE1MmU3NDc0ODNhNGI0NzRhZjEyY2UwYWQ3M2MzMmI5Mjc5MDZiMGY1VEdBOQ==: 00:19:07.817 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:NTQ2NjdmNmNiNGM1ZmMzMmZkNjQwNDc5Njk0ZjdhYTJcdtvw: --dhchap-ctrl-secret DHHC-1:02:ZWRlMThkOGE1MmU3NDc0ODNhNGI0NzRhZjEyY2UwYWQ3M2MzMmI5Mjc5MDZiMGY1VEdBOQ==: 00:19:09.199 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.199 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.199 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:09.199 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.199 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.460 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.460 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:09.460 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:09.460 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:10.028 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:19:10.028 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:10.028 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:10.028 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:10.028 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:10.028 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:10.028 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.028 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.028 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.028 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.028 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.028 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.028 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.597 00:19:10.597 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:10.597 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:10.597 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.855 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.855 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.855 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.855 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.855 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.855 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:10.855 { 00:19:10.855 "cntlid": 69, 00:19:10.855 "qid": 0, 00:19:10.855 "state": "enabled", 00:19:10.855 "thread": "nvmf_tgt_poll_group_000", 00:19:10.855 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:19:10.855 "listen_address": { 00:19:10.855 "trtype": "TCP", 00:19:10.855 "adrfam": "IPv4", 00:19:10.855 "traddr": "10.0.0.2", 00:19:10.855 "trsvcid": "4420" 00:19:10.855 }, 00:19:10.855 "peer_address": { 00:19:10.855 "trtype": "TCP", 00:19:10.855 "adrfam": "IPv4", 00:19:10.855 "traddr": "10.0.0.1", 00:19:10.855 "trsvcid": "51494" 00:19:10.855 }, 00:19:10.855 "auth": { 00:19:10.855 "state": "completed", 00:19:10.855 "digest": "sha384", 00:19:10.855 "dhgroup": "ffdhe3072" 00:19:10.855 } 00:19:10.855 } 00:19:10.855 ]' 00:19:10.855 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:10.855 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:10.855 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:10.855 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:10.855 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:10.855 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.855 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.855 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:19:11.425 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWU2YjgxN2NhNDJkYjAxNTI5ZTk0ODBkOTAwNjRlMzFkMmI3OTA2MjM1MjBiMzM3qySyQg==: --dhchap-ctrl-secret DHHC-1:01:ZGY3N2UwNGM0ZTg0ZTljYjM1YmRmOTYwNjg0N2NjNTOjL2kW: 00:19:11.425 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YWU2YjgxN2NhNDJkYjAxNTI5ZTk0ODBkOTAwNjRlMzFkMmI3OTA2MjM1MjBiMzM3qySyQg==: --dhchap-ctrl-secret DHHC-1:01:ZGY3N2UwNGM0ZTg0ZTljYjM1YmRmOTYwNjg0N2NjNTOjL2kW: 00:19:12.805 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.805 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.805 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:12.805 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.805 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.805 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.805 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:12.805 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:12.806 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:13.375 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:19:13.375 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:13.376 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:13.376 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:13.376 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:13.376 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:13.376 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:19:13.376 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.376 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.376 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.376 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
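The ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) line visible above is what makes key3 behave differently from key0-key2: when no controller key is defined for a key id, the :+ expansion yields nothing, the ckey array stays empty, and both nvmf_subsystem_add_host and the controller attach run with --dhchap-key only, i.e. unidirectional authentication. A standalone illustration of that expansion; the concrete ckeys layout is populated earlier in auth.sh and is assumed here:

#!/usr/bin/env bash
# Sketch of the optional controller-key argument used by target/auth.sh.
ckeys=(ckey0 ckey1 ckey2 "")          # assumed layout: key3 has no controller key

for keyid in "${!ckeys[@]}"; do
    # ${var:+word} expands to 'word' only if var is set and non-empty, so the
    # --dhchap-ctrlr-key flag simply disappears for key ids without a ckey.
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "key$keyid -> --dhchap-key key$keyid ${ckey[*]}"
done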
00:19:13.376 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:13.376 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:13.944 00:19:13.944 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:13.944 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:13.944 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.203 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.203 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:14.203 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.203 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.461 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.461 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:14.461 { 00:19:14.461 "cntlid": 71, 00:19:14.461 "qid": 0, 00:19:14.461 "state": "enabled", 00:19:14.461 "thread": "nvmf_tgt_poll_group_000", 00:19:14.461 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:19:14.461 "listen_address": { 00:19:14.461 "trtype": "TCP", 00:19:14.461 "adrfam": "IPv4", 00:19:14.461 "traddr": "10.0.0.2", 00:19:14.461 "trsvcid": "4420" 00:19:14.461 }, 00:19:14.461 "peer_address": { 00:19:14.461 "trtype": "TCP", 00:19:14.461 "adrfam": "IPv4", 00:19:14.462 "traddr": "10.0.0.1", 00:19:14.462 "trsvcid": "51520" 00:19:14.462 }, 00:19:14.462 "auth": { 00:19:14.462 "state": "completed", 00:19:14.462 "digest": "sha384", 00:19:14.462 "dhgroup": "ffdhe3072" 00:19:14.462 } 00:19:14.462 } 00:19:14.462 ]' 00:19:14.462 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:14.462 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:14.462 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:14.462 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:14.462 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:14.462 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:14.462 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.462 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.400 18:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGYyNzI1ZGQzYjNkNWI4ZDc3MTUyOGUxY2JmYWI5ZjBjNWViNDE5NTIyYjE5NTk0Y2VlMjExM2ExMjVmZWQ1ZCxH6WQ=: 00:19:15.401 18:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:NGYyNzI1ZGQzYjNkNWI4ZDc3MTUyOGUxY2JmYWI5ZjBjNWViNDE5NTIyYjE5NTk0Y2VlMjExM2ExMjVmZWQ1ZCxH6WQ=: 00:19:17.310 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.310 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.310 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:17.310 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.310 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.310 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.310 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:17.310 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:17.310 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:17.310 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:17.310 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:19:17.310 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:17.310 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:17.310 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:17.310 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:17.310 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:17.310 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.310 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.310 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.310 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
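The in-kernel (nvme-cli) leg that appears at the start of this stretch follows the bdev-level check above. The sketch below mirrors the `nvme connect` invocation from the log; the DHHC-1 secrets are placeholders standing in for the generated test keys shown in the trace, not real key material.

```bash
# Sketch of the nvme-cli leg of each iteration, assumptions: placeholder secrets,
# rpc.py on its default target-side socket.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02
KEY='DHHC-1:02:<host key material>:'            # placeholder for the generated host key
CTRL_KEY='DHHC-1:01:<controller key material>:' # placeholder for the controller key

# Connect with bidirectional DH-HMAC-CHAP, then disconnect again.
nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" --hostid "$HOSTID" -l 0 \
    --dhchap-secret "$KEY" --dhchap-ctrl-secret "$CTRL_KEY"
nvme disconnect -n "$SUBNQN"

# Drop the host from the subsystem before the next key/dhgroup is configured.
$RPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"
```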
00:19:17.310 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.310 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.310 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:18.247 00:19:18.247 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:18.247 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:18.247 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.507 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.507 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.507 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.507 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.507 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.507 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:18.507 { 00:19:18.507 "cntlid": 73, 00:19:18.507 "qid": 0, 00:19:18.507 "state": "enabled", 00:19:18.507 "thread": "nvmf_tgt_poll_group_000", 00:19:18.507 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:19:18.507 "listen_address": { 00:19:18.507 "trtype": "TCP", 00:19:18.507 "adrfam": "IPv4", 00:19:18.507 "traddr": "10.0.0.2", 00:19:18.507 "trsvcid": "4420" 00:19:18.507 }, 00:19:18.507 "peer_address": { 00:19:18.507 "trtype": "TCP", 00:19:18.507 "adrfam": "IPv4", 00:19:18.507 "traddr": "10.0.0.1", 00:19:18.507 "trsvcid": "38022" 00:19:18.507 }, 00:19:18.507 "auth": { 00:19:18.507 "state": "completed", 00:19:18.507 "digest": "sha384", 00:19:18.507 "dhgroup": "ffdhe4096" 00:19:18.507 } 00:19:18.507 } 00:19:18.507 ]' 00:19:18.507 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:18.507 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:18.507 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:18.507 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:18.508 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:18.508 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.508 
18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.508 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.446 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTZkOTdlMjcyNDQ2ZGRiYzdjZmQwZWFmOTNjNDEwODg2MTYzNzE5MTk0ODdlNDYxV+mirQ==: --dhchap-ctrl-secret DHHC-1:03:YjVjNWYxY2FiMmM2YTRhZjNkNzcyN2EyNjIxYmJlNDU3OTdhMjBhNGMzNWNmOTBmY2E5NmVmNjVhYmQ4NjRjNBNaOBo=: 00:19:19.446 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:OTZkOTdlMjcyNDQ2ZGRiYzdjZmQwZWFmOTNjNDEwODg2MTYzNzE5MTk0ODdlNDYxV+mirQ==: --dhchap-ctrl-secret DHHC-1:03:YjVjNWYxY2FiMmM2YTRhZjNkNzcyN2EyNjIxYmJlNDU3OTdhMjBhNGMzNWNmOTBmY2E5NmVmNjVhYmQ4NjRjNBNaOBo=: 00:19:21.359 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.359 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:21.359 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:21.359 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.359 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.359 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.359 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:21.359 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:21.359 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:21.929 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:19:21.929 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:21.929 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:21.929 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:21.929 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:21.929 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:21.929 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:21.929 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.929 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.929 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.929 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:21.929 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:21.929 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:22.496 00:19:22.496 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:22.496 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:22.496 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.435 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.435 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.435 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.435 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.436 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.436 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:23.436 { 00:19:23.436 "cntlid": 75, 00:19:23.436 "qid": 0, 00:19:23.436 "state": "enabled", 00:19:23.436 "thread": "nvmf_tgt_poll_group_000", 00:19:23.436 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:19:23.436 "listen_address": { 00:19:23.436 "trtype": "TCP", 00:19:23.436 "adrfam": "IPv4", 00:19:23.436 "traddr": "10.0.0.2", 00:19:23.436 "trsvcid": "4420" 00:19:23.436 }, 00:19:23.436 "peer_address": { 00:19:23.436 "trtype": "TCP", 00:19:23.436 "adrfam": "IPv4", 00:19:23.436 "traddr": "10.0.0.1", 00:19:23.436 "trsvcid": "38046" 00:19:23.436 }, 00:19:23.436 "auth": { 00:19:23.436 "state": "completed", 00:19:23.436 "digest": "sha384", 00:19:23.436 "dhgroup": "ffdhe4096" 00:19:23.436 } 00:19:23.436 } 00:19:23.436 ]' 00:19:23.436 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:23.436 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:23.436 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:23.436 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:19:23.436 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:23.436 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.436 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.436 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.695 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQ2NjdmNmNiNGM1ZmMzMmZkNjQwNDc5Njk0ZjdhYTJcdtvw: --dhchap-ctrl-secret DHHC-1:02:ZWRlMThkOGE1MmU3NDc0ODNhNGI0NzRhZjEyY2UwYWQ3M2MzMmI5Mjc5MDZiMGY1VEdBOQ==: 00:19:23.695 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:NTQ2NjdmNmNiNGM1ZmMzMmZkNjQwNDc5Njk0ZjdhYTJcdtvw: --dhchap-ctrl-secret DHHC-1:02:ZWRlMThkOGE1MmU3NDc0ODNhNGI0NzRhZjEyY2UwYWQ3M2MzMmI5Mjc5MDZiMGY1VEdBOQ==: 00:19:25.616 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.616 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.617 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:25.617 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.617 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.617 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.617 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:25.617 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:25.617 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:26.232 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:19:26.232 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:26.232 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:26.232 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:26.232 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:26.232 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.232 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.232 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.232 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.232 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.232 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.232 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.232 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.800 00:19:26.800 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:26.800 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:26.800 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.370 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.370 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:27.370 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.370 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.370 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.370 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:27.370 { 00:19:27.370 "cntlid": 77, 00:19:27.370 "qid": 0, 00:19:27.370 "state": "enabled", 00:19:27.370 "thread": "nvmf_tgt_poll_group_000", 00:19:27.370 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:19:27.370 "listen_address": { 00:19:27.370 "trtype": "TCP", 00:19:27.370 "adrfam": "IPv4", 00:19:27.370 "traddr": "10.0.0.2", 00:19:27.370 "trsvcid": "4420" 00:19:27.370 }, 00:19:27.370 "peer_address": { 00:19:27.370 "trtype": "TCP", 00:19:27.370 "adrfam": "IPv4", 00:19:27.370 "traddr": "10.0.0.1", 00:19:27.370 "trsvcid": "41598" 00:19:27.370 }, 00:19:27.370 "auth": { 00:19:27.370 "state": "completed", 00:19:27.370 "digest": "sha384", 00:19:27.370 "dhgroup": "ffdhe4096" 00:19:27.370 } 00:19:27.370 } 00:19:27.370 ]' 00:19:27.370 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:27.370 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:27.370 18:29:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:27.370 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:27.370 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:27.370 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.370 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.370 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.938 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWU2YjgxN2NhNDJkYjAxNTI5ZTk0ODBkOTAwNjRlMzFkMmI3OTA2MjM1MjBiMzM3qySyQg==: --dhchap-ctrl-secret DHHC-1:01:ZGY3N2UwNGM0ZTg0ZTljYjM1YmRmOTYwNjg0N2NjNTOjL2kW: 00:19:27.938 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YWU2YjgxN2NhNDJkYjAxNTI5ZTk0ODBkOTAwNjRlMzFkMmI3OTA2MjM1MjBiMzM3qySyQg==: --dhchap-ctrl-secret DHHC-1:01:ZGY3N2UwNGM0ZTg0ZTljYjM1YmRmOTYwNjg0N2NjNTOjL2kW: 00:19:29.845 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:29.845 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.845 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:29.845 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.845 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.845 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.845 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:29.845 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:29.845 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:30.413 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:19:30.413 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:30.413 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:30.413 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:30.413 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:30.413 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.413 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:19:30.413 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.413 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.413 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.413 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:30.413 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:30.413 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:30.981 00:19:30.981 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:30.981 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:30.981 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.548 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.548 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.548 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.548 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.548 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.548 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:31.548 { 00:19:31.548 "cntlid": 79, 00:19:31.548 "qid": 0, 00:19:31.548 "state": "enabled", 00:19:31.548 "thread": "nvmf_tgt_poll_group_000", 00:19:31.548 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:19:31.548 "listen_address": { 00:19:31.548 "trtype": "TCP", 00:19:31.548 "adrfam": "IPv4", 00:19:31.548 "traddr": "10.0.0.2", 00:19:31.548 "trsvcid": "4420" 00:19:31.548 }, 00:19:31.548 "peer_address": { 00:19:31.548 "trtype": "TCP", 00:19:31.548 "adrfam": "IPv4", 00:19:31.548 "traddr": "10.0.0.1", 00:19:31.548 "trsvcid": "41616" 00:19:31.548 }, 00:19:31.548 "auth": { 00:19:31.548 "state": "completed", 00:19:31.548 "digest": "sha384", 00:19:31.548 "dhgroup": "ffdhe4096" 00:19:31.548 } 00:19:31.548 } 00:19:31.548 ]' 00:19:31.548 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:31.806 18:30:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:31.806 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:31.806 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:31.806 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:31.806 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.806 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.806 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.375 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGYyNzI1ZGQzYjNkNWI4ZDc3MTUyOGUxY2JmYWI5ZjBjNWViNDE5NTIyYjE5NTk0Y2VlMjExM2ExMjVmZWQ1ZCxH6WQ=: 00:19:32.375 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:NGYyNzI1ZGQzYjNkNWI4ZDc3MTUyOGUxY2JmYWI5ZjBjNWViNDE5NTIyYjE5NTk0Y2VlMjExM2ExMjVmZWQ1ZCxH6WQ=: 00:19:34.278 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.278 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.279 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:34.279 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.279 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.279 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.279 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:34.279 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:34.279 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:34.279 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:34.279 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:19:34.279 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:34.279 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:34.279 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:34.279 18:30:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:34.279 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:34.279 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.279 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.279 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.279 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.279 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.279 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.279 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.215 00:19:35.215 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:35.215 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.215 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:35.782 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.782 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.782 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.782 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.782 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.782 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:35.782 { 00:19:35.782 "cntlid": 81, 00:19:35.782 "qid": 0, 00:19:35.782 "state": "enabled", 00:19:35.782 "thread": "nvmf_tgt_poll_group_000", 00:19:35.782 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:19:35.782 "listen_address": { 00:19:35.782 "trtype": "TCP", 00:19:35.782 "adrfam": "IPv4", 00:19:35.782 "traddr": "10.0.0.2", 00:19:35.782 "trsvcid": "4420" 00:19:35.782 }, 00:19:35.782 "peer_address": { 00:19:35.782 "trtype": "TCP", 00:19:35.782 "adrfam": "IPv4", 00:19:35.782 "traddr": "10.0.0.1", 00:19:35.782 "trsvcid": "41658" 00:19:35.782 }, 00:19:35.782 "auth": { 00:19:35.782 "state": "completed", 00:19:35.782 "digest": 
"sha384", 00:19:35.782 "dhgroup": "ffdhe6144" 00:19:35.782 } 00:19:35.782 } 00:19:35.782 ]' 00:19:35.782 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:35.782 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:35.782 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:35.782 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:35.782 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:36.042 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.042 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.042 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.611 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTZkOTdlMjcyNDQ2ZGRiYzdjZmQwZWFmOTNjNDEwODg2MTYzNzE5MTk0ODdlNDYxV+mirQ==: --dhchap-ctrl-secret DHHC-1:03:YjVjNWYxY2FiMmM2YTRhZjNkNzcyN2EyNjIxYmJlNDU3OTdhMjBhNGMzNWNmOTBmY2E5NmVmNjVhYmQ4NjRjNBNaOBo=: 00:19:36.611 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:OTZkOTdlMjcyNDQ2ZGRiYzdjZmQwZWFmOTNjNDEwODg2MTYzNzE5MTk0ODdlNDYxV+mirQ==: --dhchap-ctrl-secret DHHC-1:03:YjVjNWYxY2FiMmM2YTRhZjNkNzcyN2EyNjIxYmJlNDU3OTdhMjBhNGMzNWNmOTBmY2E5NmVmNjVhYmQ4NjRjNBNaOBo=: 00:19:38.518 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.518 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.518 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:38.518 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.518 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.518 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.518 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:38.518 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:38.518 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:38.518 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:19:38.518 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:38.518 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:38.518 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:38.518 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:38.518 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:38.519 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:38.519 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.519 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.519 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.519 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:38.519 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:38.519 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.456 00:19:39.456 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:39.456 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:39.456 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.715 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.715 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.715 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.715 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.715 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.715 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:39.715 { 00:19:39.715 "cntlid": 83, 00:19:39.715 "qid": 0, 00:19:39.715 "state": "enabled", 00:19:39.715 "thread": "nvmf_tgt_poll_group_000", 00:19:39.715 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:19:39.715 "listen_address": { 00:19:39.715 "trtype": "TCP", 00:19:39.715 "adrfam": "IPv4", 00:19:39.715 "traddr": "10.0.0.2", 00:19:39.715 
"trsvcid": "4420" 00:19:39.715 }, 00:19:39.715 "peer_address": { 00:19:39.715 "trtype": "TCP", 00:19:39.715 "adrfam": "IPv4", 00:19:39.715 "traddr": "10.0.0.1", 00:19:39.715 "trsvcid": "40412" 00:19:39.715 }, 00:19:39.715 "auth": { 00:19:39.715 "state": "completed", 00:19:39.715 "digest": "sha384", 00:19:39.715 "dhgroup": "ffdhe6144" 00:19:39.715 } 00:19:39.715 } 00:19:39.715 ]' 00:19:39.715 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:39.715 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:39.715 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:39.715 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:39.715 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:39.971 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.971 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.972 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.231 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQ2NjdmNmNiNGM1ZmMzMmZkNjQwNDc5Njk0ZjdhYTJcdtvw: --dhchap-ctrl-secret DHHC-1:02:ZWRlMThkOGE1MmU3NDc0ODNhNGI0NzRhZjEyY2UwYWQ3M2MzMmI5Mjc5MDZiMGY1VEdBOQ==: 00:19:40.231 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:NTQ2NjdmNmNiNGM1ZmMzMmZkNjQwNDc5Njk0ZjdhYTJcdtvw: --dhchap-ctrl-secret DHHC-1:02:ZWRlMThkOGE1MmU3NDc0ODNhNGI0NzRhZjEyY2UwYWQ3M2MzMmI5Mjc5MDZiMGY1VEdBOQ==: 00:19:42.143 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.143 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.143 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:42.143 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.143 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.143 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.143 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:42.143 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:42.143 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:42.711 
18:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:19:42.711 18:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:42.711 18:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:42.711 18:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:42.711 18:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:42.711 18:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.711 18:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:42.711 18:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.711 18:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.711 18:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.711 18:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:42.711 18:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:42.711 18:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.090 00:19:44.090 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:44.090 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.090 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:44.661 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.661 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.661 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.661 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.661 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.661 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:44.661 { 00:19:44.661 "cntlid": 85, 00:19:44.661 "qid": 0, 00:19:44.661 "state": "enabled", 00:19:44.661 "thread": "nvmf_tgt_poll_group_000", 00:19:44.661 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:19:44.661 "listen_address": { 00:19:44.661 "trtype": "TCP", 00:19:44.661 "adrfam": "IPv4", 00:19:44.661 "traddr": "10.0.0.2", 00:19:44.661 "trsvcid": "4420" 00:19:44.661 }, 00:19:44.661 "peer_address": { 00:19:44.661 "trtype": "TCP", 00:19:44.661 "adrfam": "IPv4", 00:19:44.661 "traddr": "10.0.0.1", 00:19:44.661 "trsvcid": "40444" 00:19:44.661 }, 00:19:44.661 "auth": { 00:19:44.661 "state": "completed", 00:19:44.661 "digest": "sha384", 00:19:44.661 "dhgroup": "ffdhe6144" 00:19:44.661 } 00:19:44.661 } 00:19:44.661 ]' 00:19:44.661 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:44.661 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:44.661 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:44.661 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:44.661 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:44.661 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.661 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.661 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.232 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWU2YjgxN2NhNDJkYjAxNTI5ZTk0ODBkOTAwNjRlMzFkMmI3OTA2MjM1MjBiMzM3qySyQg==: --dhchap-ctrl-secret DHHC-1:01:ZGY3N2UwNGM0ZTg0ZTljYjM1YmRmOTYwNjg0N2NjNTOjL2kW: 00:19:45.232 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YWU2YjgxN2NhNDJkYjAxNTI5ZTk0ODBkOTAwNjRlMzFkMmI3OTA2MjM1MjBiMzM3qySyQg==: --dhchap-ctrl-secret DHHC-1:01:ZGY3N2UwNGM0ZTg0ZTljYjM1YmRmOTYwNjg0N2NjNTOjL2kW: 00:19:47.770 18:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.770 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.770 18:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:47.770 18:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.770 18:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.770 18:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.770 18:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:47.770 18:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:47.770 18:30:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:48.029 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:19:48.029 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:48.029 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:48.029 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:48.029 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:48.030 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.030 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:19:48.030 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.030 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.030 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.030 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:48.030 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:48.030 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:49.410 00:19:49.410 18:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:49.410 18:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.410 18:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:49.669 18:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.669 18:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.669 18:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.669 18:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.669 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.669 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:49.669 { 00:19:49.669 "cntlid": 87, 
00:19:49.669 "qid": 0, 00:19:49.669 "state": "enabled", 00:19:49.669 "thread": "nvmf_tgt_poll_group_000", 00:19:49.669 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:19:49.669 "listen_address": { 00:19:49.669 "trtype": "TCP", 00:19:49.669 "adrfam": "IPv4", 00:19:49.669 "traddr": "10.0.0.2", 00:19:49.669 "trsvcid": "4420" 00:19:49.669 }, 00:19:49.669 "peer_address": { 00:19:49.669 "trtype": "TCP", 00:19:49.669 "adrfam": "IPv4", 00:19:49.669 "traddr": "10.0.0.1", 00:19:49.669 "trsvcid": "55454" 00:19:49.669 }, 00:19:49.669 "auth": { 00:19:49.669 "state": "completed", 00:19:49.669 "digest": "sha384", 00:19:49.669 "dhgroup": "ffdhe6144" 00:19:49.669 } 00:19:49.669 } 00:19:49.669 ]' 00:19:49.669 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:49.669 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:49.669 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:49.669 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:49.669 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:49.933 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.933 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.933 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.193 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGYyNzI1ZGQzYjNkNWI4ZDc3MTUyOGUxY2JmYWI5ZjBjNWViNDE5NTIyYjE5NTk0Y2VlMjExM2ExMjVmZWQ1ZCxH6WQ=: 00:19:50.193 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:NGYyNzI1ZGQzYjNkNWI4ZDc3MTUyOGUxY2JmYWI5ZjBjNWViNDE5NTIyYjE5NTk0Y2VlMjExM2ExMjVmZWQ1ZCxH6WQ=: 00:19:52.103 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.103 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.103 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:52.103 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.103 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.103 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.103 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:52.103 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:52.103 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:52.104 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:53.042 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:19:53.042 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:53.042 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:53.042 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:53.042 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:53.043 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:53.043 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:53.043 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.043 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.043 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.043 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:53.043 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:53.043 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:54.951 00:19:54.951 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:54.951 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:54.951 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.951 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.951 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.951 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.951 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.951 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.951 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:54.951 { 00:19:54.951 "cntlid": 89, 00:19:54.951 "qid": 0, 00:19:54.951 "state": "enabled", 00:19:54.951 "thread": "nvmf_tgt_poll_group_000", 00:19:54.951 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:19:54.951 "listen_address": { 00:19:54.951 "trtype": "TCP", 00:19:54.951 "adrfam": "IPv4", 00:19:54.951 "traddr": "10.0.0.2", 00:19:54.951 "trsvcid": "4420" 00:19:54.951 }, 00:19:54.951 "peer_address": { 00:19:54.951 "trtype": "TCP", 00:19:54.951 "adrfam": "IPv4", 00:19:54.951 "traddr": "10.0.0.1", 00:19:54.951 "trsvcid": "55476" 00:19:54.951 }, 00:19:54.951 "auth": { 00:19:54.951 "state": "completed", 00:19:54.951 "digest": "sha384", 00:19:54.951 "dhgroup": "ffdhe8192" 00:19:54.951 } 00:19:54.951 } 00:19:54.951 ]' 00:19:54.952 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:55.214 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:55.214 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:55.214 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:55.214 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:55.214 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.214 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:55.214 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.837 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTZkOTdlMjcyNDQ2ZGRiYzdjZmQwZWFmOTNjNDEwODg2MTYzNzE5MTk0ODdlNDYxV+mirQ==: --dhchap-ctrl-secret DHHC-1:03:YjVjNWYxY2FiMmM2YTRhZjNkNzcyN2EyNjIxYmJlNDU3OTdhMjBhNGMzNWNmOTBmY2E5NmVmNjVhYmQ4NjRjNBNaOBo=: 00:19:55.837 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:OTZkOTdlMjcyNDQ2ZGRiYzdjZmQwZWFmOTNjNDEwODg2MTYzNzE5MTk0ODdlNDYxV+mirQ==: --dhchap-ctrl-secret DHHC-1:03:YjVjNWYxY2FiMmM2YTRhZjNkNzcyN2EyNjIxYmJlNDU3OTdhMjBhNGMzNWNmOTBmY2E5NmVmNjVhYmQ4NjRjNBNaOBo=: 00:19:57.745 18:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.745 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.745 18:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:57.745 18:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.745 18:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.745 18:30:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.745 18:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:57.745 18:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:57.745 18:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:58.004 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:19:58.004 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:58.004 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:58.004 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:58.004 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:58.004 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.004 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.004 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.004 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.004 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.004 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.004 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.004 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:59.915 00:19:59.915 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:59.916 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:59.916 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.485 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.485 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:20:00.485 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.485 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.485 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.485 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:00.485 { 00:20:00.485 "cntlid": 91, 00:20:00.485 "qid": 0, 00:20:00.485 "state": "enabled", 00:20:00.485 "thread": "nvmf_tgt_poll_group_000", 00:20:00.485 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:00.485 "listen_address": { 00:20:00.485 "trtype": "TCP", 00:20:00.485 "adrfam": "IPv4", 00:20:00.485 "traddr": "10.0.0.2", 00:20:00.485 "trsvcid": "4420" 00:20:00.485 }, 00:20:00.485 "peer_address": { 00:20:00.485 "trtype": "TCP", 00:20:00.485 "adrfam": "IPv4", 00:20:00.486 "traddr": "10.0.0.1", 00:20:00.486 "trsvcid": "58488" 00:20:00.486 }, 00:20:00.486 "auth": { 00:20:00.486 "state": "completed", 00:20:00.486 "digest": "sha384", 00:20:00.486 "dhgroup": "ffdhe8192" 00:20:00.486 } 00:20:00.486 } 00:20:00.486 ]' 00:20:00.486 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:00.486 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:00.486 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:00.486 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:00.486 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:00.486 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.486 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.486 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.424 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQ2NjdmNmNiNGM1ZmMzMmZkNjQwNDc5Njk0ZjdhYTJcdtvw: --dhchap-ctrl-secret DHHC-1:02:ZWRlMThkOGE1MmU3NDc0ODNhNGI0NzRhZjEyY2UwYWQ3M2MzMmI5Mjc5MDZiMGY1VEdBOQ==: 00:20:01.424 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:NTQ2NjdmNmNiNGM1ZmMzMmZkNjQwNDc5Njk0ZjdhYTJcdtvw: --dhchap-ctrl-secret DHHC-1:02:ZWRlMThkOGE1MmU3NDc0ODNhNGI0NzRhZjEyY2UwYWQ3M2MzMmI5Mjc5MDZiMGY1VEdBOQ==: 00:20:03.333 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.333 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.333 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:03.333 18:30:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.333 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.333 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.334 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:03.334 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:03.334 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:03.593 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:20:03.593 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:03.593 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:03.593 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:03.593 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:03.593 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.593 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:03.593 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.593 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.593 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.593 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:03.593 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:03.593 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:05.501 00:20:05.501 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:05.501 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:05.501 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.760 18:30:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.761 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.761 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.761 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.761 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.761 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:05.761 { 00:20:05.761 "cntlid": 93, 00:20:05.761 "qid": 0, 00:20:05.761 "state": "enabled", 00:20:05.761 "thread": "nvmf_tgt_poll_group_000", 00:20:05.761 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:05.761 "listen_address": { 00:20:05.761 "trtype": "TCP", 00:20:05.761 "adrfam": "IPv4", 00:20:05.761 "traddr": "10.0.0.2", 00:20:05.761 "trsvcid": "4420" 00:20:05.761 }, 00:20:05.761 "peer_address": { 00:20:05.761 "trtype": "TCP", 00:20:05.761 "adrfam": "IPv4", 00:20:05.761 "traddr": "10.0.0.1", 00:20:05.761 "trsvcid": "58522" 00:20:05.761 }, 00:20:05.761 "auth": { 00:20:05.761 "state": "completed", 00:20:05.761 "digest": "sha384", 00:20:05.761 "dhgroup": "ffdhe8192" 00:20:05.761 } 00:20:05.761 } 00:20:05.761 ]' 00:20:05.761 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:06.021 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:06.021 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:06.021 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:06.021 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:06.021 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:06.021 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:06.021 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.962 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWU2YjgxN2NhNDJkYjAxNTI5ZTk0ODBkOTAwNjRlMzFkMmI3OTA2MjM1MjBiMzM3qySyQg==: --dhchap-ctrl-secret DHHC-1:01:ZGY3N2UwNGM0ZTg0ZTljYjM1YmRmOTYwNjg0N2NjNTOjL2kW: 00:20:06.962 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YWU2YjgxN2NhNDJkYjAxNTI5ZTk0ODBkOTAwNjRlMzFkMmI3OTA2MjM1MjBiMzM3qySyQg==: --dhchap-ctrl-secret DHHC-1:01:ZGY3N2UwNGM0ZTg0ZTljYjM1YmRmOTYwNjg0N2NjNTOjL2kW: 00:20:08.343 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:08.343 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:08.343 18:30:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:08.343 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.343 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.343 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.343 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:08.343 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:08.343 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:08.911 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:20:08.911 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:08.911 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:08.911 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:08.911 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:08.911 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:08.911 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:20:08.911 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.911 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.911 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.911 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:08.911 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:08.911 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:10.823 00:20:10.823 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:10.823 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:10.823 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.391 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.391 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:11.391 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.391 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.391 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.391 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:11.391 { 00:20:11.391 "cntlid": 95, 00:20:11.391 "qid": 0, 00:20:11.391 "state": "enabled", 00:20:11.391 "thread": "nvmf_tgt_poll_group_000", 00:20:11.391 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:11.391 "listen_address": { 00:20:11.391 "trtype": "TCP", 00:20:11.391 "adrfam": "IPv4", 00:20:11.391 "traddr": "10.0.0.2", 00:20:11.391 "trsvcid": "4420" 00:20:11.391 }, 00:20:11.391 "peer_address": { 00:20:11.391 "trtype": "TCP", 00:20:11.391 "adrfam": "IPv4", 00:20:11.391 "traddr": "10.0.0.1", 00:20:11.391 "trsvcid": "56686" 00:20:11.391 }, 00:20:11.391 "auth": { 00:20:11.391 "state": "completed", 00:20:11.391 "digest": "sha384", 00:20:11.391 "dhgroup": "ffdhe8192" 00:20:11.391 } 00:20:11.391 } 00:20:11.391 ]' 00:20:11.391 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:11.391 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:11.391 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:11.391 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:11.392 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:11.392 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.392 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.392 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.961 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGYyNzI1ZGQzYjNkNWI4ZDc3MTUyOGUxY2JmYWI5ZjBjNWViNDE5NTIyYjE5NTk0Y2VlMjExM2ExMjVmZWQ1ZCxH6WQ=: 00:20:11.961 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:NGYyNzI1ZGQzYjNkNWI4ZDc3MTUyOGUxY2JmYWI5ZjBjNWViNDE5NTIyYjE5NTk0Y2VlMjExM2ExMjVmZWQ1ZCxH6WQ=: 00:20:13.869 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:13.869 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:13.869 18:30:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:13.869 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.869 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.869 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.869 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:13.869 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:13.869 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:13.869 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:13.869 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:14.437 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:20:14.437 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:14.437 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:14.438 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:14.438 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:14.438 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.438 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.438 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.438 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.438 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.438 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.438 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.438 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.696 00:20:14.696 
18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:14.696 18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:14.696 18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:15.262 18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.262 18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:15.262 18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.262 18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.262 18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.262 18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:15.262 { 00:20:15.262 "cntlid": 97, 00:20:15.262 "qid": 0, 00:20:15.262 "state": "enabled", 00:20:15.262 "thread": "nvmf_tgt_poll_group_000", 00:20:15.262 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:15.262 "listen_address": { 00:20:15.262 "trtype": "TCP", 00:20:15.262 "adrfam": "IPv4", 00:20:15.262 "traddr": "10.0.0.2", 00:20:15.262 "trsvcid": "4420" 00:20:15.262 }, 00:20:15.262 "peer_address": { 00:20:15.262 "trtype": "TCP", 00:20:15.262 "adrfam": "IPv4", 00:20:15.262 "traddr": "10.0.0.1", 00:20:15.262 "trsvcid": "56702" 00:20:15.262 }, 00:20:15.262 "auth": { 00:20:15.262 "state": "completed", 00:20:15.262 "digest": "sha512", 00:20:15.262 "dhgroup": "null" 00:20:15.262 } 00:20:15.262 } 00:20:15.262 ]' 00:20:15.262 18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:15.262 18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:15.262 18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:15.262 18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:15.262 18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:15.262 18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:15.262 18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.262 18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.829 18:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTZkOTdlMjcyNDQ2ZGRiYzdjZmQwZWFmOTNjNDEwODg2MTYzNzE5MTk0ODdlNDYxV+mirQ==: --dhchap-ctrl-secret DHHC-1:03:YjVjNWYxY2FiMmM2YTRhZjNkNzcyN2EyNjIxYmJlNDU3OTdhMjBhNGMzNWNmOTBmY2E5NmVmNjVhYmQ4NjRjNBNaOBo=: 00:20:15.829 18:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid 
cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:OTZkOTdlMjcyNDQ2ZGRiYzdjZmQwZWFmOTNjNDEwODg2MTYzNzE5MTk0ODdlNDYxV+mirQ==: --dhchap-ctrl-secret DHHC-1:03:YjVjNWYxY2FiMmM2YTRhZjNkNzcyN2EyNjIxYmJlNDU3OTdhMjBhNGMzNWNmOTBmY2E5NmVmNjVhYmQ4NjRjNBNaOBo=: 00:20:17.738 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.738 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.738 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:17.738 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.738 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.738 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.738 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:17.738 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:17.738 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:18.308 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:20:18.309 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:18.309 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:18.309 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:18.309 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:18.309 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:18.309 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:18.309 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.309 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.309 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.309 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:18.309 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:18.309 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:18.876 00:20:18.876 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:18.876 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.876 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:19.134 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.134 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.134 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.134 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.134 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.134 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:19.134 { 00:20:19.134 "cntlid": 99, 00:20:19.134 "qid": 0, 00:20:19.134 "state": "enabled", 00:20:19.134 "thread": "nvmf_tgt_poll_group_000", 00:20:19.134 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:19.134 "listen_address": { 00:20:19.134 "trtype": "TCP", 00:20:19.134 "adrfam": "IPv4", 00:20:19.134 "traddr": "10.0.0.2", 00:20:19.134 "trsvcid": "4420" 00:20:19.134 }, 00:20:19.134 "peer_address": { 00:20:19.134 "trtype": "TCP", 00:20:19.134 "adrfam": "IPv4", 00:20:19.134 "traddr": "10.0.0.1", 00:20:19.134 "trsvcid": "39586" 00:20:19.134 }, 00:20:19.134 "auth": { 00:20:19.134 "state": "completed", 00:20:19.134 "digest": "sha512", 00:20:19.134 "dhgroup": "null" 00:20:19.134 } 00:20:19.134 } 00:20:19.134 ]' 00:20:19.134 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:19.393 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:19.393 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:19.393 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:19.393 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:19.393 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.393 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.393 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.961 18:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQ2NjdmNmNiNGM1ZmMzMmZkNjQwNDc5Njk0ZjdhYTJcdtvw: --dhchap-ctrl-secret DHHC-1:02:ZWRlMThkOGE1MmU3NDc0ODNhNGI0NzRhZjEyY2UwYWQ3M2MzMmI5Mjc5MDZiMGY1VEdBOQ==: 00:20:19.961 18:30:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:NTQ2NjdmNmNiNGM1ZmMzMmZkNjQwNDc5Njk0ZjdhYTJcdtvw: --dhchap-ctrl-secret DHHC-1:02:ZWRlMThkOGE1MmU3NDc0ODNhNGI0NzRhZjEyY2UwYWQ3M2MzMmI5Mjc5MDZiMGY1VEdBOQ==: 00:20:21.869 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:21.869 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:21.869 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:21.869 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.869 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.869 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.869 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:21.869 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:21.869 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:22.127 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:20:22.127 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:22.127 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:22.127 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:22.127 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:22.127 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.127 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:22.127 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.127 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.127 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.127 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:22.127 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
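The entries above and below are one connect_authenticate iteration of auth.sh: the host-side bdev layer is pinned to a single digest and DH group, the DH-HMAC-CHAP key is registered on the subsystem, a controller is attached over TCP (which is where authentication actually runs), and the new qpair's auth block is checked with jq before everything is torn down. A minimal stand-alone sketch of that sequence for the sha512 / null / key2 case shown here, assuming SPDK's scripts/rpc.py is reachable as rpc.py, that key2/ckey2 were loaded into the keyring earlier in the script (not shown in this excerpt), and that host-side RPCs go to /var/tmp/host.sock while target-side RPCs use the default socket, as in this run:
# Host side: restrict negotiation to one digest and one DH group.
rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
# Target side: allow this host NQN with DH-HMAC-CHAP key2 (bidirectional via ckey2).
rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2
# Host side: attach a controller; authentication is performed during connect.
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
# Target side: the qpair should report digest=sha512, dhgroup=null, state=completed.
rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'
# Host side: detach before the next digest/dhgroup/key combination.
rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0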
00:20:22.127 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:22.696 00:20:22.696 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:22.696 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:22.696 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.957 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.957 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:22.957 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.957 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.957 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.957 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:22.957 { 00:20:22.957 "cntlid": 101, 00:20:22.957 "qid": 0, 00:20:22.957 "state": "enabled", 00:20:22.957 "thread": "nvmf_tgt_poll_group_000", 00:20:22.957 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:22.957 "listen_address": { 00:20:22.957 "trtype": "TCP", 00:20:22.957 "adrfam": "IPv4", 00:20:22.957 "traddr": "10.0.0.2", 00:20:22.957 "trsvcid": "4420" 00:20:22.957 }, 00:20:22.957 "peer_address": { 00:20:22.957 "trtype": "TCP", 00:20:22.957 "adrfam": "IPv4", 00:20:22.957 "traddr": "10.0.0.1", 00:20:22.957 "trsvcid": "39618" 00:20:22.957 }, 00:20:22.957 "auth": { 00:20:22.957 "state": "completed", 00:20:22.957 "digest": "sha512", 00:20:22.957 "dhgroup": "null" 00:20:22.957 } 00:20:22.957 } 00:20:22.957 ]' 00:20:22.957 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:23.217 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:23.217 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:23.217 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:23.217 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:23.217 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.217 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.217 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.784 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:YWU2YjgxN2NhNDJkYjAxNTI5ZTk0ODBkOTAwNjRlMzFkMmI3OTA2MjM1MjBiMzM3qySyQg==: --dhchap-ctrl-secret DHHC-1:01:ZGY3N2UwNGM0ZTg0ZTljYjM1YmRmOTYwNjg0N2NjNTOjL2kW: 00:20:23.785 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YWU2YjgxN2NhNDJkYjAxNTI5ZTk0ODBkOTAwNjRlMzFkMmI3OTA2MjM1MjBiMzM3qySyQg==: --dhchap-ctrl-secret DHHC-1:01:ZGY3N2UwNGM0ZTg0ZTljYjM1YmRmOTYwNjg0N2NjNTOjL2kW: 00:20:25.744 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.744 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.744 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:25.744 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.744 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.744 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.744 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:25.744 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:25.744 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:25.744 18:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:20:25.744 18:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:25.744 18:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:25.744 18:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:25.744 18:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:25.744 18:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.744 18:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:20:25.744 18:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.744 18:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.744 18:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.744 18:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:25.744 18:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:25.745 18:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:26.311 00:20:26.311 18:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:26.311 18:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:26.311 18:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.570 18:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.570 18:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.570 18:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.570 18:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.570 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.570 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:26.570 { 00:20:26.570 "cntlid": 103, 00:20:26.570 "qid": 0, 00:20:26.570 "state": "enabled", 00:20:26.570 "thread": "nvmf_tgt_poll_group_000", 00:20:26.570 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:26.570 "listen_address": { 00:20:26.570 "trtype": "TCP", 00:20:26.570 "adrfam": "IPv4", 00:20:26.570 "traddr": "10.0.0.2", 00:20:26.570 "trsvcid": "4420" 00:20:26.570 }, 00:20:26.570 "peer_address": { 00:20:26.570 "trtype": "TCP", 00:20:26.570 "adrfam": "IPv4", 00:20:26.570 "traddr": "10.0.0.1", 00:20:26.570 "trsvcid": "52816" 00:20:26.570 }, 00:20:26.570 "auth": { 00:20:26.570 "state": "completed", 00:20:26.570 "digest": "sha512", 00:20:26.570 "dhgroup": "null" 00:20:26.570 } 00:20:26.570 } 00:20:26.570 ]' 00:20:26.570 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:26.570 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:26.570 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:26.829 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:26.829 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:26.829 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.829 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.829 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.397 18:30:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGYyNzI1ZGQzYjNkNWI4ZDc3MTUyOGUxY2JmYWI5ZjBjNWViNDE5NTIyYjE5NTk0Y2VlMjExM2ExMjVmZWQ1ZCxH6WQ=: 00:20:27.397 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:NGYyNzI1ZGQzYjNkNWI4ZDc3MTUyOGUxY2JmYWI5ZjBjNWViNDE5NTIyYjE5NTk0Y2VlMjExM2ExMjVmZWQ1ZCxH6WQ=: 00:20:29.302 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.302 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.302 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:29.302 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.302 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.302 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.302 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:29.303 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:29.303 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:29.303 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:29.303 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:20:29.303 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:29.303 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:29.303 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:29.303 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:29.303 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.303 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.303 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.303 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.303 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.303 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
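Besides the host-bdev path, every iteration in this trace also drives the kernel initiator: nvme-cli connects to the same listener with the DHHC-1 secrets passed in-band, disconnects, and the host entry is removed from the subsystem so the next combination can re-add it. A sketch of that half of the loop, reusing the 10.0.0.2:4420 listener and NQNs from this run; KEY and CTRL_KEY are hypothetical placeholders for whichever DHHC-1:xx:... secrets the current loop key maps to (the key3 iterations in this trace pass only --dhchap-secret):
# Kernel initiator: in-band DH-HMAC-CHAP during connect.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret "$KEY" --dhchap-ctrl-secret "$CTRL_KEY"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
# Target side: drop the host entry so the next iteration can re-add it with fresh settings.
rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02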
00:20:29.303 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.303 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.872 00:20:29.872 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:29.872 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:29.872 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.439 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.439 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.439 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.439 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.439 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.439 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:30.439 { 00:20:30.439 "cntlid": 105, 00:20:30.439 "qid": 0, 00:20:30.439 "state": "enabled", 00:20:30.439 "thread": "nvmf_tgt_poll_group_000", 00:20:30.439 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:30.439 "listen_address": { 00:20:30.439 "trtype": "TCP", 00:20:30.439 "adrfam": "IPv4", 00:20:30.439 "traddr": "10.0.0.2", 00:20:30.439 "trsvcid": "4420" 00:20:30.439 }, 00:20:30.439 "peer_address": { 00:20:30.439 "trtype": "TCP", 00:20:30.439 "adrfam": "IPv4", 00:20:30.439 "traddr": "10.0.0.1", 00:20:30.439 "trsvcid": "52842" 00:20:30.439 }, 00:20:30.439 "auth": { 00:20:30.439 "state": "completed", 00:20:30.439 "digest": "sha512", 00:20:30.439 "dhgroup": "ffdhe2048" 00:20:30.439 } 00:20:30.439 } 00:20:30.439 ]' 00:20:30.439 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:30.439 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:30.439 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:30.699 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:30.699 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:30.699 18:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.699 18:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.699 18:30:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.266 18:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTZkOTdlMjcyNDQ2ZGRiYzdjZmQwZWFmOTNjNDEwODg2MTYzNzE5MTk0ODdlNDYxV+mirQ==: --dhchap-ctrl-secret DHHC-1:03:YjVjNWYxY2FiMmM2YTRhZjNkNzcyN2EyNjIxYmJlNDU3OTdhMjBhNGMzNWNmOTBmY2E5NmVmNjVhYmQ4NjRjNBNaOBo=: 00:20:31.266 18:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:OTZkOTdlMjcyNDQ2ZGRiYzdjZmQwZWFmOTNjNDEwODg2MTYzNzE5MTk0ODdlNDYxV+mirQ==: --dhchap-ctrl-secret DHHC-1:03:YjVjNWYxY2FiMmM2YTRhZjNkNzcyN2EyNjIxYmJlNDU3OTdhMjBhNGMzNWNmOTBmY2E5NmVmNjVhYmQ4NjRjNBNaOBo=: 00:20:33.174 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.174 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.174 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:33.174 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.174 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.174 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.174 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:33.174 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:33.174 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:33.741 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:20:33.741 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:33.741 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:33.741 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:33.741 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:33.741 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.741 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.741 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.741 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:33.741 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.741 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.741 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.741 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.000 00:20:34.000 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:34.000 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:34.000 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.568 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.568 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.568 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.568 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.568 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.568 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:34.568 { 00:20:34.568 "cntlid": 107, 00:20:34.568 "qid": 0, 00:20:34.568 "state": "enabled", 00:20:34.568 "thread": "nvmf_tgt_poll_group_000", 00:20:34.568 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:34.568 "listen_address": { 00:20:34.568 "trtype": "TCP", 00:20:34.568 "adrfam": "IPv4", 00:20:34.568 "traddr": "10.0.0.2", 00:20:34.568 "trsvcid": "4420" 00:20:34.568 }, 00:20:34.568 "peer_address": { 00:20:34.568 "trtype": "TCP", 00:20:34.568 "adrfam": "IPv4", 00:20:34.568 "traddr": "10.0.0.1", 00:20:34.568 "trsvcid": "52864" 00:20:34.568 }, 00:20:34.568 "auth": { 00:20:34.568 "state": "completed", 00:20:34.568 "digest": "sha512", 00:20:34.568 "dhgroup": "ffdhe2048" 00:20:34.568 } 00:20:34.568 } 00:20:34.568 ]' 00:20:34.568 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:34.568 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:34.568 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:34.568 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:34.568 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:20:34.568 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.568 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.568 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.136 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQ2NjdmNmNiNGM1ZmMzMmZkNjQwNDc5Njk0ZjdhYTJcdtvw: --dhchap-ctrl-secret DHHC-1:02:ZWRlMThkOGE1MmU3NDc0ODNhNGI0NzRhZjEyY2UwYWQ3M2MzMmI5Mjc5MDZiMGY1VEdBOQ==: 00:20:35.136 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:NTQ2NjdmNmNiNGM1ZmMzMmZkNjQwNDc5Njk0ZjdhYTJcdtvw: --dhchap-ctrl-secret DHHC-1:02:ZWRlMThkOGE1MmU3NDc0ODNhNGI0NzRhZjEyY2UwYWQ3M2MzMmI5Mjc5MDZiMGY1VEdBOQ==: 00:20:37.046 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:37.046 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:37.046 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:37.046 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.046 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.046 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.046 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:37.046 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:37.046 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:37.984 18:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:20:37.984 18:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:37.984 18:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:37.984 18:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:37.984 18:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:37.984 18:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:37.984 18:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
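After each attach, the script does not just trust the successful connect: bdev_nvme_get_controllers confirms the controller name, and nvmf_subsystem_get_qpairs is filtered with jq to assert that the qpair's auth block reports the expected digest, dhgroup and a "completed" state, before bdev_nvme_detach_controller tears the connection back down. A compressed sketch of those checks, reusing the exact jq filters from the trace and the $rpc, $hostsock and $subnqn variables from the sketch above (an assumption, since the real script wraps them in hostrpc/rpc_cmd helpers):

# Hedged sketch of the verification step seen in the trace (target/auth.sh@73-78).
# With "set -e" in effect, any mismatched [[ ]] test aborts the run, which is the
# same effect the script's assertions have.
name=$("$rpc" -s "$hostsock" bdev_nvme_get_controllers | jq -r '.[].name')
[[ "$name" == nvme0 ]]

qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
[[ "$(jq -r '.[0].auth.digest'  <<< "$qpairs")" == sha512    ]]
[[ "$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")" == ffdhe2048 ]]
[[ "$(jq -r '.[0].auth.state'   <<< "$qpairs")" == completed ]]

# Detach the bdev controller before the next key/dhgroup combination.
"$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0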
00:20:37.984 18:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.984 18:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.984 18:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.984 18:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.984 18:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.984 18:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:38.242 00:20:38.242 18:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:38.242 18:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:38.242 18:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:38.501 18:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.501 18:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:38.501 18:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.501 18:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.502 18:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.502 18:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:38.502 { 00:20:38.502 "cntlid": 109, 00:20:38.502 "qid": 0, 00:20:38.502 "state": "enabled", 00:20:38.502 "thread": "nvmf_tgt_poll_group_000", 00:20:38.502 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:38.502 "listen_address": { 00:20:38.502 "trtype": "TCP", 00:20:38.502 "adrfam": "IPv4", 00:20:38.502 "traddr": "10.0.0.2", 00:20:38.502 "trsvcid": "4420" 00:20:38.502 }, 00:20:38.502 "peer_address": { 00:20:38.502 "trtype": "TCP", 00:20:38.502 "adrfam": "IPv4", 00:20:38.502 "traddr": "10.0.0.1", 00:20:38.502 "trsvcid": "40158" 00:20:38.502 }, 00:20:38.502 "auth": { 00:20:38.502 "state": "completed", 00:20:38.502 "digest": "sha512", 00:20:38.502 "dhgroup": "ffdhe2048" 00:20:38.502 } 00:20:38.502 } 00:20:38.502 ]' 00:20:38.760 18:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:38.760 18:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:38.760 18:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:38.760 18:31:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:38.760 18:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:38.760 18:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:38.760 18:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:38.760 18:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.326 18:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWU2YjgxN2NhNDJkYjAxNTI5ZTk0ODBkOTAwNjRlMzFkMmI3OTA2MjM1MjBiMzM3qySyQg==: --dhchap-ctrl-secret DHHC-1:01:ZGY3N2UwNGM0ZTg0ZTljYjM1YmRmOTYwNjg0N2NjNTOjL2kW: 00:20:39.326 18:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YWU2YjgxN2NhNDJkYjAxNTI5ZTk0ODBkOTAwNjRlMzFkMmI3OTA2MjM1MjBiMzM3qySyQg==: --dhchap-ctrl-secret DHHC-1:01:ZGY3N2UwNGM0ZTg0ZTljYjM1YmRmOTYwNjg0N2NjNTOjL2kW: 00:20:41.234 18:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.234 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.234 18:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:41.234 18:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.234 18:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.234 18:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.234 18:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:41.234 18:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:41.234 18:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:41.804 18:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:20:41.804 18:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:41.804 18:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:41.804 18:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:41.804 18:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:41.804 18:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.804 18:31:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:20:41.804 18:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.804 18:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.804 18:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.804 18:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:41.804 18:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:41.804 18:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:42.740 00:20:42.740 18:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:42.740 18:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.740 18:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:42.999 18:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.999 18:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.999 18:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.999 18:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.999 18:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.999 18:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:42.999 { 00:20:42.999 "cntlid": 111, 00:20:42.999 "qid": 0, 00:20:42.999 "state": "enabled", 00:20:42.999 "thread": "nvmf_tgt_poll_group_000", 00:20:42.999 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:42.999 "listen_address": { 00:20:42.999 "trtype": "TCP", 00:20:42.999 "adrfam": "IPv4", 00:20:42.999 "traddr": "10.0.0.2", 00:20:42.999 "trsvcid": "4420" 00:20:42.999 }, 00:20:42.999 "peer_address": { 00:20:42.999 "trtype": "TCP", 00:20:42.999 "adrfam": "IPv4", 00:20:42.999 "traddr": "10.0.0.1", 00:20:42.999 "trsvcid": "40184" 00:20:42.999 }, 00:20:42.999 "auth": { 00:20:42.999 "state": "completed", 00:20:42.999 "digest": "sha512", 00:20:42.999 "dhgroup": "ffdhe2048" 00:20:42.999 } 00:20:42.999 } 00:20:42.999 ]' 00:20:42.999 18:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:42.999 18:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:42.999 
18:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:42.999 18:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:42.999 18:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:42.999 18:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.999 18:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.999 18:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.938 18:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGYyNzI1ZGQzYjNkNWI4ZDc3MTUyOGUxY2JmYWI5ZjBjNWViNDE5NTIyYjE5NTk0Y2VlMjExM2ExMjVmZWQ1ZCxH6WQ=: 00:20:43.938 18:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:NGYyNzI1ZGQzYjNkNWI4ZDc3MTUyOGUxY2JmYWI5ZjBjNWViNDE5NTIyYjE5NTk0Y2VlMjExM2ExMjVmZWQ1ZCxH6WQ=: 00:20:45.843 18:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.843 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.843 18:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:45.843 18:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.843 18:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.843 18:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.843 18:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:45.843 18:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:45.843 18:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:45.843 18:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:45.843 18:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:20:45.843 18:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:45.843 18:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:45.843 18:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:45.843 18:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:45.843 18:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.843 18:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.843 18:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.843 18:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.843 18:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.843 18:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.843 18:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.843 18:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.781 00:20:46.782 18:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:46.782 18:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:46.782 18:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.351 18:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.351 18:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.351 18:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.351 18:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.351 18:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.351 18:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:47.351 { 00:20:47.351 "cntlid": 113, 00:20:47.351 "qid": 0, 00:20:47.351 "state": "enabled", 00:20:47.351 "thread": "nvmf_tgt_poll_group_000", 00:20:47.351 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:47.351 "listen_address": { 00:20:47.351 "trtype": "TCP", 00:20:47.351 "adrfam": "IPv4", 00:20:47.351 "traddr": "10.0.0.2", 00:20:47.351 "trsvcid": "4420" 00:20:47.351 }, 00:20:47.351 "peer_address": { 00:20:47.351 "trtype": "TCP", 00:20:47.351 "adrfam": "IPv4", 00:20:47.351 "traddr": "10.0.0.1", 00:20:47.351 "trsvcid": "39312" 00:20:47.351 }, 00:20:47.351 "auth": { 00:20:47.351 "state": "completed", 00:20:47.351 "digest": "sha512", 00:20:47.351 "dhgroup": "ffdhe3072" 00:20:47.351 } 00:20:47.351 } 00:20:47.351 ]' 00:20:47.351 18:31:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:47.351 18:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:47.351 18:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:47.612 18:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:47.612 18:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:47.612 18:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.612 18:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.612 18:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.181 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTZkOTdlMjcyNDQ2ZGRiYzdjZmQwZWFmOTNjNDEwODg2MTYzNzE5MTk0ODdlNDYxV+mirQ==: --dhchap-ctrl-secret DHHC-1:03:YjVjNWYxY2FiMmM2YTRhZjNkNzcyN2EyNjIxYmJlNDU3OTdhMjBhNGMzNWNmOTBmY2E5NmVmNjVhYmQ4NjRjNBNaOBo=: 00:20:48.181 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:OTZkOTdlMjcyNDQ2ZGRiYzdjZmQwZWFmOTNjNDEwODg2MTYzNzE5MTk0ODdlNDYxV+mirQ==: --dhchap-ctrl-secret DHHC-1:03:YjVjNWYxY2FiMmM2YTRhZjNkNzcyN2EyNjIxYmJlNDU3OTdhMjBhNGMzNWNmOTBmY2E5NmVmNjVhYmQ4NjRjNBNaOBo=: 00:20:50.089 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.089 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.089 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:50.089 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.089 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.089 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.089 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:50.089 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:50.089 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:50.348 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:20:50.348 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:50.348 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:20:50.348 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:50.348 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:50.348 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:50.348 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:50.348 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.348 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.348 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.349 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:50.349 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:50.349 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:50.918 00:20:50.918 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:50.918 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:50.918 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.488 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.488 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.488 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.488 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.488 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.488 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:51.488 { 00:20:51.488 "cntlid": 115, 00:20:51.488 "qid": 0, 00:20:51.488 "state": "enabled", 00:20:51.488 "thread": "nvmf_tgt_poll_group_000", 00:20:51.488 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:51.488 "listen_address": { 00:20:51.488 "trtype": "TCP", 00:20:51.488 "adrfam": "IPv4", 00:20:51.488 "traddr": "10.0.0.2", 00:20:51.488 "trsvcid": "4420" 00:20:51.488 }, 00:20:51.488 "peer_address": { 00:20:51.488 "trtype": "TCP", 00:20:51.488 "adrfam": "IPv4", 
00:20:51.488 "traddr": "10.0.0.1", 00:20:51.488 "trsvcid": "39346" 00:20:51.488 }, 00:20:51.488 "auth": { 00:20:51.488 "state": "completed", 00:20:51.488 "digest": "sha512", 00:20:51.488 "dhgroup": "ffdhe3072" 00:20:51.488 } 00:20:51.488 } 00:20:51.488 ]' 00:20:51.488 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:51.488 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:51.488 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:51.488 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:51.488 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:51.488 18:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.488 18:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.488 18:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:52.427 18:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQ2NjdmNmNiNGM1ZmMzMmZkNjQwNDc5Njk0ZjdhYTJcdtvw: --dhchap-ctrl-secret DHHC-1:02:ZWRlMThkOGE1MmU3NDc0ODNhNGI0NzRhZjEyY2UwYWQ3M2MzMmI5Mjc5MDZiMGY1VEdBOQ==: 00:20:52.427 18:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:NTQ2NjdmNmNiNGM1ZmMzMmZkNjQwNDc5Njk0ZjdhYTJcdtvw: --dhchap-ctrl-secret DHHC-1:02:ZWRlMThkOGE1MmU3NDc0ODNhNGI0NzRhZjEyY2UwYWQ3M2MzMmI5Mjc5MDZiMGY1VEdBOQ==: 00:20:53.805 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.805 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.805 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:53.805 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.805 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.805 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.805 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:53.805 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:53.805 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:54.432 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:20:54.433 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:54.433 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:54.433 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:54.433 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:54.433 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:54.433 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.433 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.433 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.433 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.433 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.433 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.433 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.999 00:20:54.999 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:55.000 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.000 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:55.567 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.567 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.567 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.567 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.567 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.567 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:55.567 { 00:20:55.567 "cntlid": 117, 00:20:55.567 "qid": 0, 00:20:55.567 "state": "enabled", 00:20:55.567 "thread": "nvmf_tgt_poll_group_000", 00:20:55.567 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:55.567 "listen_address": { 00:20:55.567 "trtype": "TCP", 
00:20:55.567 "adrfam": "IPv4", 00:20:55.567 "traddr": "10.0.0.2", 00:20:55.567 "trsvcid": "4420" 00:20:55.567 }, 00:20:55.567 "peer_address": { 00:20:55.567 "trtype": "TCP", 00:20:55.567 "adrfam": "IPv4", 00:20:55.567 "traddr": "10.0.0.1", 00:20:55.567 "trsvcid": "39370" 00:20:55.567 }, 00:20:55.567 "auth": { 00:20:55.567 "state": "completed", 00:20:55.567 "digest": "sha512", 00:20:55.567 "dhgroup": "ffdhe3072" 00:20:55.567 } 00:20:55.567 } 00:20:55.567 ]' 00:20:55.567 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:55.567 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:55.567 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:55.826 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:55.826 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:55.826 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:55.826 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:55.826 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.393 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWU2YjgxN2NhNDJkYjAxNTI5ZTk0ODBkOTAwNjRlMzFkMmI3OTA2MjM1MjBiMzM3qySyQg==: --dhchap-ctrl-secret DHHC-1:01:ZGY3N2UwNGM0ZTg0ZTljYjM1YmRmOTYwNjg0N2NjNTOjL2kW: 00:20:56.393 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YWU2YjgxN2NhNDJkYjAxNTI5ZTk0ODBkOTAwNjRlMzFkMmI3OTA2MjM1MjBiMzM3qySyQg==: --dhchap-ctrl-secret DHHC-1:01:ZGY3N2UwNGM0ZTg0ZTljYjM1YmRmOTYwNjg0N2NjNTOjL2kW: 00:20:57.772 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.772 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.772 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:57.772 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.772 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.772 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.772 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:57.772 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:57.772 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:58.341 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:20:58.341 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:58.341 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:58.341 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:58.341 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:58.341 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:58.341 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:20:58.341 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.341 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.341 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.341 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:58.341 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:58.341 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:58.600 00:20:58.860 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:58.860 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:58.860 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.429 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.429 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.429 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.429 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.429 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.429 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:59.429 { 00:20:59.429 "cntlid": 119, 00:20:59.429 "qid": 0, 00:20:59.429 "state": "enabled", 00:20:59.429 "thread": "nvmf_tgt_poll_group_000", 00:20:59.429 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:59.429 "listen_address": { 00:20:59.429 "trtype": "TCP", 00:20:59.429 "adrfam": "IPv4", 00:20:59.429 "traddr": "10.0.0.2", 00:20:59.429 "trsvcid": "4420" 00:20:59.429 }, 00:20:59.429 "peer_address": { 00:20:59.429 "trtype": "TCP", 00:20:59.429 "adrfam": "IPv4", 00:20:59.429 "traddr": "10.0.0.1", 00:20:59.430 "trsvcid": "42776" 00:20:59.430 }, 00:20:59.430 "auth": { 00:20:59.430 "state": "completed", 00:20:59.430 "digest": "sha512", 00:20:59.430 "dhgroup": "ffdhe3072" 00:20:59.430 } 00:20:59.430 } 00:20:59.430 ]' 00:20:59.430 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:59.430 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:59.430 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:59.430 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:59.430 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:59.430 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.430 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.430 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.997 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGYyNzI1ZGQzYjNkNWI4ZDc3MTUyOGUxY2JmYWI5ZjBjNWViNDE5NTIyYjE5NTk0Y2VlMjExM2ExMjVmZWQ1ZCxH6WQ=: 00:20:59.997 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:NGYyNzI1ZGQzYjNkNWI4ZDc3MTUyOGUxY2JmYWI5ZjBjNWViNDE5NTIyYjE5NTk0Y2VlMjExM2ExMjVmZWQ1ZCxH6WQ=: 00:21:01.905 18:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.905 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.905 18:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:01.905 18:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.905 18:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.905 18:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.905 18:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:01.905 18:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:01.905 18:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:01.905 18:31:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:02.843 18:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:21:02.843 18:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:02.843 18:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:02.843 18:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:02.843 18:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:02.843 18:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:02.843 18:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.843 18:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.843 18:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.843 18:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.843 18:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.843 18:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.843 18:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:03.102 00:21:03.102 18:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:03.102 18:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:03.102 18:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.670 18:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.670 18:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.670 18:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.670 18:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.670 18:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.670 18:31:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:03.670 { 00:21:03.670 "cntlid": 121, 00:21:03.670 "qid": 0, 00:21:03.670 "state": "enabled", 00:21:03.670 "thread": "nvmf_tgt_poll_group_000", 00:21:03.670 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:21:03.670 "listen_address": { 00:21:03.670 "trtype": "TCP", 00:21:03.670 "adrfam": "IPv4", 00:21:03.670 "traddr": "10.0.0.2", 00:21:03.670 "trsvcid": "4420" 00:21:03.670 }, 00:21:03.670 "peer_address": { 00:21:03.670 "trtype": "TCP", 00:21:03.670 "adrfam": "IPv4", 00:21:03.670 "traddr": "10.0.0.1", 00:21:03.670 "trsvcid": "42802" 00:21:03.670 }, 00:21:03.670 "auth": { 00:21:03.670 "state": "completed", 00:21:03.670 "digest": "sha512", 00:21:03.670 "dhgroup": "ffdhe4096" 00:21:03.670 } 00:21:03.670 } 00:21:03.670 ]' 00:21:03.670 18:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:03.670 18:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:03.671 18:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:03.671 18:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:03.671 18:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:03.671 18:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.671 18:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.671 18:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.606 18:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTZkOTdlMjcyNDQ2ZGRiYzdjZmQwZWFmOTNjNDEwODg2MTYzNzE5MTk0ODdlNDYxV+mirQ==: --dhchap-ctrl-secret DHHC-1:03:YjVjNWYxY2FiMmM2YTRhZjNkNzcyN2EyNjIxYmJlNDU3OTdhMjBhNGMzNWNmOTBmY2E5NmVmNjVhYmQ4NjRjNBNaOBo=: 00:21:04.606 18:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:OTZkOTdlMjcyNDQ2ZGRiYzdjZmQwZWFmOTNjNDEwODg2MTYzNzE5MTk0ODdlNDYxV+mirQ==: --dhchap-ctrl-secret DHHC-1:03:YjVjNWYxY2FiMmM2YTRhZjNkNzcyN2EyNjIxYmJlNDU3OTdhMjBhNGMzNWNmOTBmY2E5NmVmNjVhYmQ4NjRjNBNaOBo=: 00:21:06.515 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.515 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.515 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:06.515 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.515 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.515 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
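At this point the trace has completed one full connect_authenticate pass for sha512 with ffdhe4096: set the host bdev options, allow the host NQN on the subsystem with matching DH-HMAC-CHAP keys, attach a controller so the handshake runs, verify the qpair's auth block, detach, then repeat the handshake through the kernel nvme-cli initiator before removing the host. The following is only a condensed sketch of that cycle, using the RPCs and flags visible in the trace; rpc_cmd stands in for the test framework's target-side rpc.py wrapper, and the DHHC-1 secret variables are placeholders (the actual secrets appear in the surrounding log lines).

  # Sketch of one sha512/ffdhe4096 DH-HMAC-CHAP verification cycle (illustrative, not part of the run).
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
  subnqn=nqn.2024-03.io.spdk:cnode0
  # Host side: restrict the initiator to the digest/dhgroup under test.
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
  # Target side: allow the host NQN with the key pair for this iteration.
  rpc_cmd nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # Host side: attach a controller, which performs the DH-HMAC-CHAP handshake.
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q $hostnqn -n $subnqn -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # Target side: confirm the qpair negotiated the expected parameters.
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs $subnqn)
  jq -r '.[0].auth.digest'  <<< "$qpairs"   # expect sha512
  jq -r '.[0].auth.dhgroup' <<< "$qpairs"   # expect ffdhe4096
  jq -r '.[0].auth.state'   <<< "$qpairs"   # expect completed
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  # Repeat the handshake with the kernel initiator, then clean up.
  nvme connect -t tcp -a 10.0.0.2 -n $subnqn -i 1 -q $hostnqn --hostid cd6acfbe-4794-e311-a299-001e67a97b02 \
      -l 0 --dhchap-secret "$DHCHAP_KEY" --dhchap-ctrl-secret "$DHCHAP_CTRL_KEY"
  nvme disconnect -n $subnqn
  rpc_cmd nvmf_subsystem_remove_host $subnqn $hostnqn

The remaining iterations in this log repeat the same cycle for the other key indexes (key3 is added without a controller key) and, in the outer loop, for the ffdhe6144 and ffdhe8192 DH groups.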
00:21:06.515 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:06.515 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:06.515 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:06.777 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:21:06.777 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:06.777 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:06.777 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:06.777 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:06.777 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.777 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.777 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.777 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.777 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.777 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.777 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.777 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.346 00:21:07.346 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:07.346 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:07.346 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.914 18:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.914 18:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.914 18:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.914 18:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.914 18:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.914 18:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:07.914 { 00:21:07.914 "cntlid": 123, 00:21:07.914 "qid": 0, 00:21:07.914 "state": "enabled", 00:21:07.914 "thread": "nvmf_tgt_poll_group_000", 00:21:07.914 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:21:07.914 "listen_address": { 00:21:07.914 "trtype": "TCP", 00:21:07.914 "adrfam": "IPv4", 00:21:07.914 "traddr": "10.0.0.2", 00:21:07.914 "trsvcid": "4420" 00:21:07.914 }, 00:21:07.914 "peer_address": { 00:21:07.914 "trtype": "TCP", 00:21:07.914 "adrfam": "IPv4", 00:21:07.914 "traddr": "10.0.0.1", 00:21:07.914 "trsvcid": "49654" 00:21:07.914 }, 00:21:07.914 "auth": { 00:21:07.914 "state": "completed", 00:21:07.914 "digest": "sha512", 00:21:07.914 "dhgroup": "ffdhe4096" 00:21:07.914 } 00:21:07.914 } 00:21:07.914 ]' 00:21:07.914 18:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:07.914 18:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:07.914 18:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:07.914 18:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:07.915 18:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:07.915 18:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.915 18:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.915 18:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:08.851 18:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQ2NjdmNmNiNGM1ZmMzMmZkNjQwNDc5Njk0ZjdhYTJcdtvw: --dhchap-ctrl-secret DHHC-1:02:ZWRlMThkOGE1MmU3NDc0ODNhNGI0NzRhZjEyY2UwYWQ3M2MzMmI5Mjc5MDZiMGY1VEdBOQ==: 00:21:08.851 18:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:NTQ2NjdmNmNiNGM1ZmMzMmZkNjQwNDc5Njk0ZjdhYTJcdtvw: --dhchap-ctrl-secret DHHC-1:02:ZWRlMThkOGE1MmU3NDc0ODNhNGI0NzRhZjEyY2UwYWQ3M2MzMmI5Mjc5MDZiMGY1VEdBOQ==: 00:21:10.757 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.757 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.757 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:10.757 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.757 18:31:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.757 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.757 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:10.757 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:10.757 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:11.325 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:21:11.325 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:11.325 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:11.325 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:11.325 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:11.325 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.325 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.325 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.325 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.325 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.325 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.325 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.325 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.893 00:21:11.893 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:11.893 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.893 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:12.461 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.461 18:31:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:12.461 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.461 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.461 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.461 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:12.461 { 00:21:12.461 "cntlid": 125, 00:21:12.461 "qid": 0, 00:21:12.461 "state": "enabled", 00:21:12.461 "thread": "nvmf_tgt_poll_group_000", 00:21:12.461 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:21:12.461 "listen_address": { 00:21:12.461 "trtype": "TCP", 00:21:12.461 "adrfam": "IPv4", 00:21:12.461 "traddr": "10.0.0.2", 00:21:12.461 "trsvcid": "4420" 00:21:12.461 }, 00:21:12.461 "peer_address": { 00:21:12.461 "trtype": "TCP", 00:21:12.461 "adrfam": "IPv4", 00:21:12.461 "traddr": "10.0.0.1", 00:21:12.461 "trsvcid": "49684" 00:21:12.461 }, 00:21:12.461 "auth": { 00:21:12.461 "state": "completed", 00:21:12.461 "digest": "sha512", 00:21:12.461 "dhgroup": "ffdhe4096" 00:21:12.461 } 00:21:12.461 } 00:21:12.461 ]' 00:21:12.461 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:12.461 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:12.461 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:12.461 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:12.461 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:12.720 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:12.720 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:12.720 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.287 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWU2YjgxN2NhNDJkYjAxNTI5ZTk0ODBkOTAwNjRlMzFkMmI3OTA2MjM1MjBiMzM3qySyQg==: --dhchap-ctrl-secret DHHC-1:01:ZGY3N2UwNGM0ZTg0ZTljYjM1YmRmOTYwNjg0N2NjNTOjL2kW: 00:21:13.287 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YWU2YjgxN2NhNDJkYjAxNTI5ZTk0ODBkOTAwNjRlMzFkMmI3OTA2MjM1MjBiMzM3qySyQg==: --dhchap-ctrl-secret DHHC-1:01:ZGY3N2UwNGM0ZTg0ZTljYjM1YmRmOTYwNjg0N2NjNTOjL2kW: 00:21:15.190 18:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.190 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.190 18:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:15.190 18:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.190 18:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.190 18:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.190 18:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:15.190 18:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:15.190 18:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:15.521 18:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:21:15.521 18:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:15.521 18:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:15.521 18:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:15.521 18:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:15.521 18:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.521 18:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:21:15.521 18:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.521 18:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.521 18:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.521 18:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:15.521 18:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:15.521 18:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:16.093 00:21:16.093 18:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:16.093 18:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:16.093 18:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.661 18:31:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.661 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.661 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.661 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.661 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.661 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:16.661 { 00:21:16.661 "cntlid": 127, 00:21:16.661 "qid": 0, 00:21:16.661 "state": "enabled", 00:21:16.661 "thread": "nvmf_tgt_poll_group_000", 00:21:16.661 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:21:16.661 "listen_address": { 00:21:16.661 "trtype": "TCP", 00:21:16.661 "adrfam": "IPv4", 00:21:16.661 "traddr": "10.0.0.2", 00:21:16.661 "trsvcid": "4420" 00:21:16.661 }, 00:21:16.661 "peer_address": { 00:21:16.661 "trtype": "TCP", 00:21:16.661 "adrfam": "IPv4", 00:21:16.661 "traddr": "10.0.0.1", 00:21:16.661 "trsvcid": "49722" 00:21:16.661 }, 00:21:16.661 "auth": { 00:21:16.661 "state": "completed", 00:21:16.661 "digest": "sha512", 00:21:16.661 "dhgroup": "ffdhe4096" 00:21:16.661 } 00:21:16.661 } 00:21:16.661 ]' 00:21:16.661 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:16.661 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:16.661 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:16.920 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:16.920 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:16.920 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.920 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.920 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:17.179 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGYyNzI1ZGQzYjNkNWI4ZDc3MTUyOGUxY2JmYWI5ZjBjNWViNDE5NTIyYjE5NTk0Y2VlMjExM2ExMjVmZWQ1ZCxH6WQ=: 00:21:17.179 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:NGYyNzI1ZGQzYjNkNWI4ZDc3MTUyOGUxY2JmYWI5ZjBjNWViNDE5NTIyYjE5NTk0Y2VlMjExM2ExMjVmZWQ1ZCxH6WQ=: 00:21:19.111 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.111 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.111 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:19.111 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.111 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.111 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.111 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:19.111 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:19.111 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:19.111 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:20.050 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:21:20.050 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:20.050 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:20.050 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:20.050 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:20.050 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:20.050 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:20.050 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.050 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.050 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.050 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:20.050 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:20.050 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:20.988 00:21:20.988 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:20.989 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:20.989 
18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.559 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.559 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.559 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.559 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.559 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.559 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:21.559 { 00:21:21.559 "cntlid": 129, 00:21:21.559 "qid": 0, 00:21:21.559 "state": "enabled", 00:21:21.559 "thread": "nvmf_tgt_poll_group_000", 00:21:21.559 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:21:21.559 "listen_address": { 00:21:21.559 "trtype": "TCP", 00:21:21.559 "adrfam": "IPv4", 00:21:21.559 "traddr": "10.0.0.2", 00:21:21.559 "trsvcid": "4420" 00:21:21.559 }, 00:21:21.559 "peer_address": { 00:21:21.559 "trtype": "TCP", 00:21:21.559 "adrfam": "IPv4", 00:21:21.559 "traddr": "10.0.0.1", 00:21:21.559 "trsvcid": "48586" 00:21:21.559 }, 00:21:21.559 "auth": { 00:21:21.559 "state": "completed", 00:21:21.559 "digest": "sha512", 00:21:21.559 "dhgroup": "ffdhe6144" 00:21:21.559 } 00:21:21.559 } 00:21:21.559 ]' 00:21:21.559 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:21.559 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:21.559 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:21.559 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:21.559 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:21.559 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.559 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.559 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.126 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTZkOTdlMjcyNDQ2ZGRiYzdjZmQwZWFmOTNjNDEwODg2MTYzNzE5MTk0ODdlNDYxV+mirQ==: --dhchap-ctrl-secret DHHC-1:03:YjVjNWYxY2FiMmM2YTRhZjNkNzcyN2EyNjIxYmJlNDU3OTdhMjBhNGMzNWNmOTBmY2E5NmVmNjVhYmQ4NjRjNBNaOBo=: 00:21:22.126 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:OTZkOTdlMjcyNDQ2ZGRiYzdjZmQwZWFmOTNjNDEwODg2MTYzNzE5MTk0ODdlNDYxV+mirQ==: --dhchap-ctrl-secret 
DHHC-1:03:YjVjNWYxY2FiMmM2YTRhZjNkNzcyN2EyNjIxYmJlNDU3OTdhMjBhNGMzNWNmOTBmY2E5NmVmNjVhYmQ4NjRjNBNaOBo=: 00:21:24.036 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:24.036 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:24.036 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:24.036 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.036 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.036 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.036 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:24.036 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:24.036 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:24.036 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:21:24.036 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:24.036 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:24.036 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:24.036 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:24.036 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:24.036 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:24.036 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.036 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.036 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.036 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:24.036 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:24.037 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:24.976 00:21:24.976 18:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:24.976 18:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.976 18:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:25.542 18:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.542 18:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:25.542 18:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.542 18:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.542 18:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.542 18:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:25.542 { 00:21:25.542 "cntlid": 131, 00:21:25.542 "qid": 0, 00:21:25.542 "state": "enabled", 00:21:25.542 "thread": "nvmf_tgt_poll_group_000", 00:21:25.542 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:21:25.542 "listen_address": { 00:21:25.542 "trtype": "TCP", 00:21:25.542 "adrfam": "IPv4", 00:21:25.542 "traddr": "10.0.0.2", 00:21:25.542 "trsvcid": "4420" 00:21:25.542 }, 00:21:25.542 "peer_address": { 00:21:25.542 "trtype": "TCP", 00:21:25.542 "adrfam": "IPv4", 00:21:25.542 "traddr": "10.0.0.1", 00:21:25.542 "trsvcid": "48616" 00:21:25.542 }, 00:21:25.542 "auth": { 00:21:25.542 "state": "completed", 00:21:25.542 "digest": "sha512", 00:21:25.542 "dhgroup": "ffdhe6144" 00:21:25.542 } 00:21:25.542 } 00:21:25.542 ]' 00:21:25.542 18:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:25.542 18:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:25.542 18:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:25.542 18:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:25.542 18:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:25.542 18:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:25.542 18:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:25.542 18:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:26.109 18:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQ2NjdmNmNiNGM1ZmMzMmZkNjQwNDc5Njk0ZjdhYTJcdtvw: --dhchap-ctrl-secret DHHC-1:02:ZWRlMThkOGE1MmU3NDc0ODNhNGI0NzRhZjEyY2UwYWQ3M2MzMmI5Mjc5MDZiMGY1VEdBOQ==: 00:21:26.109 18:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:NTQ2NjdmNmNiNGM1ZmMzMmZkNjQwNDc5Njk0ZjdhYTJcdtvw: --dhchap-ctrl-secret DHHC-1:02:ZWRlMThkOGE1MmU3NDc0ODNhNGI0NzRhZjEyY2UwYWQ3M2MzMmI5Mjc5MDZiMGY1VEdBOQ==: 00:21:28.020 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.020 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.020 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:28.020 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.020 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.020 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.020 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:28.020 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:28.020 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:28.020 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:21:28.020 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:28.020 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:28.020 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:28.020 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:28.020 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.020 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.020 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.020 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.020 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.020 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.020 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.020 18:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:29.398 00:21:29.398 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:29.398 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:29.398 18:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.968 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.968 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.968 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.968 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.968 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.968 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:29.968 { 00:21:29.968 "cntlid": 133, 00:21:29.968 "qid": 0, 00:21:29.968 "state": "enabled", 00:21:29.968 "thread": "nvmf_tgt_poll_group_000", 00:21:29.968 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:21:29.968 "listen_address": { 00:21:29.968 "trtype": "TCP", 00:21:29.968 "adrfam": "IPv4", 00:21:29.968 "traddr": "10.0.0.2", 00:21:29.968 "trsvcid": "4420" 00:21:29.968 }, 00:21:29.968 "peer_address": { 00:21:29.968 "trtype": "TCP", 00:21:29.968 "adrfam": "IPv4", 00:21:29.968 "traddr": "10.0.0.1", 00:21:29.968 "trsvcid": "40256" 00:21:29.968 }, 00:21:29.968 "auth": { 00:21:29.968 "state": "completed", 00:21:29.968 "digest": "sha512", 00:21:29.968 "dhgroup": "ffdhe6144" 00:21:29.968 } 00:21:29.968 } 00:21:29.968 ]' 00:21:29.968 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:29.968 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:29.968 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:29.968 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:29.968 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:29.968 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.968 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.968 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:30.536 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWU2YjgxN2NhNDJkYjAxNTI5ZTk0ODBkOTAwNjRlMzFkMmI3OTA2MjM1MjBiMzM3qySyQg==: --dhchap-ctrl-secret 
DHHC-1:01:ZGY3N2UwNGM0ZTg0ZTljYjM1YmRmOTYwNjg0N2NjNTOjL2kW: 00:21:30.536 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YWU2YjgxN2NhNDJkYjAxNTI5ZTk0ODBkOTAwNjRlMzFkMmI3OTA2MjM1MjBiMzM3qySyQg==: --dhchap-ctrl-secret DHHC-1:01:ZGY3N2UwNGM0ZTg0ZTljYjM1YmRmOTYwNjg0N2NjNTOjL2kW: 00:21:32.444 18:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:32.444 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:32.444 18:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:32.444 18:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.444 18:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.444 18:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.444 18:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:32.444 18:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:32.444 18:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:32.703 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:21:32.703 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:32.703 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:32.703 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:32.703 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:32.703 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:32.703 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:21:32.703 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.703 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.703 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.703 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:32.703 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:21:32.703 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:33.641 00:21:33.641 18:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:33.641 18:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:33.641 18:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.901 18:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.901 18:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.901 18:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.901 18:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.160 18:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.160 18:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:34.160 { 00:21:34.160 "cntlid": 135, 00:21:34.160 "qid": 0, 00:21:34.160 "state": "enabled", 00:21:34.160 "thread": "nvmf_tgt_poll_group_000", 00:21:34.160 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:21:34.160 "listen_address": { 00:21:34.160 "trtype": "TCP", 00:21:34.160 "adrfam": "IPv4", 00:21:34.160 "traddr": "10.0.0.2", 00:21:34.160 "trsvcid": "4420" 00:21:34.160 }, 00:21:34.160 "peer_address": { 00:21:34.160 "trtype": "TCP", 00:21:34.160 "adrfam": "IPv4", 00:21:34.160 "traddr": "10.0.0.1", 00:21:34.160 "trsvcid": "40288" 00:21:34.160 }, 00:21:34.160 "auth": { 00:21:34.160 "state": "completed", 00:21:34.160 "digest": "sha512", 00:21:34.160 "dhgroup": "ffdhe6144" 00:21:34.160 } 00:21:34.160 } 00:21:34.160 ]' 00:21:34.160 18:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:34.160 18:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:34.160 18:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:34.160 18:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:34.160 18:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:34.160 18:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:34.160 18:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:34.160 18:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.096 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NGYyNzI1ZGQzYjNkNWI4ZDc3MTUyOGUxY2JmYWI5ZjBjNWViNDE5NTIyYjE5NTk0Y2VlMjExM2ExMjVmZWQ1ZCxH6WQ=: 00:21:35.096 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:NGYyNzI1ZGQzYjNkNWI4ZDc3MTUyOGUxY2JmYWI5ZjBjNWViNDE5NTIyYjE5NTk0Y2VlMjExM2ExMjVmZWQ1ZCxH6WQ=: 00:21:36.998 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:36.998 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:36.998 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:36.998 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.998 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.998 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.998 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:36.998 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:36.998 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:36.998 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:37.257 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:21:37.257 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:37.257 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:37.257 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:37.257 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:37.257 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:37.257 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:37.257 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.257 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.257 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.257 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:37.257 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:37.257 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:39.172 00:21:39.172 18:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:39.172 18:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:39.172 18:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:39.739 18:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.739 18:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:39.739 18:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.739 18:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.739 18:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.739 18:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:39.739 { 00:21:39.739 "cntlid": 137, 00:21:39.739 "qid": 0, 00:21:39.739 "state": "enabled", 00:21:39.739 "thread": "nvmf_tgt_poll_group_000", 00:21:39.739 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:21:39.739 "listen_address": { 00:21:39.739 "trtype": "TCP", 00:21:39.739 "adrfam": "IPv4", 00:21:39.739 "traddr": "10.0.0.2", 00:21:39.739 "trsvcid": "4420" 00:21:39.739 }, 00:21:39.739 "peer_address": { 00:21:39.739 "trtype": "TCP", 00:21:39.739 "adrfam": "IPv4", 00:21:39.739 "traddr": "10.0.0.1", 00:21:39.739 "trsvcid": "60462" 00:21:39.739 }, 00:21:39.739 "auth": { 00:21:39.739 "state": "completed", 00:21:39.739 "digest": "sha512", 00:21:39.739 "dhgroup": "ffdhe8192" 00:21:39.739 } 00:21:39.739 } 00:21:39.739 ]' 00:21:39.739 18:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:39.739 18:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:39.739 18:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:39.739 18:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:39.739 18:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:39.739 18:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:39.739 18:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:39.739 18:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:40.306 18:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTZkOTdlMjcyNDQ2ZGRiYzdjZmQwZWFmOTNjNDEwODg2MTYzNzE5MTk0ODdlNDYxV+mirQ==: --dhchap-ctrl-secret DHHC-1:03:YjVjNWYxY2FiMmM2YTRhZjNkNzcyN2EyNjIxYmJlNDU3OTdhMjBhNGMzNWNmOTBmY2E5NmVmNjVhYmQ4NjRjNBNaOBo=: 00:21:40.306 18:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:OTZkOTdlMjcyNDQ2ZGRiYzdjZmQwZWFmOTNjNDEwODg2MTYzNzE5MTk0ODdlNDYxV+mirQ==: --dhchap-ctrl-secret DHHC-1:03:YjVjNWYxY2FiMmM2YTRhZjNkNzcyN2EyNjIxYmJlNDU3OTdhMjBhNGMzNWNmOTBmY2E5NmVmNjVhYmQ4NjRjNBNaOBo=: 00:21:41.687 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:41.687 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:41.687 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:41.687 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.687 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.687 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.687 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:41.687 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:41.687 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:42.254 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:21:42.254 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:42.254 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:42.254 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:42.254 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:42.254 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:42.254 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:42.254 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.254 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.254 18:32:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.254 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:42.254 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:42.254 18:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.634 00:21:43.634 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:43.634 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:43.634 18:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.893 18:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.893 18:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:43.893 18:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.893 18:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.893 18:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.893 18:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:43.893 { 00:21:43.893 "cntlid": 139, 00:21:43.893 "qid": 0, 00:21:43.893 "state": "enabled", 00:21:43.893 "thread": "nvmf_tgt_poll_group_000", 00:21:43.893 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:21:43.893 "listen_address": { 00:21:43.893 "trtype": "TCP", 00:21:43.893 "adrfam": "IPv4", 00:21:43.893 "traddr": "10.0.0.2", 00:21:43.893 "trsvcid": "4420" 00:21:43.893 }, 00:21:43.893 "peer_address": { 00:21:43.893 "trtype": "TCP", 00:21:43.893 "adrfam": "IPv4", 00:21:43.893 "traddr": "10.0.0.1", 00:21:43.893 "trsvcid": "60492" 00:21:43.893 }, 00:21:43.893 "auth": { 00:21:43.893 "state": "completed", 00:21:43.893 "digest": "sha512", 00:21:43.893 "dhgroup": "ffdhe8192" 00:21:43.893 } 00:21:43.893 } 00:21:43.893 ]' 00:21:43.893 18:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:43.893 18:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:43.893 18:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:43.893 18:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:43.893 18:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:44.154 18:32:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:44.154 18:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:44.154 18:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:44.722 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQ2NjdmNmNiNGM1ZmMzMmZkNjQwNDc5Njk0ZjdhYTJcdtvw: --dhchap-ctrl-secret DHHC-1:02:ZWRlMThkOGE1MmU3NDc0ODNhNGI0NzRhZjEyY2UwYWQ3M2MzMmI5Mjc5MDZiMGY1VEdBOQ==: 00:21:44.722 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:01:NTQ2NjdmNmNiNGM1ZmMzMmZkNjQwNDc5Njk0ZjdhYTJcdtvw: --dhchap-ctrl-secret DHHC-1:02:ZWRlMThkOGE1MmU3NDc0ODNhNGI0NzRhZjEyY2UwYWQ3M2MzMmI5Mjc5MDZiMGY1VEdBOQ==: 00:21:46.626 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:46.626 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:46.626 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:46.626 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.626 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.626 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.626 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:46.626 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:46.626 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:47.565 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:21:47.565 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:47.565 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:47.565 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:47.565 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:47.565 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:47.565 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:47.565 18:32:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.565 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.565 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.565 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:47.565 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:47.565 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:48.943 00:21:48.943 18:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:48.943 18:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:48.943 18:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:49.201 18:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.201 18:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:49.201 18:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.201 18:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.201 18:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.201 18:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:49.201 { 00:21:49.201 "cntlid": 141, 00:21:49.201 "qid": 0, 00:21:49.201 "state": "enabled", 00:21:49.201 "thread": "nvmf_tgt_poll_group_000", 00:21:49.201 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:21:49.201 "listen_address": { 00:21:49.201 "trtype": "TCP", 00:21:49.201 "adrfam": "IPv4", 00:21:49.201 "traddr": "10.0.0.2", 00:21:49.201 "trsvcid": "4420" 00:21:49.201 }, 00:21:49.201 "peer_address": { 00:21:49.201 "trtype": "TCP", 00:21:49.201 "adrfam": "IPv4", 00:21:49.201 "traddr": "10.0.0.1", 00:21:49.201 "trsvcid": "39770" 00:21:49.201 }, 00:21:49.201 "auth": { 00:21:49.201 "state": "completed", 00:21:49.201 "digest": "sha512", 00:21:49.201 "dhgroup": "ffdhe8192" 00:21:49.201 } 00:21:49.201 } 00:21:49.201 ]' 00:21:49.201 18:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:49.460 18:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:49.460 18:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:49.460 18:32:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:49.460 18:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:49.460 18:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:49.460 18:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:49.460 18:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:49.718 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWU2YjgxN2NhNDJkYjAxNTI5ZTk0ODBkOTAwNjRlMzFkMmI3OTA2MjM1MjBiMzM3qySyQg==: --dhchap-ctrl-secret DHHC-1:01:ZGY3N2UwNGM0ZTg0ZTljYjM1YmRmOTYwNjg0N2NjNTOjL2kW: 00:21:49.718 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YWU2YjgxN2NhNDJkYjAxNTI5ZTk0ODBkOTAwNjRlMzFkMmI3OTA2MjM1MjBiMzM3qySyQg==: --dhchap-ctrl-secret DHHC-1:01:ZGY3N2UwNGM0ZTg0ZTljYjM1YmRmOTYwNjg0N2NjNTOjL2kW: 00:21:52.253 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:52.253 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:52.253 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:52.253 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.253 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.253 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.253 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:52.253 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:52.253 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:52.253 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:21:52.253 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:52.253 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:52.253 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:52.253 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:52.253 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:52.253 18:32:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:21:52.253 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.253 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.253 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.253 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:52.253 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:52.253 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:54.156 00:21:54.156 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:54.156 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:54.157 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:54.415 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.415 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:54.416 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.416 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.416 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.416 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:54.416 { 00:21:54.416 "cntlid": 143, 00:21:54.416 "qid": 0, 00:21:54.416 "state": "enabled", 00:21:54.416 "thread": "nvmf_tgt_poll_group_000", 00:21:54.416 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:21:54.416 "listen_address": { 00:21:54.416 "trtype": "TCP", 00:21:54.416 "adrfam": "IPv4", 00:21:54.416 "traddr": "10.0.0.2", 00:21:54.416 "trsvcid": "4420" 00:21:54.416 }, 00:21:54.416 "peer_address": { 00:21:54.416 "trtype": "TCP", 00:21:54.416 "adrfam": "IPv4", 00:21:54.416 "traddr": "10.0.0.1", 00:21:54.416 "trsvcid": "39794" 00:21:54.416 }, 00:21:54.416 "auth": { 00:21:54.416 "state": "completed", 00:21:54.416 "digest": "sha512", 00:21:54.416 "dhgroup": "ffdhe8192" 00:21:54.416 } 00:21:54.416 } 00:21:54.416 ]' 00:21:54.416 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:54.416 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:54.416 
18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:54.416 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:54.416 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:54.416 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:54.416 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:54.416 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:54.983 18:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGYyNzI1ZGQzYjNkNWI4ZDc3MTUyOGUxY2JmYWI5ZjBjNWViNDE5NTIyYjE5NTk0Y2VlMjExM2ExMjVmZWQ1ZCxH6WQ=: 00:21:54.983 18:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:NGYyNzI1ZGQzYjNkNWI4ZDc3MTUyOGUxY2JmYWI5ZjBjNWViNDE5NTIyYjE5NTk0Y2VlMjExM2ExMjVmZWQ1ZCxH6WQ=: 00:21:56.886 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:56.886 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:56.886 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:56.886 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.887 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.146 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.146 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:57.146 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:21:57.146 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:57.146 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:57.146 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:57.146 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:57.714 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:21:57.714 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:57.714 18:32:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:57.714 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:57.714 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:57.714 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:57.714 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:57.714 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.714 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.714 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.714 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:57.714 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:57.714 18:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:59.616 00:21:59.616 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:59.616 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:59.616 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:00.194 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.194 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:00.194 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.194 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.194 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.194 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:00.194 { 00:22:00.194 "cntlid": 145, 00:22:00.194 "qid": 0, 00:22:00.194 "state": "enabled", 00:22:00.194 "thread": "nvmf_tgt_poll_group_000", 00:22:00.194 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:00.194 "listen_address": { 00:22:00.194 "trtype": "TCP", 00:22:00.194 "adrfam": "IPv4", 00:22:00.194 "traddr": "10.0.0.2", 00:22:00.194 "trsvcid": "4420" 00:22:00.194 }, 00:22:00.194 "peer_address": { 00:22:00.194 
"trtype": "TCP", 00:22:00.194 "adrfam": "IPv4", 00:22:00.194 "traddr": "10.0.0.1", 00:22:00.194 "trsvcid": "40394" 00:22:00.194 }, 00:22:00.194 "auth": { 00:22:00.194 "state": "completed", 00:22:00.194 "digest": "sha512", 00:22:00.194 "dhgroup": "ffdhe8192" 00:22:00.194 } 00:22:00.194 } 00:22:00.194 ]' 00:22:00.194 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:00.194 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:00.194 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:00.194 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:00.194 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:00.194 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:00.194 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:00.194 18:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:00.765 18:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTZkOTdlMjcyNDQ2ZGRiYzdjZmQwZWFmOTNjNDEwODg2MTYzNzE5MTk0ODdlNDYxV+mirQ==: --dhchap-ctrl-secret DHHC-1:03:YjVjNWYxY2FiMmM2YTRhZjNkNzcyN2EyNjIxYmJlNDU3OTdhMjBhNGMzNWNmOTBmY2E5NmVmNjVhYmQ4NjRjNBNaOBo=: 00:22:00.765 18:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:00:OTZkOTdlMjcyNDQ2ZGRiYzdjZmQwZWFmOTNjNDEwODg2MTYzNzE5MTk0ODdlNDYxV+mirQ==: --dhchap-ctrl-secret DHHC-1:03:YjVjNWYxY2FiMmM2YTRhZjNkNzcyN2EyNjIxYmJlNDU3OTdhMjBhNGMzNWNmOTBmY2E5NmVmNjVhYmQ4NjRjNBNaOBo=: 00:22:02.674 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:02.674 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:02.674 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:02.674 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.674 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.674 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.674 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 00:22:02.674 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.674 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.674 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.674 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:22:02.674 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:02.674 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:22:02.674 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:02.674 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:02.674 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:02.674 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:02.674 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:22:02.674 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:02.674 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:04.055 request: 00:22:04.055 { 00:22:04.055 "name": "nvme0", 00:22:04.055 "trtype": "tcp", 00:22:04.055 "traddr": "10.0.0.2", 00:22:04.055 "adrfam": "ipv4", 00:22:04.055 "trsvcid": "4420", 00:22:04.055 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:04.055 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:04.055 "prchk_reftag": false, 00:22:04.055 "prchk_guard": false, 00:22:04.055 "hdgst": false, 00:22:04.055 "ddgst": false, 00:22:04.055 "dhchap_key": "key2", 00:22:04.055 "allow_unrecognized_csi": false, 00:22:04.055 "method": "bdev_nvme_attach_controller", 00:22:04.055 "req_id": 1 00:22:04.055 } 00:22:04.055 Got JSON-RPC error response 00:22:04.055 response: 00:22:04.055 { 00:22:04.055 "code": -5, 00:22:04.055 "message": "Input/output error" 00:22:04.055 } 00:22:04.055 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:04.055 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:04.055 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:04.055 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:04.055 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:04.055 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.055 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.055 18:32:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.055 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:04.055 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.055 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.055 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.055 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:04.055 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:04.055 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:04.055 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:04.055 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:04.055 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:04.055 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:04.055 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:04.055 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:04.055 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:05.544 request: 00:22:05.544 { 00:22:05.544 "name": "nvme0", 00:22:05.544 "trtype": "tcp", 00:22:05.544 "traddr": "10.0.0.2", 00:22:05.544 "adrfam": "ipv4", 00:22:05.544 "trsvcid": "4420", 00:22:05.544 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:05.544 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:05.544 "prchk_reftag": false, 00:22:05.544 "prchk_guard": false, 00:22:05.544 "hdgst": false, 00:22:05.544 "ddgst": false, 00:22:05.544 "dhchap_key": "key1", 00:22:05.544 "dhchap_ctrlr_key": "ckey2", 00:22:05.544 "allow_unrecognized_csi": false, 00:22:05.544 "method": "bdev_nvme_attach_controller", 00:22:05.544 "req_id": 1 00:22:05.544 } 00:22:05.544 Got JSON-RPC error response 00:22:05.544 response: 00:22:05.544 { 00:22:05.544 "code": -5, 00:22:05.544 "message": "Input/output error" 00:22:05.544 } 00:22:05.544 18:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:05.544 18:32:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:05.544 18:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:05.544 18:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:05.544 18:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:05.544 18:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.544 18:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.544 18:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.544 18:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 00:22:05.544 18:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.544 18:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.544 18:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.544 18:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.544 18:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:05.544 18:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.544 18:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:05.544 18:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:05.544 18:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:05.544 18:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:05.544 18:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.544 18:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.544 18:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:06.973 request: 00:22:06.973 { 00:22:06.973 "name": "nvme0", 00:22:06.973 "trtype": "tcp", 00:22:06.973 "traddr": "10.0.0.2", 00:22:06.973 "adrfam": "ipv4", 00:22:06.973 "trsvcid": "4420", 00:22:06.973 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:06.973 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:06.973 "prchk_reftag": false, 00:22:06.973 "prchk_guard": false, 00:22:06.973 "hdgst": false, 00:22:06.973 "ddgst": false, 00:22:06.973 "dhchap_key": "key1", 00:22:06.973 "dhchap_ctrlr_key": "ckey1", 00:22:06.973 "allow_unrecognized_csi": false, 00:22:06.973 "method": "bdev_nvme_attach_controller", 00:22:06.973 "req_id": 1 00:22:06.973 } 00:22:06.973 Got JSON-RPC error response 00:22:06.973 response: 00:22:06.973 { 00:22:06.973 "code": -5, 00:22:06.973 "message": "Input/output error" 00:22:06.973 } 00:22:06.973 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:06.973 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:06.973 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:06.973 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:06.973 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:06.973 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.973 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.973 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.974 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 1186094 00:22:06.974 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1186094 ']' 00:22:06.974 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1186094 00:22:06.974 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:22:06.974 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:06.974 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1186094 00:22:06.974 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:06.974 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:06.974 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1186094' 00:22:06.974 killing process with pid 1186094 00:22:06.974 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1186094 00:22:06.974 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1186094 00:22:07.232 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:07.232 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:07.232 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:07.232 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:07.232 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=1224613 00:22:07.232 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:07.232 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 1224613 00:22:07.232 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1224613 ']' 00:22:07.232 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:07.232 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:07.232 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:07.232 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:07.232 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.137 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:09.137 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:22:09.137 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:09.137 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:09.137 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.137 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:09.137 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:09.137 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 1224613 00:22:09.137 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1224613 ']' 00:22:09.137 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:09.137 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:09.137 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:09.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
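For reference, the stretch of trace above repeats one connect_authenticate round per DH-HMAC-CHAP key before the target is restarted. The lines below are a condensed sketch reconstructed from the commands visible in the trace, not additional captured output: rpc_cmd, hostrpc and the key names come from the test scripts, and $secret/$ctrl_secret stand in for the literal DHHC-1:... strings printed in the log.

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostrpc() { "$rpc_py" -s /var/tmp/host.sock "$@"; }   # host-side RPCs, as in target/auth.sh@31
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
hostid=cd6acfbe-4794-e311-a299-001e67a97b02

# 1. restrict the host to the digest/dhgroup under test
hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

# 2. allow the host on the target with a key (and, when present, a controller key)
rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0

# 3. attach from the host; a successful attach means the DH-HMAC-CHAP handshake completed
hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

# 4. check what the target negotiated on the qpair
rpc_cmd nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth'
#    -> state "completed", digest "sha512", dhgroup "ffdhe8192"

# 5. tear down, then repeat the handshake with the kernel initiator, which takes
#    the literal DHHC-1 secrets rather than key names
hostrpc bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" -l 0 \
    --dhchap-secret "$secret" --dhchap-ctrl-secret "$ctrl_secret"
nvme disconnect -n "$subnqn"
rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The deliberately failing attaches in the trace (key2 while only key1 is registered on the target, or a wrong controller key) follow the same attach command; those calls return JSON-RPC error -5, "Input/output error", which the NOT wrapper treats as the expected outcome.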
00:22:09.137 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:09.137 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.137 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:09.137 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:22:09.137 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:22:09.137 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.137 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.395 null0 00:22:09.395 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.395 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:09.395 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.nln 00:22:09.395 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.395 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.395 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.395 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.0i2 ]] 00:22:09.395 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.0i2 00:22:09.395 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.395 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.395 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.395 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:09.395 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.dhl 00:22:09.395 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.395 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.395 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.395 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.v2g ]] 00:22:09.395 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.v2g 00:22:09.395 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.395 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.395 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.395 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:09.395 18:32:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.7n0 00:22:09.395 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.395 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.395 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.395 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.NQ9 ]] 00:22:09.395 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.NQ9 00:22:09.395 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.395 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.395 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.395 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:09.395 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.lW9 00:22:09.395 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.395 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.395 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.395 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:22:09.395 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:22:09.395 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:09.395 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:09.395 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:09.395 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:09.395 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:09.395 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:22:09.395 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.395 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.395 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.395 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:09.395 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
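Condensed from the keyring_file_add_key calls just above (file names copied verbatim from the trace; ckey3 is empty in this run, so key3 gets no controller key): after the first nvmf_tgt (pid 1186094) is killed, the second instance (pid 1224613) is started with --wait-for-rpc -L nvmf_auth and the DH-HMAC-CHAP material is re-registered through the keyring before the key3 round begins.

rpc_cmd keyring_file_add_key key0  /tmp/spdk.key-null.nln
rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.0i2
rpc_cmd keyring_file_add_key key1  /tmp/spdk.key-sha256.dhl
rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.v2g
rpc_cmd keyring_file_add_key key2  /tmp/spdk.key-sha384.7n0
rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.NQ9
rpc_cmd keyring_file_add_key key3  /tmp/spdk.key-sha512.lW9

# key3 then goes through the same sha512/ffdhe8192 round, target side first,
# without a --dhchap-ctrlr-key:
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3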
00:22:09.395 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:11.931 nvme0n1 00:22:11.931 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:11.931 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:11.931 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:12.498 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.498 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:12.498 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.498 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.498 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.498 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:12.498 { 00:22:12.498 "cntlid": 1, 00:22:12.498 "qid": 0, 00:22:12.498 "state": "enabled", 00:22:12.498 "thread": "nvmf_tgt_poll_group_000", 00:22:12.498 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:12.498 "listen_address": { 00:22:12.498 "trtype": "TCP", 00:22:12.498 "adrfam": "IPv4", 00:22:12.498 "traddr": "10.0.0.2", 00:22:12.498 "trsvcid": "4420" 00:22:12.498 }, 00:22:12.498 "peer_address": { 00:22:12.498 "trtype": "TCP", 00:22:12.498 "adrfam": "IPv4", 00:22:12.498 "traddr": "10.0.0.1", 00:22:12.498 "trsvcid": "33596" 00:22:12.498 }, 00:22:12.498 "auth": { 00:22:12.498 "state": "completed", 00:22:12.498 "digest": "sha512", 00:22:12.498 "dhgroup": "ffdhe8192" 00:22:12.498 } 00:22:12.498 } 00:22:12.498 ]' 00:22:12.498 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:12.757 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:12.757 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:12.757 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:12.757 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:12.757 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:12.757 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:12.757 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:13.324 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NGYyNzI1ZGQzYjNkNWI4ZDc3MTUyOGUxY2JmYWI5ZjBjNWViNDE5NTIyYjE5NTk0Y2VlMjExM2ExMjVmZWQ1ZCxH6WQ=: 00:22:13.324 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:03:NGYyNzI1ZGQzYjNkNWI4ZDc3MTUyOGUxY2JmYWI5ZjBjNWViNDE5NTIyYjE5NTk0Y2VlMjExM2ExMjVmZWQ1ZCxH6WQ=: 00:22:15.231 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:15.231 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:15.231 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:15.231 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.231 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.231 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.231 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:22:15.231 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.231 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.231 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.231 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:15.231 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:15.800 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:15.800 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:15.800 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:15.800 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:15.800 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:15.800 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:15.800 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:15.800 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:15.800 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:15.800 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:16.059 request: 00:22:16.059 { 00:22:16.059 "name": "nvme0", 00:22:16.059 "trtype": "tcp", 00:22:16.059 "traddr": "10.0.0.2", 00:22:16.059 "adrfam": "ipv4", 00:22:16.059 "trsvcid": "4420", 00:22:16.059 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:16.059 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:16.059 "prchk_reftag": false, 00:22:16.059 "prchk_guard": false, 00:22:16.059 "hdgst": false, 00:22:16.059 "ddgst": false, 00:22:16.060 "dhchap_key": "key3", 00:22:16.060 "allow_unrecognized_csi": false, 00:22:16.060 "method": "bdev_nvme_attach_controller", 00:22:16.060 "req_id": 1 00:22:16.060 } 00:22:16.060 Got JSON-RPC error response 00:22:16.060 response: 00:22:16.060 { 00:22:16.060 "code": -5, 00:22:16.060 "message": "Input/output error" 00:22:16.060 } 00:22:16.060 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:16.060 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:16.060 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:16.060 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:16.060 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:22:16.060 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:22:16.060 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:16.060 18:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:16.629 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:16.629 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:16.629 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:16.629 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:16.629 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:16.629 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:16.629 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:16.629 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:16.629 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:16.629 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:17.569 request: 00:22:17.569 { 00:22:17.569 "name": "nvme0", 00:22:17.569 "trtype": "tcp", 00:22:17.569 "traddr": "10.0.0.2", 00:22:17.569 "adrfam": "ipv4", 00:22:17.569 "trsvcid": "4420", 00:22:17.569 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:17.569 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:17.569 "prchk_reftag": false, 00:22:17.569 "prchk_guard": false, 00:22:17.569 "hdgst": false, 00:22:17.569 "ddgst": false, 00:22:17.569 "dhchap_key": "key3", 00:22:17.569 "allow_unrecognized_csi": false, 00:22:17.569 "method": "bdev_nvme_attach_controller", 00:22:17.569 "req_id": 1 00:22:17.569 } 00:22:17.569 Got JSON-RPC error response 00:22:17.569 response: 00:22:17.569 { 00:22:17.569 "code": -5, 00:22:17.569 "message": "Input/output error" 00:22:17.569 } 00:22:17.569 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:17.569 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:17.569 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:17.569 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:17.569 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:17.569 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:22:17.569 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:17.569 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:17.569 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:17.569 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:18.139 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:18.139 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.139 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.139 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.139 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:18.139 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.139 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.139 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.139 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:18.139 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:18.139 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:18.139 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:18.139 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:18.139 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:18.139 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:18.139 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:18.139 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:18.139 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:19.079 request: 00:22:19.079 { 00:22:19.079 "name": "nvme0", 00:22:19.079 "trtype": "tcp", 00:22:19.079 "traddr": "10.0.0.2", 00:22:19.079 "adrfam": "ipv4", 00:22:19.079 "trsvcid": "4420", 00:22:19.079 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:19.079 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:19.079 "prchk_reftag": false, 00:22:19.079 "prchk_guard": false, 00:22:19.079 "hdgst": false, 00:22:19.079 "ddgst": false, 00:22:19.079 "dhchap_key": "key0", 00:22:19.079 "dhchap_ctrlr_key": "key1", 00:22:19.079 "allow_unrecognized_csi": false, 00:22:19.079 "method": "bdev_nvme_attach_controller", 00:22:19.079 "req_id": 1 00:22:19.079 } 00:22:19.079 Got JSON-RPC error response 00:22:19.079 response: 00:22:19.079 { 00:22:19.079 "code": -5, 00:22:19.079 "message": "Input/output error" 00:22:19.079 } 00:22:19.079 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:19.079 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:19.079 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:19.079 18:32:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:19.079 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:22:19.079 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:19.079 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:19.648 nvme0n1 00:22:19.648 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:22:19.648 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:22:19.648 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:20.218 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.218 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:20.218 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:20.478 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 00:22:20.478 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.478 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.478 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.478 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:20.478 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:20.478 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:23.772 nvme0n1 00:22:23.772 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:22:23.772 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:22:23.772 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:23.772 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.772 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:23.772 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.772 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.772 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.772 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:22:23.772 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:22:23.772 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:24.340 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.340 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:YWU2YjgxN2NhNDJkYjAxNTI5ZTk0ODBkOTAwNjRlMzFkMmI3OTA2MjM1MjBiMzM3qySyQg==: --dhchap-ctrl-secret DHHC-1:03:NGYyNzI1ZGQzYjNkNWI4ZDc3MTUyOGUxY2JmYWI5ZjBjNWViNDE5NTIyYjE5NTk0Y2VlMjExM2ExMjVmZWQ1ZCxH6WQ=: 00:22:24.340 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 -l 0 --dhchap-secret DHHC-1:02:YWU2YjgxN2NhNDJkYjAxNTI5ZTk0ODBkOTAwNjRlMzFkMmI3OTA2MjM1MjBiMzM3qySyQg==: --dhchap-ctrl-secret DHHC-1:03:NGYyNzI1ZGQzYjNkNWI4ZDc3MTUyOGUxY2JmYWI5ZjBjNWViNDE5NTIyYjE5NTk0Y2VlMjExM2ExMjVmZWQ1ZCxH6WQ=: 00:22:25.716 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:22:25.716 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:22:25.716 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:22:25.716 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:22:25.716 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:22:25.716 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:22:25.716 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:22:25.716 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:25.716 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:25.975 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:22:25.975 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:25.975 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:22:25.975 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:25.975 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:25.975 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:25.975 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:25.975 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:25.975 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:25.975 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:27.354 request: 00:22:27.354 { 00:22:27.354 "name": "nvme0", 00:22:27.354 "trtype": "tcp", 00:22:27.354 "traddr": "10.0.0.2", 00:22:27.354 "adrfam": "ipv4", 00:22:27.354 "trsvcid": "4420", 00:22:27.354 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:27.354 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:22:27.354 "prchk_reftag": false, 00:22:27.354 "prchk_guard": false, 00:22:27.354 "hdgst": false, 00:22:27.354 "ddgst": false, 00:22:27.354 "dhchap_key": "key1", 00:22:27.354 "allow_unrecognized_csi": false, 00:22:27.354 "method": "bdev_nvme_attach_controller", 00:22:27.354 "req_id": 1 00:22:27.354 } 00:22:27.354 Got JSON-RPC error response 00:22:27.354 response: 00:22:27.354 { 00:22:27.354 "code": -5, 00:22:27.354 "message": "Input/output error" 00:22:27.354 } 00:22:27.354 18:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:27.354 18:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:27.354 18:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:27.354 18:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:27.354 18:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:27.354 18:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:27.354 18:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:29.890 nvme0n1 00:22:29.890 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:22:29.890 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:22:29.890 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:30.149 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.149 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:30.149 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:30.409 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:30.409 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.409 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.409 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.409 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:22:30.409 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:30.409 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:30.977 nvme0n1 00:22:30.977 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:22:30.977 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:30.977 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:22:31.546 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:31.546 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:31.546 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:31.806 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:31.806 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.806 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.806 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.806 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NTQ2NjdmNmNiNGM1ZmMzMmZkNjQwNDc5Njk0ZjdhYTJcdtvw: '' 2s 00:22:31.806 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:31.806 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:31.806 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NTQ2NjdmNmNiNGM1ZmMzMmZkNjQwNDc5Njk0ZjdhYTJcdtvw: 00:22:31.806 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:22:31.806 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:31.806 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:31.806 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NTQ2NjdmNmNiNGM1ZmMzMmZkNjQwNDc5Njk0ZjdhYTJcdtvw: ]] 00:22:31.806 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NTQ2NjdmNmNiNGM1ZmMzMmZkNjQwNDc5Njk0ZjdhYTJcdtvw: 00:22:31.806 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:22:31.806 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:31.806 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:33.711 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:22:33.711 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:22:33.711 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:22:33.711 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:22:33.711 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:22:33.711 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:22:33.711 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:22:33.711 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key key2 00:22:33.711 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.711 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.711 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.711 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:YWU2YjgxN2NhNDJkYjAxNTI5ZTk0ODBkOTAwNjRlMzFkMmI3OTA2MjM1MjBiMzM3qySyQg==: 2s 00:22:33.711 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:33.711 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:33.711 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:22:33.711 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:YWU2YjgxN2NhNDJkYjAxNTI5ZTk0ODBkOTAwNjRlMzFkMmI3OTA2MjM1MjBiMzM3qySyQg==: 00:22:33.711 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:33.711 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:33.711 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:22:33.711 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:YWU2YjgxN2NhNDJkYjAxNTI5ZTk0ODBkOTAwNjRlMzFkMmI3OTA2MjM1MjBiMzM3qySyQg==: ]] 00:22:33.711 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:YWU2YjgxN2NhNDJkYjAxNTI5ZTk0ODBkOTAwNjRlMzFkMmI3OTA2MjM1MjBiMzM3qySyQg==: 00:22:33.711 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:33.711 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:36.248 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:22:36.248 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:22:36.248 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:22:36.248 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:22:36.248 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:22:36.248 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:22:36.248 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:22:36.248 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:36.248 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:36.248 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:36.248 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.248 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.248 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.248 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:36.248 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:36.248 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:38.186 nvme0n1 00:22:38.186 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:38.186 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.186 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.186 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.186 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:38.186 18:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:39.563 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:22:39.563 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:22:39.563 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:39.822 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:39.822 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:39.822 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.822 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.822 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.822 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:22:39.822 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:22:40.391 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:22:40.391 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:40.391 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@258 -- # jq -r '.[].name' 00:22:40.651 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.651 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:40.651 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.651 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.651 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.651 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:40.651 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:40.651 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:40.651 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:22:40.651 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:40.651 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:22:40.651 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:40.651 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:40.651 18:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:42.555 request: 00:22:42.555 { 00:22:42.555 "name": "nvme0", 00:22:42.555 "dhchap_key": "key1", 00:22:42.555 "dhchap_ctrlr_key": "key3", 00:22:42.555 "method": "bdev_nvme_set_keys", 00:22:42.555 "req_id": 1 00:22:42.555 } 00:22:42.555 Got JSON-RPC error response 00:22:42.555 response: 00:22:42.555 { 00:22:42.555 "code": -13, 00:22:42.555 "message": "Permission denied" 00:22:42.555 } 00:22:42.555 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:42.555 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:42.555 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:42.555 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:42.555 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:42.555 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:42.555 18:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:43.124 18:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:22:43.124 18:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:22:44.063 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:44.063 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:44.063 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:44.632 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:22:44.632 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:44.632 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.632 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.632 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.632 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:44.632 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:44.632 18:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:47.923 nvme0n1 00:22:47.923 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:47.923 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.923 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.923 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.923 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:47.923 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:47.923 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:47.923 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 
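This part of the run exercises live re-keying: nvmf_subsystem_set_keys rotates which DH-HMAC-CHAP keys the target will accept for the host, bdev_nvme_set_keys re-authenticates the already attached SPDK host controller in place, and a deliberately mismatched pair is expected to be refused with JSON-RPC error -13 ("Permission denied"); the jq-length polling afterwards tells whether the controller survived or was dropped. A condensed sketch under the same path assumptions as above:

  # target: rotate the accepted key pair for this host
  scripts/rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
      --dhchap-key key2 --dhchap-ctrlr-key key3

  # host: re-authenticate the live controller with the matching pair
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
      --dhchap-key key2 --dhchap-ctrlr-key key3

  # a pair the target was not given is rejected with -13 "Permission denied"
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
      --dhchap-key key1 --dhchap-ctrlr-key key3 || echo "rejected, as the test expects"

  # count the remaining host controllers; 0 means the controller was torn down
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq length

The kernel-host variant of the same rotation appears earlier in the run, where nvme_set_keys echoes the new DHHC-1 secrets toward the controller's sysfs node under /sys/devices/virtual/nvme-fabrics/ctl/nvme0 (the exact attribute names are not visible in the wrapped trace) and the namespace is then re-checked with lsblk.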
00:22:47.923 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:47.923 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:22:47.923 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:47.923 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:47.923 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:49.302 request: 00:22:49.302 { 00:22:49.302 "name": "nvme0", 00:22:49.302 "dhchap_key": "key2", 00:22:49.302 "dhchap_ctrlr_key": "key0", 00:22:49.302 "method": "bdev_nvme_set_keys", 00:22:49.302 "req_id": 1 00:22:49.302 } 00:22:49.302 Got JSON-RPC error response 00:22:49.302 response: 00:22:49.302 { 00:22:49.302 "code": -13, 00:22:49.302 "message": "Permission denied" 00:22:49.302 } 00:22:49.302 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:49.302 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:49.302 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:49.302 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:49.302 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:49.302 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:49.302 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:49.869 18:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:22:49.869 18:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:22:50.804 18:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:50.804 18:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:50.804 18:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:51.062 18:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:22:51.062 18:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:22:51.062 18:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:22:51.062 18:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1186231 00:22:51.062 18:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1186231 ']' 00:22:51.062 18:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1186231 00:22:51.062 18:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:22:51.062 
18:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:51.062 18:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1186231 00:22:51.062 18:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:51.062 18:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:51.062 18:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1186231' 00:22:51.062 killing process with pid 1186231 00:22:51.062 18:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1186231 00:22:51.062 18:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1186231 00:22:52.002 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:52.002 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:52.002 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:22:52.002 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:52.002 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:22:52.002 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:52.002 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:52.002 rmmod nvme_tcp 00:22:52.002 rmmod nvme_fabrics 00:22:52.002 rmmod nvme_keyring 00:22:52.002 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:52.002 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:22:52.002 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:22:52.002 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@515 -- # '[' -n 1224613 ']' 00:22:52.003 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # killprocess 1224613 00:22:52.003 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1224613 ']' 00:22:52.003 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1224613 00:22:52.003 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:22:52.003 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:52.003 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1224613 00:22:52.003 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:52.003 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:52.003 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1224613' 00:22:52.003 killing process with pid 1224613 00:22:52.003 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1224613 00:22:52.003 18:33:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1224613 00:22:52.262 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:52.262 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:52.262 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:52.262 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:22:52.262 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-save 00:22:52.263 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:52.263 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-restore 00:22:52.263 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:52.263 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:52.263 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:52.263 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:52.263 18:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:54.802 18:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:54.802 18:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.nln /tmp/spdk.key-sha256.dhl /tmp/spdk.key-sha384.7n0 /tmp/spdk.key-sha512.lW9 /tmp/spdk.key-sha512.0i2 /tmp/spdk.key-sha384.v2g /tmp/spdk.key-sha256.NQ9 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:54.802 00:22:54.802 real 6m22.056s 00:22:54.802 user 14m55.732s 00:22:54.802 sys 0m42.460s 00:22:54.802 18:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:54.802 18:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.802 ************************************ 00:22:54.802 END TEST nvmf_auth_target 00:22:54.802 ************************************ 00:22:54.802 18:33:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:22:54.802 18:33:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:54.802 18:33:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:22:54.802 18:33:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:54.802 18:33:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:54.802 ************************************ 00:22:54.802 START TEST nvmf_bdevio_no_huge 00:22:54.802 ************************************ 00:22:54.802 18:33:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:54.802 * Looking for test storage... 
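Just above, before the nvmf_bdevio_no_huge test begins, the auth run cleans up after itself: the host and target SPDK processes are killed by pid, the kernel NVMe/TCP host stack is unloaded (the rmmod messages for nvme_tcp, nvme_fabrics and nvme_keyring come from modprobe -r), the SPDK_NVMF iptables rules and the test interface address are removed, and the generated key files are deleted. Roughly, ignoring the pid bookkeeping and network-namespace teardown:

  # unload the kernel NVMe/TCP host modules pulled in by nvme connect
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics

  # drop the SPDK_NVMF iptables rules and flush the test interface address
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip -4 addr flush cvl_0_1

  # remove the generated DH-HMAC-CHAP key files
  rm -f /tmp/spdk.key-null.nln /tmp/spdk.key-sha256.dhl /tmp/spdk.key-sha384.7n0 \
        /tmp/spdk.key-sha512.lW9 /tmp/spdk.key-sha512.0i2 /tmp/spdk.key-sha384.v2g \
        /tmp/spdk.key-sha256.NQ9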
00:22:54.802 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:54.802 18:33:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:54.802 18:33:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lcov --version 00:22:54.802 18:33:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:54.802 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:54.802 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:54.802 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:54.802 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:54.802 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:22:54.802 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:22:54.802 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:22:54.802 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:22:54.802 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:22:54.802 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:22:54.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:54.803 --rc genhtml_branch_coverage=1 00:22:54.803 --rc genhtml_function_coverage=1 00:22:54.803 --rc genhtml_legend=1 00:22:54.803 --rc geninfo_all_blocks=1 00:22:54.803 --rc geninfo_unexecuted_blocks=1 00:22:54.803 00:22:54.803 ' 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:54.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:54.803 --rc genhtml_branch_coverage=1 00:22:54.803 --rc genhtml_function_coverage=1 00:22:54.803 --rc genhtml_legend=1 00:22:54.803 --rc geninfo_all_blocks=1 00:22:54.803 --rc geninfo_unexecuted_blocks=1 00:22:54.803 00:22:54.803 ' 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:54.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:54.803 --rc genhtml_branch_coverage=1 00:22:54.803 --rc genhtml_function_coverage=1 00:22:54.803 --rc genhtml_legend=1 00:22:54.803 --rc geninfo_all_blocks=1 00:22:54.803 --rc geninfo_unexecuted_blocks=1 00:22:54.803 00:22:54.803 ' 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:54.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:54.803 --rc genhtml_branch_coverage=1 00:22:54.803 --rc genhtml_function_coverage=1 00:22:54.803 --rc genhtml_legend=1 00:22:54.803 --rc geninfo_all_blocks=1 00:22:54.803 --rc geninfo_unexecuted_blocks=1 00:22:54.803 00:22:54.803 ' 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:22:54.803 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:22:54.803 18:33:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:58.090 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:58.090 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:22:58.090 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:58.090 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:58.090 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:58.090 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:58.090 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:58.090 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:22:58.090 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:58.090 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:22:58.090 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:22:58.090 
18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:22:58.090 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:22:58.090 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:22:58.090 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:22:58.090 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:58.090 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:58.090 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:58.090 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:58.090 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:58.090 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:58.090 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:58.090 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:58.090 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:58.090 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:58.090 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:58.090 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:58.090 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:58.090 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:58.090 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:58.090 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:58.090 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:58.090 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:58.090 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:58.090 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:22:58.090 Found 0000:84:00.0 (0x8086 - 0x159b) 00:22:58.090 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:58.090 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:58.090 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:58.090 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:22:58.090 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:58.090 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:58.090 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:22:58.090 Found 0000:84:00.1 (0x8086 - 0x159b) 00:22:58.090 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:58.090 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:58.090 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:58.090 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:58.090 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:58.090 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:58.090 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:58.090 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:58.090 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:58.090 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:58.090 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:58.090 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:58.090 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:58.090 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:58.090 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:58.090 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:22:58.090 Found net devices under 0000:84:00.0: cvl_0_0 00:22:58.090 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:58.090 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:58.090 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:58.090 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:58.090 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:58.090 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:58.090 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:58.090 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:58.090 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:22:58.090 Found net devices under 0000:84:00.1: cvl_0_1 00:22:58.091 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:58.091 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:58.091 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # is_hw=yes 00:22:58.091 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:58.091 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:58.091 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:58.091 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:58.091 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:58.091 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:58.091 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:58.091 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:58.091 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:58.091 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:58.091 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:58.091 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:58.091 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:58.091 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:58.091 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:58.091 18:33:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:58.091 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:58.091 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:58.091 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:58.091 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:58.091 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:58.091 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:58.091 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:58.091 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:58.091 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:58.091 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:58.091 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:58.091 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:22:58.091 00:22:58.091 --- 10.0.0.2 ping statistics --- 00:22:58.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:58.091 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:22:58.091 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:58.091 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:58.091 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:22:58.091 00:22:58.091 --- 10.0.0.1 ping statistics --- 00:22:58.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:58.091 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:22:58.091 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:58.091 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # return 0 00:22:58.091 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:58.091 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:58.091 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:58.091 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:58.091 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:58.091 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:58.091 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:58.091 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:58.091 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:58.091 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:58.091 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:58.091 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # nvmfpid=1232334 00:22:58.091 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:58.091 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # waitforlisten 1232334 00:22:58.091 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 1232334 ']' 00:22:58.091 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:58.091 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@836 -- # local max_retries=100 00:22:58.091 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:58.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:58.091 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:58.091 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:58.091 [2024-10-08 18:33:26.274136] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:22:58.091 [2024-10-08 18:33:26.274256] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:58.091 [2024-10-08 18:33:26.374386] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:58.091 [2024-10-08 18:33:26.503843] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:58.091 [2024-10-08 18:33:26.503906] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:58.091 [2024-10-08 18:33:26.503923] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:58.091 [2024-10-08 18:33:26.503937] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:58.091 [2024-10-08 18:33:26.503959] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:58.091 [2024-10-08 18:33:26.505236] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:22:58.091 [2024-10-08 18:33:26.505306] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:22:58.091 [2024-10-08 18:33:26.505346] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:22:58.091 [2024-10-08 18:33:26.505349] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:22:58.350 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:58.350 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:22:58.350 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:58.350 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:58.350 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:58.350 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:58.350 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:58.350 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.350 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:58.350 [2024-10-08 18:33:26.669996] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:58.350 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.350 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:58.350 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.350 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:58.350 Malloc0 00:22:58.350 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.350 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:58.350 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.350 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:58.350 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.350 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:58.350 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.350 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:58.350 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.350 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:22:58.350 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.350 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:58.350 [2024-10-08 18:33:26.708843] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:58.350 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.350 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:58.350 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:58.350 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # config=() 00:22:58.350 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # local subsystem config 00:22:58.350 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:58.350 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:58.350 { 00:22:58.350 "params": { 00:22:58.350 "name": "Nvme$subsystem", 00:22:58.350 "trtype": "$TEST_TRANSPORT", 00:22:58.350 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:58.350 "adrfam": "ipv4", 00:22:58.350 "trsvcid": "$NVMF_PORT", 00:22:58.350 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:58.350 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:58.350 "hdgst": ${hdgst:-false}, 00:22:58.350 "ddgst": ${ddgst:-false} 00:22:58.350 }, 00:22:58.350 "method": "bdev_nvme_attach_controller" 00:22:58.350 } 00:22:58.350 EOF 00:22:58.350 )") 00:22:58.350 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # cat 00:22:58.350 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # jq . 00:22:58.350 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@583 -- # IFS=, 00:22:58.350 18:33:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:22:58.350 "params": { 00:22:58.350 "name": "Nvme1", 00:22:58.350 "trtype": "tcp", 00:22:58.350 "traddr": "10.0.0.2", 00:22:58.350 "adrfam": "ipv4", 00:22:58.350 "trsvcid": "4420", 00:22:58.350 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:58.350 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:58.350 "hdgst": false, 00:22:58.350 "ddgst": false 00:22:58.350 }, 00:22:58.350 "method": "bdev_nvme_attach_controller" 00:22:58.350 }' 00:22:58.350 [2024-10-08 18:33:26.774833] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
00:22:58.350 [2024-10-08 18:33:26.774925] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1232477 ] 00:22:58.350 [2024-10-08 18:33:26.848735] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:58.608 [2024-10-08 18:33:26.976766] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:22:58.608 [2024-10-08 18:33:26.976821] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:22:58.608 [2024-10-08 18:33:26.976825] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:22:58.866 I/O targets: 00:22:58.866 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:58.866 00:22:58.866 00:22:58.866 CUnit - A unit testing framework for C - Version 2.1-3 00:22:58.866 http://cunit.sourceforge.net/ 00:22:58.866 00:22:58.866 00:22:58.866 Suite: bdevio tests on: Nvme1n1 00:22:58.866 Test: blockdev write read block ...passed 00:22:59.123 Test: blockdev write zeroes read block ...passed 00:22:59.123 Test: blockdev write zeroes read no split ...passed 00:22:59.123 Test: blockdev write zeroes read split ...passed 00:22:59.123 Test: blockdev write zeroes read split partial ...passed 00:22:59.123 Test: blockdev reset ...[2024-10-08 18:33:27.465926] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:59.123 [2024-10-08 18:33:27.466040] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2556d40 (9): Bad file descriptor 00:22:59.123 [2024-10-08 18:33:27.563631] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:59.123 passed 00:22:59.123 Test: blockdev write read 8 blocks ...passed 00:22:59.123 Test: blockdev write read size > 128k ...passed 00:22:59.123 Test: blockdev write read invalid size ...passed 00:22:59.123 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:59.123 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:59.123 Test: blockdev write read max offset ...passed 00:22:59.381 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:59.381 Test: blockdev writev readv 8 blocks ...passed 00:22:59.381 Test: blockdev writev readv 30 x 1block ...passed 00:22:59.381 Test: blockdev writev readv block ...passed 00:22:59.381 Test: blockdev writev readv size > 128k ...passed 00:22:59.382 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:59.382 Test: blockdev comparev and writev ...[2024-10-08 18:33:27.857673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:59.382 [2024-10-08 18:33:27.857709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.382 [2024-10-08 18:33:27.857734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:59.382 [2024-10-08 18:33:27.857752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:59.382 [2024-10-08 18:33:27.858218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:59.382 [2024-10-08 18:33:27.858243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:59.382 [2024-10-08 18:33:27.858267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:59.382 [2024-10-08 18:33:27.858283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:59.382 [2024-10-08 18:33:27.858738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:59.382 [2024-10-08 18:33:27.858763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:59.382 [2024-10-08 18:33:27.858791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:59.382 [2024-10-08 18:33:27.858808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:59.382 [2024-10-08 18:33:27.859231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:59.382 [2024-10-08 18:33:27.859255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:59.382 [2024-10-08 18:33:27.859276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:59.382 [2024-10-08 18:33:27.859292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:59.382 passed 00:22:59.640 Test: blockdev nvme passthru rw ...passed 00:22:59.640 Test: blockdev nvme passthru vendor specific ...[2024-10-08 18:33:27.940987] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:59.640 [2024-10-08 18:33:27.941015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:59.640 [2024-10-08 18:33:27.941170] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:59.640 [2024-10-08 18:33:27.941193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:59.640 [2024-10-08 18:33:27.941340] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:59.640 [2024-10-08 18:33:27.941363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:59.640 [2024-10-08 18:33:27.941506] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:59.640 [2024-10-08 18:33:27.941529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:59.640 passed 00:22:59.640 Test: blockdev nvme admin passthru ...passed 00:22:59.640 Test: blockdev copy ...passed 00:22:59.640 00:22:59.640 Run Summary: Type Total Ran Passed Failed Inactive 00:22:59.640 suites 1 1 n/a 0 0 00:22:59.640 tests 23 23 23 0 0 00:22:59.640 asserts 152 152 152 0 n/a 00:22:59.640 00:22:59.640 Elapsed time = 1.307 seconds 00:22:59.898 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:59.898 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.898 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:59.898 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.898 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:59.898 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:59.898 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:59.898 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:22:59.898 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:59.898 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:22:59.898 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:59.898 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:59.898 rmmod nvme_tcp 00:23:00.156 rmmod nvme_fabrics 00:23:00.156 rmmod nvme_keyring 00:23:00.156 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:00.156 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:23:00.156 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:23:00.156 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@515 -- # '[' -n 1232334 ']' 00:23:00.156 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # killprocess 1232334 00:23:00.156 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 1232334 ']' 00:23:00.156 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 1232334 00:23:00.156 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:23:00.156 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:00.156 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1232334 00:23:00.156 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:23:00.156 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:23:00.156 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1232334' 00:23:00.156 killing process with pid 1232334 00:23:00.156 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 1232334 00:23:00.156 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 1232334 00:23:00.723 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:00.723 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:00.723 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:00.723 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:23:00.723 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-save 00:23:00.723 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:00.723 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-restore 00:23:00.723 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:00.723 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:00.723 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:00.723 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:00.723 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:02.631 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:02.631 00:23:02.631 real 0m8.212s 00:23:02.631 user 0m13.412s 00:23:02.631 sys 0m3.658s 00:23:02.631 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:02.631 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:23:02.631 ************************************ 00:23:02.631 END TEST nvmf_bdevio_no_huge 00:23:02.631 ************************************ 00:23:02.631 18:33:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:02.631 18:33:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:02.631 18:33:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:02.631 18:33:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:02.631 ************************************ 00:23:02.631 START TEST nvmf_tls 00:23:02.631 ************************************ 00:23:02.631 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:02.890 * Looking for test storage... 00:23:02.890 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:02.890 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:02.890 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lcov --version 00:23:02.890 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:02.890 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:02.890 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:02.890 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:02.890 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:02.890 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:23:02.890 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:23:02.890 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:23:02.890 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:23:02.890 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:23:02.890 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:23:02.890 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:23:02.890 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:02.890 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:23:02.890 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:23:02.890 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:02.890 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:02.890 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:23:02.890 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:23:02.890 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:02.890 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:23:02.890 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:23:02.890 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:23:02.890 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:23:02.890 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:02.890 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:23:02.890 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:23:02.890 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:02.890 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:02.890 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:23:02.890 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:02.890 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:02.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:02.890 --rc genhtml_branch_coverage=1 00:23:02.890 --rc genhtml_function_coverage=1 00:23:02.890 --rc genhtml_legend=1 00:23:02.890 --rc geninfo_all_blocks=1 00:23:02.890 --rc geninfo_unexecuted_blocks=1 00:23:02.890 00:23:02.890 ' 00:23:02.890 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:02.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:02.890 --rc genhtml_branch_coverage=1 00:23:02.890 --rc genhtml_function_coverage=1 00:23:02.890 --rc genhtml_legend=1 00:23:02.890 --rc geninfo_all_blocks=1 00:23:02.890 --rc geninfo_unexecuted_blocks=1 00:23:02.890 00:23:02.890 ' 00:23:02.890 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:02.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:02.890 --rc genhtml_branch_coverage=1 00:23:02.890 --rc genhtml_function_coverage=1 00:23:02.890 --rc genhtml_legend=1 00:23:02.890 --rc geninfo_all_blocks=1 00:23:02.890 --rc geninfo_unexecuted_blocks=1 00:23:02.890 00:23:02.890 ' 00:23:02.890 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:02.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:02.890 --rc genhtml_branch_coverage=1 00:23:02.890 --rc genhtml_function_coverage=1 00:23:02.890 --rc genhtml_legend=1 00:23:02.890 --rc geninfo_all_blocks=1 00:23:02.890 --rc geninfo_unexecuted_blocks=1 00:23:02.890 00:23:02.890 ' 00:23:02.890 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:02.890 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:23:02.890 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:23:02.890 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:02.890 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:02.891 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:02.891 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:02.891 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:02.891 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:02.891 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:02.891 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:02.891 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:02.891 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:02.891 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:23:02.891 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:02.891 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:02.891 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:02.891 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:02.891 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:02.891 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:23:02.891 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:02.891 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:02.891 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:02.891 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.891 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.891 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.891 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:23:02.891 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.891 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:23:02.891 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:02.891 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:02.891 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:02.891 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:02.891 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:02.891 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:02.891 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:02.891 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:02.891 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:02.891 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:02.891 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:02.891 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:23:02.891 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:02.891 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:02.891 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:02.891 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:02.891 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:02.891 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:02.891 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:02.891 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:02.891 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:02.891 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:02.891 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:23:02.891 18:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:06.181 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
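
The gather_supported_nvmf_pci_devs trace that starts above (and continues just below with the remaining Mellanox device IDs and the per-device scan) groups PCI addresses into per-family arrays keyed on vendor:device ID, so the run can keep only the family selected by SPDK_TEST_NVMF_NICS=e810. A rough standalone sketch of that bucketing built on lspci output rather than the script's own pci_bus_cache; the device IDs are taken from the trace, while the tool invocation and parsing here are assumptions, not nvmf/common.sh itself:

# Group NIC PCI addresses by family using vendor:device IDs (illustrative).
declare -a e810=() x722=() mlx=()
while read -r addr rest; do
    case "$rest" in
        *\[8086:1592\]*|*\[8086:159b\]*) e810+=("$addr") ;;  # Intel E810
        *\[8086:37d2\]*)                 x722+=("$addr") ;;  # Intel X722
        *\[15b3:*\]*)                    mlx+=("$addr")  ;;  # Mellanox ConnectX
    esac
done < <(lspci -Dnn | grep -i 'ethernet controller')
pci_devs=("${e810[@]}")              # this run keeps only the e810 family
(( ${#pci_devs[@]} )) && printf 'Found %s (e810)\n' "${pci_devs[@]}"
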
00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:23:06.182 Found 0000:84:00.0 (0x8086 - 0x159b) 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:23:06.182 Found 0000:84:00.1 (0x8086 - 0x159b) 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:23:06.182 Found net devices under 0000:84:00.0: cvl_0_0 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:23:06.182 Found net devices under 0000:84:00.1: cvl_0_1 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # is_hw=yes 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:06.182 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:06.182 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.321 ms 00:23:06.182 00:23:06.182 --- 10.0.0.2 ping statistics --- 00:23:06.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:06.182 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:06.182 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:06.182 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:23:06.182 00:23:06.182 --- 10.0.0.1 ping statistics --- 00:23:06.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:06.182 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # return 0 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1234699 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1234699 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1234699 ']' 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:06.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:06.182 18:33:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:06.182 [2024-10-08 18:33:34.443478] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
00:23:06.182 [2024-10-08 18:33:34.443578] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:06.182 [2024-10-08 18:33:34.562736] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.442 [2024-10-08 18:33:34.773444] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:06.442 [2024-10-08 18:33:34.773555] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:06.442 [2024-10-08 18:33:34.773591] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:06.442 [2024-10-08 18:33:34.773622] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:06.442 [2024-10-08 18:33:34.773647] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:06.442 [2024-10-08 18:33:34.775049] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:23:07.381 18:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:07.381 18:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:07.381 18:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:07.381 18:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:07.381 18:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:07.381 18:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:07.381 18:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:23:07.381 18:33:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:23:07.950 true 00:23:07.950 18:33:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:07.950 18:33:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:23:08.210 18:33:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:23:08.210 18:33:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:23:08.210 18:33:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:09.150 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:09.150 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:23:09.150 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:23:09.150 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:23:09.150 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:23:09.410 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:09.410 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:23:09.669 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:23:09.669 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:23:09.669 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:09.669 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:23:10.239 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:23:10.239 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:23:10.239 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:23:10.810 18:33:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:10.810 18:33:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:23:11.378 18:33:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:23:11.378 18:33:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:23:11.378 18:33:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:23:11.948 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:11.948 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:23:12.516 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:23:12.516 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:23:12.516 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:23:12.516 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:23:12.516 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:23:12.516 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:23:12.516 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:23:12.516 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:23:12.516 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:23:12.516 18:33:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:12.516 18:33:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:23:12.516 18:33:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:23:12.516 18:33:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@728 -- # local prefix key digest 00:23:12.516 18:33:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:23:12.516 18:33:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=ffeeddccbbaa99887766554433221100 00:23:12.516 18:33:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:23:12.516 18:33:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:23:12.776 18:33:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:12.776 18:33:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:23:12.776 18:33:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.WddADpWIw6 00:23:12.776 18:33:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:23:12.776 18:33:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.fbTWEGouBg 00:23:12.776 18:33:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:12.776 18:33:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:12.776 18:33:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.WddADpWIw6 00:23:12.776 18:33:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.fbTWEGouBg 00:23:12.776 18:33:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:13.035 18:33:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:23:13.971 18:33:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.WddADpWIw6 00:23:13.971 18:33:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.WddADpWIw6 00:23:13.971 18:33:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:14.536 [2024-10-08 18:33:42.881717] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:14.536 18:33:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:15.104 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:16.090 [2024-10-08 18:33:44.266139] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:16.090 [2024-10-08 18:33:44.266620] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:16.090 18:33:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:16.709 malloc0 00:23:16.709 18:33:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:17.277 18:33:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.WddADpWIw6 00:23:17.537 18:33:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:17.796 18:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.WddADpWIw6 00:23:30.012 Initializing NVMe Controllers 00:23:30.012 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:30.012 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:30.012 Initialization complete. Launching workers. 00:23:30.012 ======================================================== 00:23:30.012 Latency(us) 00:23:30.012 Device Information : IOPS MiB/s Average min max 00:23:30.012 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3783.54 14.78 16927.54 2858.98 21585.46 00:23:30.012 ======================================================== 00:23:30.012 Total : 3783.54 14.78 16927.54 2858.98 21585.46 00:23:30.012 00:23:30.012 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.WddADpWIw6 00:23:30.012 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:30.012 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:30.012 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:30.012 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.WddADpWIw6 00:23:30.012 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:30.012 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1237241 00:23:30.012 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:30.012 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:30.013 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1237241 /var/tmp/bdevperf.sock 00:23:30.013 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1237241 ']' 00:23:30.013 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:30.013 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:30.013 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:23:30.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:30.013 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:30.013 18:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:30.013 [2024-10-08 18:33:56.529088] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:23:30.013 [2024-10-08 18:33:56.529173] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1237241 ] 00:23:30.013 [2024-10-08 18:33:56.629173] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:30.013 [2024-10-08 18:33:56.848848] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:23:30.013 18:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:30.013 18:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:30.013 18:33:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.WddADpWIw6 00:23:30.013 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:30.013 [2024-10-08 18:33:58.515447] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:30.271 TLSTESTn1 00:23:30.271 18:33:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:30.531 Running I/O for 10 seconds... 
00:23:32.403 1474.00 IOPS, 5.76 MiB/s [2024-10-08T16:34:01.872Z] 1496.00 IOPS, 5.84 MiB/s [2024-10-08T16:34:03.246Z] 1482.33 IOPS, 5.79 MiB/s [2024-10-08T16:34:04.184Z] 1488.00 IOPS, 5.81 MiB/s [2024-10-08T16:34:05.123Z] 1489.00 IOPS, 5.82 MiB/s [2024-10-08T16:34:06.062Z] 1482.50 IOPS, 5.79 MiB/s [2024-10-08T16:34:07.002Z] 1483.14 IOPS, 5.79 MiB/s [2024-10-08T16:34:07.942Z] 1482.00 IOPS, 5.79 MiB/s [2024-10-08T16:34:08.883Z] 1478.78 IOPS, 5.78 MiB/s [2024-10-08T16:34:09.141Z] 1480.40 IOPS, 5.78 MiB/s 00:23:40.604 Latency(us) 00:23:40.604 [2024-10-08T16:34:09.141Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:40.604 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:40.604 Verification LBA range: start 0x0 length 0x2000 00:23:40.604 TLSTESTn1 : 10.05 1485.89 5.80 0.00 0.00 85900.18 15825.73 65244.73 00:23:40.604 [2024-10-08T16:34:09.141Z] =================================================================================================================== 00:23:40.604 [2024-10-08T16:34:09.141Z] Total : 1485.89 5.80 0.00 0.00 85900.18 15825.73 65244.73 00:23:40.604 { 00:23:40.604 "results": [ 00:23:40.604 { 00:23:40.604 "job": "TLSTESTn1", 00:23:40.604 "core_mask": "0x4", 00:23:40.604 "workload": "verify", 00:23:40.604 "status": "finished", 00:23:40.604 "verify_range": { 00:23:40.604 "start": 0, 00:23:40.604 "length": 8192 00:23:40.604 }, 00:23:40.604 "queue_depth": 128, 00:23:40.604 "io_size": 4096, 00:23:40.604 "runtime": 10.048516, 00:23:40.604 "iops": 1485.8910509770797, 00:23:40.604 "mibps": 5.804261917879217, 00:23:40.604 "io_failed": 0, 00:23:40.604 "io_timeout": 0, 00:23:40.604 "avg_latency_us": 85900.18009996602, 00:23:40.604 "min_latency_us": 15825.730370370371, 00:23:40.604 "max_latency_us": 65244.72888888889 00:23:40.604 } 00:23:40.604 ], 00:23:40.604 "core_count": 1 00:23:40.604 } 00:23:40.604 18:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:40.604 18:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1237241 00:23:40.604 18:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1237241 ']' 00:23:40.604 18:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1237241 00:23:40.604 18:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:40.604 18:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:40.604 18:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1237241 00:23:40.604 18:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:40.604 18:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:40.604 18:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1237241' 00:23:40.604 killing process with pid 1237241 00:23:40.604 18:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1237241 00:23:40.604 Received shutdown signal, test time was about 10.000000 seconds 00:23:40.604 00:23:40.604 Latency(us) 00:23:40.604 [2024-10-08T16:34:09.141Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:40.604 [2024-10-08T16:34:09.141Z] 
=================================================================================================================== 00:23:40.604 [2024-10-08T16:34:09.141Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:40.604 18:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1237241 00:23:40.864 18:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.fbTWEGouBg 00:23:40.864 18:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:40.864 18:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.fbTWEGouBg 00:23:40.864 18:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:40.864 18:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:40.864 18:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:40.864 18:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:40.864 18:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.fbTWEGouBg 00:23:40.864 18:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:40.864 18:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:40.864 18:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:40.864 18:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.fbTWEGouBg 00:23:41.123 18:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:41.123 18:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1238691 00:23:41.123 18:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:41.123 18:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:41.123 18:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1238691 /var/tmp/bdevperf.sock 00:23:41.123 18:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1238691 ']' 00:23:41.123 18:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:41.123 18:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:41.123 18:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:41.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:41.123 18:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:41.123 18:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:41.123 [2024-10-08 18:34:09.459798] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:23:41.123 [2024-10-08 18:34:09.459898] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1238691 ] 00:23:41.123 [2024-10-08 18:34:09.534827] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:41.382 [2024-10-08 18:34:09.660933] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:23:41.382 18:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:41.382 18:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:41.382 18:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.fbTWEGouBg 00:23:41.951 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:42.209 [2024-10-08 18:34:10.722163] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:42.209 [2024-10-08 18:34:10.733791] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:42.209 [2024-10-08 18:34:10.734033] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe529e0 (107): Transport endpoint is not connected 00:23:42.209 [2024-10-08 18:34:10.735001] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe529e0 (9): Bad file descriptor 00:23:42.209 [2024-10-08 18:34:10.735993] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:42.209 [2024-10-08 18:34:10.736039] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:42.209 [2024-10-08 18:34:10.736073] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:42.210 [2024-10-08 18:34:10.736134] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
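
This attach attempt is meant to fail: the bdevperf client loaded /tmp/tmp.fbTWEGouBg, the second key generated earlier in this block, while the target keyring only holds the first key for nqn.2016-06.io.spdk:host1 on cnode1, so the TLS handshake never completes and the controller ends up in a failed state; the JSON-RPC error dump below records that same failed bdev_nvme_attach_controller call. The test only passes when the command errors out, roughly in this shape (a sketch of the expected-failure check under that assumption, not the tls.sh NOT helper itself):

# Expect bdev_nvme_attach_controller to FAIL when the offered PSK does not
# match what the target holds for this host/subsystem pair. Assumes the
# bdevperf RPC server is up on /var/tmp/bdevperf.sock with the mismatched
# key already loaded as key0, as in the trace above.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
if "$rpc_py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk key0; then
    echo 'FAIL: attach succeeded with a mismatched PSK' >&2
    exit 1
fi
echo 'OK: mismatched PSK was rejected as expected'
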
00:23:42.210 request: 00:23:42.210 { 00:23:42.210 "name": "TLSTEST", 00:23:42.210 "trtype": "tcp", 00:23:42.210 "traddr": "10.0.0.2", 00:23:42.210 "adrfam": "ipv4", 00:23:42.210 "trsvcid": "4420", 00:23:42.210 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:42.210 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:42.210 "prchk_reftag": false, 00:23:42.210 "prchk_guard": false, 00:23:42.210 "hdgst": false, 00:23:42.210 "ddgst": false, 00:23:42.210 "psk": "key0", 00:23:42.210 "allow_unrecognized_csi": false, 00:23:42.210 "method": "bdev_nvme_attach_controller", 00:23:42.210 "req_id": 1 00:23:42.210 } 00:23:42.210 Got JSON-RPC error response 00:23:42.210 response: 00:23:42.210 { 00:23:42.210 "code": -5, 00:23:42.210 "message": "Input/output error" 00:23:42.210 } 00:23:42.469 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1238691 00:23:42.469 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1238691 ']' 00:23:42.469 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1238691 00:23:42.469 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:42.469 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:42.469 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1238691 00:23:42.469 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:42.469 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:42.469 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1238691' 00:23:42.469 killing process with pid 1238691 00:23:42.469 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1238691 00:23:42.469 Received shutdown signal, test time was about 10.000000 seconds 00:23:42.469 00:23:42.469 Latency(us) 00:23:42.469 [2024-10-08T16:34:11.007Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:42.470 [2024-10-08T16:34:11.007Z] =================================================================================================================== 00:23:42.470 [2024-10-08T16:34:11.007Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:42.470 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1238691 00:23:42.728 18:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:42.728 18:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:42.728 18:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:42.728 18:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:42.728 18:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:42.728 18:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.WddADpWIw6 00:23:42.728 18:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:42.728 18:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.WddADpWIw6 00:23:42.728 18:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:42.728 18:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:42.728 18:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:42.728 18:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:42.728 18:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.WddADpWIw6 00:23:42.728 18:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:42.728 18:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:42.728 18:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:42.728 18:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.WddADpWIw6 00:23:42.728 18:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:42.728 18:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1238929 00:23:42.728 18:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:42.728 18:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1238929 /var/tmp/bdevperf.sock 00:23:42.728 18:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1238929 ']' 00:23:42.728 18:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:42.728 18:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:42.728 18:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:42.728 18:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:42.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:42.728 18:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:42.728 18:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:42.728 [2024-10-08 18:34:11.255276] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
00:23:42.728 [2024-10-08 18:34:11.255362] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1238929 ] 00:23:42.988 [2024-10-08 18:34:11.361131] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:43.246 [2024-10-08 18:34:11.574359] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:23:44.180 18:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:44.180 18:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:44.180 18:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.WddADpWIw6 00:23:44.439 18:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:23:45.007 [2024-10-08 18:34:13.301033] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:45.007 [2024-10-08 18:34:13.310702] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:45.007 [2024-10-08 18:34:13.310783] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:45.007 [2024-10-08 18:34:13.310878] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:45.007 [2024-10-08 18:34:13.310960] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18ef9e0 (107): Transport endpoint is not connected 00:23:45.007 [2024-10-08 18:34:13.311892] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18ef9e0 (9): Bad file descriptor 00:23:45.007 [2024-10-08 18:34:13.312885] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.007 [2024-10-08 18:34:13.312936] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:45.007 [2024-10-08 18:34:13.312988] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:45.007 [2024-10-08 18:34:13.313037] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
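
Here the key contents are correct but the host is not: the target looks the PSK up by the TLS identity string (NVMe0R01 <hostnqn> <subnqn>), and nothing was ever registered for nqn.2016-06.io.spdk:host2 against cnode1, so tcp_sock_get_key fails and the connection is dropped before the NVMe layer gets going; the request dump below records the resulting attach error. For that pairing to succeed, the subsystem would need its own host entry bound to a registered key, along the lines of the add_host call traced for host1 earlier in this run (a sketch under the assumption that the already-registered key0 may be reused for a second host; if not, the key file would first be added under a second name):

# Target-side registration the failing host2 attach would need (illustrative).
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
"$rpc_py" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host2 --psk key0
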
00:23:45.007 request: 00:23:45.007 { 00:23:45.007 "name": "TLSTEST", 00:23:45.007 "trtype": "tcp", 00:23:45.007 "traddr": "10.0.0.2", 00:23:45.007 "adrfam": "ipv4", 00:23:45.007 "trsvcid": "4420", 00:23:45.007 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:45.007 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:45.007 "prchk_reftag": false, 00:23:45.007 "prchk_guard": false, 00:23:45.007 "hdgst": false, 00:23:45.007 "ddgst": false, 00:23:45.007 "psk": "key0", 00:23:45.007 "allow_unrecognized_csi": false, 00:23:45.007 "method": "bdev_nvme_attach_controller", 00:23:45.007 "req_id": 1 00:23:45.007 } 00:23:45.007 Got JSON-RPC error response 00:23:45.007 response: 00:23:45.007 { 00:23:45.007 "code": -5, 00:23:45.007 "message": "Input/output error" 00:23:45.007 } 00:23:45.007 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1238929 00:23:45.007 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1238929 ']' 00:23:45.007 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1238929 00:23:45.007 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:45.007 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:45.007 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1238929 00:23:45.007 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:45.007 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:45.007 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1238929' 00:23:45.007 killing process with pid 1238929 00:23:45.007 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1238929 00:23:45.007 Received shutdown signal, test time was about 10.000000 seconds 00:23:45.007 00:23:45.007 Latency(us) 00:23:45.007 [2024-10-08T16:34:13.544Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:45.007 [2024-10-08T16:34:13.544Z] =================================================================================================================== 00:23:45.007 [2024-10-08T16:34:13.544Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:45.007 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1238929 00:23:45.267 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:45.267 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:45.267 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:45.267 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:45.267 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:45.267 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.WddADpWIw6 00:23:45.267 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:45.267 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.WddADpWIw6 00:23:45.267 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:45.267 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:45.267 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:45.267 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:45.267 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.WddADpWIw6 00:23:45.267 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:45.267 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:45.267 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:45.267 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.WddADpWIw6 00:23:45.267 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:45.267 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1239239 00:23:45.267 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:45.267 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:45.267 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1239239 /var/tmp/bdevperf.sock 00:23:45.267 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1239239 ']' 00:23:45.267 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:45.267 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:45.267 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:45.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:45.267 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:45.267 18:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:45.527 [2024-10-08 18:34:13.817939] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
00:23:45.527 [2024-10-08 18:34:13.818041] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1239239 ] 00:23:45.527 [2024-10-08 18:34:13.929245] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:45.786 [2024-10-08 18:34:14.114098] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:23:45.786 18:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:45.786 18:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:45.786 18:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.WddADpWIw6 00:23:46.355 18:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:46.614 [2024-10-08 18:34:15.110557] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:46.614 [2024-10-08 18:34:15.120188] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:46.614 [2024-10-08 18:34:15.120265] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:46.614 [2024-10-08 18:34:15.120357] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:46.614 [2024-10-08 18:34:15.120478] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11679e0 (107): Transport endpoint is not connected 00:23:46.615 [2024-10-08 18:34:15.121449] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11679e0 (9): Bad file descriptor 00:23:46.615 [2024-10-08 18:34:15.122441] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:46.615 [2024-10-08 18:34:15.122508] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:46.615 [2024-10-08 18:34:15.122545] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:23:46.615 [2024-10-08 18:34:15.122592] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
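Note: this second attempt (host1 against cnode2, still with a PSK the target does not know about) fails the same way; rpc.py prints each failed call as a "request:" block followed by the JSON-RPC error, as at 00:23:45 above and again just below. For reference, such a call can be replayed against the bdevperf control socket with only the Python standard library. This is a sketch under the assumption that SPDK's JSON-RPC server on the UNIX socket accepts a single unframed JSON object and answers with one JSON object, which is how scripts/rpc.py behaves; rpc_call is a made-up helper name:

    import json, socket

    def rpc_call(sock_path, method, params, req_id=1):
        # Send one JSON-RPC 2.0 request and read until a complete JSON object
        # has arrived, then return the decoded response.
        req = {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}
        decoder = json.JSONDecoder()
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(sock_path)
            s.sendall(json.dumps(req).encode())
            buf = b""
            while True:
                chunk = s.recv(4096)
                if not chunk:
                    raise ConnectionError("socket closed before a full response arrived")
                buf += chunk
                try:
                    resp, _ = decoder.raw_decode(buf.decode())
                    return resp
                except ValueError:
                    continue  # partial JSON, keep reading

    # Parameters mirror the request dump below; for this negative test the
    # response carries {"code": -5, "message": "Input/output error"}.
    print(rpc_call("/var/tmp/bdevperf.sock", "bdev_nvme_attach_controller", {
        "name": "TLSTEST", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
        "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode2",
        "hostnqn": "nqn.2016-06.io.spdk:host1", "psk": "key0",
    }))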
00:23:46.615 request: 00:23:46.615 { 00:23:46.615 "name": "TLSTEST", 00:23:46.615 "trtype": "tcp", 00:23:46.615 "traddr": "10.0.0.2", 00:23:46.615 "adrfam": "ipv4", 00:23:46.615 "trsvcid": "4420", 00:23:46.615 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:46.615 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:46.615 "prchk_reftag": false, 00:23:46.615 "prchk_guard": false, 00:23:46.615 "hdgst": false, 00:23:46.615 "ddgst": false, 00:23:46.615 "psk": "key0", 00:23:46.615 "allow_unrecognized_csi": false, 00:23:46.615 "method": "bdev_nvme_attach_controller", 00:23:46.615 "req_id": 1 00:23:46.615 } 00:23:46.615 Got JSON-RPC error response 00:23:46.615 response: 00:23:46.615 { 00:23:46.615 "code": -5, 00:23:46.615 "message": "Input/output error" 00:23:46.615 } 00:23:46.615 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1239239 00:23:46.615 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1239239 ']' 00:23:46.615 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1239239 00:23:46.615 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:46.615 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:46.615 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1239239 00:23:46.873 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:46.873 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:46.873 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1239239' 00:23:46.873 killing process with pid 1239239 00:23:46.873 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1239239 00:23:46.873 Received shutdown signal, test time was about 10.000000 seconds 00:23:46.873 00:23:46.873 Latency(us) 00:23:46.873 [2024-10-08T16:34:15.410Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:46.873 [2024-10-08T16:34:15.410Z] =================================================================================================================== 00:23:46.873 [2024-10-08T16:34:15.410Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:46.873 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1239239 00:23:47.131 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:47.131 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:47.131 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:47.131 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:47.131 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:47.131 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:47.131 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:47.131 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:47.131 
18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:47.131 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:47.131 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:47.131 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:47.131 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:47.131 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:47.131 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:47.131 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:47.131 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:47.131 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:47.131 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1239394 00:23:47.131 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:47.131 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:47.131 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1239394 /var/tmp/bdevperf.sock 00:23:47.131 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1239394 ']' 00:23:47.131 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:47.131 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:47.131 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:47.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:47.131 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:47.131 18:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:47.131 [2024-10-08 18:34:15.642928] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
00:23:47.131 [2024-10-08 18:34:15.643106] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1239394 ] 00:23:47.390 [2024-10-08 18:34:15.735366] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:47.390 [2024-10-08 18:34:15.848008] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:23:47.647 18:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:47.647 18:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:47.647 18:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:23:48.222 [2024-10-08 18:34:16.454770] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:23:48.222 [2024-10-08 18:34:16.454864] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:48.222 request: 00:23:48.222 { 00:23:48.222 "name": "key0", 00:23:48.222 "path": "", 00:23:48.222 "method": "keyring_file_add_key", 00:23:48.222 "req_id": 1 00:23:48.222 } 00:23:48.222 Got JSON-RPC error response 00:23:48.222 response: 00:23:48.222 { 00:23:48.222 "code": -1, 00:23:48.222 "message": "Operation not permitted" 00:23:48.222 } 00:23:48.222 18:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:48.480 [2024-10-08 18:34:16.836122] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:48.480 [2024-10-08 18:34:16.836235] bdev_nvme.c:6391:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:48.480 request: 00:23:48.480 { 00:23:48.480 "name": "TLSTEST", 00:23:48.480 "trtype": "tcp", 00:23:48.480 "traddr": "10.0.0.2", 00:23:48.480 "adrfam": "ipv4", 00:23:48.480 "trsvcid": "4420", 00:23:48.480 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:48.480 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:48.480 "prchk_reftag": false, 00:23:48.480 "prchk_guard": false, 00:23:48.480 "hdgst": false, 00:23:48.480 "ddgst": false, 00:23:48.480 "psk": "key0", 00:23:48.480 "allow_unrecognized_csi": false, 00:23:48.480 "method": "bdev_nvme_attach_controller", 00:23:48.480 "req_id": 1 00:23:48.480 } 00:23:48.480 Got JSON-RPC error response 00:23:48.480 response: 00:23:48.480 { 00:23:48.480 "code": -126, 00:23:48.480 "message": "Required key not available" 00:23:48.480 } 00:23:48.480 18:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1239394 00:23:48.480 18:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1239394 ']' 00:23:48.480 18:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1239394 00:23:48.480 18:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:48.480 18:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:48.480 18:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 
1239394 00:23:48.480 18:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:48.480 18:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:48.480 18:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1239394' 00:23:48.480 killing process with pid 1239394 00:23:48.480 18:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1239394 00:23:48.480 Received shutdown signal, test time was about 10.000000 seconds 00:23:48.480 00:23:48.480 Latency(us) 00:23:48.480 [2024-10-08T16:34:17.017Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:48.480 [2024-10-08T16:34:17.018Z] =================================================================================================================== 00:23:48.481 [2024-10-08T16:34:17.018Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:48.481 18:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1239394 00:23:49.046 18:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:49.046 18:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:49.046 18:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:49.046 18:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:49.046 18:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:49.046 18:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 1234699 00:23:49.046 18:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1234699 ']' 00:23:49.046 18:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1234699 00:23:49.046 18:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:49.046 18:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:49.046 18:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1234699 00:23:49.046 18:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:49.046 18:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:49.046 18:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1234699' 00:23:49.046 killing process with pid 1234699 00:23:49.046 18:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1234699 00:23:49.046 18:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1234699 00:23:49.303 18:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:49.303 18:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:49.303 18:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:23:49.303 18:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:23:49.303 18:34:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:49.303 18:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=2 00:23:49.303 18:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:23:49.303 18:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:49.303 18:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:23:49.303 18:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.zTtb9KJrKD 00:23:49.303 18:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:49.303 18:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.zTtb9KJrKD 00:23:49.303 18:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:23:49.303 18:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:49.303 18:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:49.303 18:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:49.303 18:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1239674 00:23:49.303 18:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:49.303 18:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1239674 00:23:49.303 18:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1239674 ']' 00:23:49.303 18:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:49.303 18:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:49.303 18:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:49.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:49.303 18:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:49.303 18:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:49.303 [2024-10-08 18:34:17.798627] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:23:49.303 [2024-10-08 18:34:17.798745] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:49.563 [2024-10-08 18:34:17.867906] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:49.563 [2024-10-08 18:34:17.972664] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:49.563 [2024-10-08 18:34:17.972722] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
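Note: the format_interchange_psk call at 00:23:49 above turns the 48-character configured key and digest 2 into the key_long value NVMeTLSkey-1:02:...: that is written to /tmp/tmp.zTtb9KJrKD and chmod'ed to 0600. Decoding the base64 in that value yields the 48 key characters plus four extra trailing bytes, consistent with a CRC32 of the key being appended. The sketch below reproduces that layout; the little-endian placement of the CRC is an assumption, and the shell helper itself does the real work through an inline python snippet at nvmf/common.sh@731 that is not shown in full in this log:

    import base64, struct, zlib

    def format_interchange_psk(key: bytes, hash_id: int) -> str:
        # "NVMeTLSkey-1:<hh>:" + base64(key bytes + CRC32 of the key bytes) + ":"
        # Little-endian CRC placement is an assumption, not read from this log.
        crc = struct.pack("<I", zlib.crc32(key))
        return "NVMeTLSkey-1:%02d:%s:" % (hash_id, base64.b64encode(key + crc).decode())

    configured = b"00112233445566778899aabbccddeeff0011223344556677"
    print(format_interchange_psk(configured, 2))
    # expected to match the key_long value logged above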
00:23:49.563 [2024-10-08 18:34:17.972751] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:49.563 [2024-10-08 18:34:17.972763] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:49.563 [2024-10-08 18:34:17.972774] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:49.563 [2024-10-08 18:34:17.973479] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:23:49.822 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:49.822 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:49.822 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:49.822 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:49.822 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:49.822 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:49.822 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.zTtb9KJrKD 00:23:49.822 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.zTtb9KJrKD 00:23:49.822 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:50.391 [2024-10-08 18:34:18.733898] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:50.391 18:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:50.957 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:51.216 [2024-10-08 18:34:19.593094] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:51.216 [2024-10-08 18:34:19.593540] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:51.216 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:51.786 malloc0 00:23:51.786 18:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:52.355 18:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.zTtb9KJrKD 00:23:52.614 18:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:53.554 18:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zTtb9KJrKD 00:23:53.554 18:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:23:53.554 18:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:53.554 18:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:53.554 18:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.zTtb9KJrKD 00:23:53.554 18:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:53.554 18:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1240163 00:23:53.554 18:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:53.554 18:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:53.554 18:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1240163 /var/tmp/bdevperf.sock 00:23:53.554 18:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1240163 ']' 00:23:53.554 18:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:53.554 18:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:53.554 18:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:53.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:53.554 18:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:53.554 18:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:53.554 [2024-10-08 18:34:21.901278] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
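Note: unlike the earlier attempts, the target was just configured with key0 for nqn.2016-06.io.spdk:host1 (nvmf_subsystem_add_host --psk key0 above), so this bdevperf attach succeeds: a TLSTESTn1 bdev is created and a 10-second verify workload runs, with per-job results printed as JSON further below. A small sketch for pulling the headline numbers back out of that results document, assuming it has been saved to a file named bdevperf_results.json (a name made up here; the object mirrors the "results"/"core_count" structure in this log):

    import json

    # Hypothetical file name; contents mirror the results object printed below.
    with open("bdevperf_results.json") as f:
        doc = json.load(f)

    for job in doc["results"]:
        print("%s: %.0f IOPS, %.2f MiB/s, avg latency %.2f ms"
              % (job["job"], job["iops"], job["mibps"], job["avg_latency_us"] / 1000.0))
    # e.g. TLSTESTn1: 1956 IOPS, 7.64 MiB/s, avg latency 65.27 ms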
00:23:53.554 [2024-10-08 18:34:21.901380] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1240163 ] 00:23:53.554 [2024-10-08 18:34:22.005130] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:53.845 [2024-10-08 18:34:22.219616] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:23:54.848 18:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:54.848 18:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:54.848 18:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zTtb9KJrKD 00:23:55.416 18:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:55.983 [2024-10-08 18:34:24.435014] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:56.242 TLSTESTn1 00:23:56.242 18:34:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:56.242 Running I/O for 10 seconds... 00:23:58.560 1531.00 IOPS, 5.98 MiB/s [2024-10-08T16:34:28.035Z] 1555.50 IOPS, 6.08 MiB/s [2024-10-08T16:34:28.971Z] 1531.67 IOPS, 5.98 MiB/s [2024-10-08T16:34:29.907Z] 1630.75 IOPS, 6.37 MiB/s [2024-10-08T16:34:30.842Z] 1796.60 IOPS, 7.02 MiB/s [2024-10-08T16:34:31.779Z] 1876.33 IOPS, 7.33 MiB/s [2024-10-08T16:34:32.715Z] 1943.00 IOPS, 7.59 MiB/s [2024-10-08T16:34:34.097Z] 1997.88 IOPS, 7.80 MiB/s [2024-10-08T16:34:35.035Z] 1995.11 IOPS, 7.79 MiB/s [2024-10-08T16:34:35.035Z] 1951.50 IOPS, 7.62 MiB/s 00:24:06.498 Latency(us) 00:24:06.498 [2024-10-08T16:34:35.035Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:06.498 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:06.498 Verification LBA range: start 0x0 length 0x2000 00:24:06.498 TLSTESTn1 : 10.04 1955.84 7.64 0.00 0.00 65266.64 10874.12 61749.48 00:24:06.498 [2024-10-08T16:34:35.035Z] =================================================================================================================== 00:24:06.498 [2024-10-08T16:34:35.035Z] Total : 1955.84 7.64 0.00 0.00 65266.64 10874.12 61749.48 00:24:06.498 { 00:24:06.498 "results": [ 00:24:06.498 { 00:24:06.498 "job": "TLSTESTn1", 00:24:06.498 "core_mask": "0x4", 00:24:06.498 "workload": "verify", 00:24:06.498 "status": "finished", 00:24:06.498 "verify_range": { 00:24:06.498 "start": 0, 00:24:06.498 "length": 8192 00:24:06.498 }, 00:24:06.498 "queue_depth": 128, 00:24:06.498 "io_size": 4096, 00:24:06.498 "runtime": 10.042741, 00:24:06.498 "iops": 1955.8405419396956, 00:24:06.498 "mibps": 7.640002116951936, 00:24:06.498 "io_failed": 0, 00:24:06.498 "io_timeout": 0, 00:24:06.498 "avg_latency_us": 65266.63502109991, 00:24:06.498 "min_latency_us": 10874.121481481481, 00:24:06.498 "max_latency_us": 61749.47555555555 00:24:06.498 } 00:24:06.498 ], 00:24:06.498 "core_count": 1 
00:24:06.498 } 00:24:06.498 18:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:06.498 18:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1240163 00:24:06.498 18:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1240163 ']' 00:24:06.498 18:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1240163 00:24:06.498 18:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:06.498 18:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:06.498 18:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1240163 00:24:06.498 18:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:06.498 18:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:06.498 18:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1240163' 00:24:06.498 killing process with pid 1240163 00:24:06.498 18:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1240163 00:24:06.498 Received shutdown signal, test time was about 10.000000 seconds 00:24:06.498 00:24:06.498 Latency(us) 00:24:06.498 [2024-10-08T16:34:35.035Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:06.498 [2024-10-08T16:34:35.035Z] =================================================================================================================== 00:24:06.498 [2024-10-08T16:34:35.035Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:06.498 18:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1240163 00:24:07.067 18:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.zTtb9KJrKD 00:24:07.067 18:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zTtb9KJrKD 00:24:07.067 18:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:24:07.067 18:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zTtb9KJrKD 00:24:07.067 18:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:24:07.067 18:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:07.067 18:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:24:07.067 18:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:07.067 18:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zTtb9KJrKD 00:24:07.067 18:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:07.067 18:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:07.067 18:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:07.067 18:34:35 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.zTtb9KJrKD 00:24:07.067 18:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:07.067 18:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1241680 00:24:07.067 18:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:07.067 18:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:07.067 18:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1241680 /var/tmp/bdevperf.sock 00:24:07.067 18:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1241680 ']' 00:24:07.067 18:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:07.067 18:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:07.067 18:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:07.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:07.067 18:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:07.067 18:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:07.067 [2024-10-08 18:34:35.415637] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
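Note: for the next negative case the test loosens the key file to mode 0666 (chmod at 00:24:07 above) and wraps run_bdevperf in NOT, i.e. it expects the run to fail. As the keyring errors further below show ("Invalid permissions for key file '/tmp/tmp.zTtb9KJrKD': 0100666"), keyring_file refuses key files that are accessible to group or others. A sketch of an equivalent pre-check; treating "no group/other permission bits at all" as the rule is an inference from that error text rather than something the log spells out:

    import os, stat

    def key_file_permissions_ok(path: str) -> bool:
        # Reject key files readable or writable by group or others, mirroring
        # the "Invalid permissions ... 0100666" rejection below.
        mode = os.stat(path).st_mode
        return (mode & (stat.S_IRWXG | stat.S_IRWXO)) == 0

    print(key_file_permissions_ok("/tmp/tmp.zTtb9KJrKD"))  # False while the file is mode 0666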
00:24:07.067 [2024-10-08 18:34:35.415858] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1241680 ] 00:24:07.067 [2024-10-08 18:34:35.547978] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.326 [2024-10-08 18:34:35.745824] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:24:07.586 18:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:07.586 18:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:07.586 18:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zTtb9KJrKD 00:24:08.154 [2024-10-08 18:34:36.539135] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.zTtb9KJrKD': 0100666 00:24:08.154 [2024-10-08 18:34:36.539229] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:08.154 request: 00:24:08.154 { 00:24:08.155 "name": "key0", 00:24:08.155 "path": "/tmp/tmp.zTtb9KJrKD", 00:24:08.155 "method": "keyring_file_add_key", 00:24:08.155 "req_id": 1 00:24:08.155 } 00:24:08.155 Got JSON-RPC error response 00:24:08.155 response: 00:24:08.155 { 00:24:08.155 "code": -1, 00:24:08.155 "message": "Operation not permitted" 00:24:08.155 } 00:24:08.155 18:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:08.721 [2024-10-08 18:34:37.157127] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:08.721 [2024-10-08 18:34:37.157254] bdev_nvme.c:6391:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:24:08.721 request: 00:24:08.721 { 00:24:08.721 "name": "TLSTEST", 00:24:08.721 "trtype": "tcp", 00:24:08.721 "traddr": "10.0.0.2", 00:24:08.721 "adrfam": "ipv4", 00:24:08.721 "trsvcid": "4420", 00:24:08.721 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:08.721 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:08.721 "prchk_reftag": false, 00:24:08.721 "prchk_guard": false, 00:24:08.721 "hdgst": false, 00:24:08.721 "ddgst": false, 00:24:08.721 "psk": "key0", 00:24:08.721 "allow_unrecognized_csi": false, 00:24:08.721 "method": "bdev_nvme_attach_controller", 00:24:08.721 "req_id": 1 00:24:08.721 } 00:24:08.721 Got JSON-RPC error response 00:24:08.721 response: 00:24:08.721 { 00:24:08.721 "code": -126, 00:24:08.721 "message": "Required key not available" 00:24:08.721 } 00:24:08.721 18:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1241680 00:24:08.721 18:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1241680 ']' 00:24:08.721 18:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1241680 00:24:08.721 18:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:08.721 18:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:08.721 18:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1241680 00:24:08.980 18:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:08.980 18:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:08.980 18:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1241680' 00:24:08.980 killing process with pid 1241680 00:24:08.980 18:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1241680 00:24:08.980 Received shutdown signal, test time was about 10.000000 seconds 00:24:08.980 00:24:08.980 Latency(us) 00:24:08.980 [2024-10-08T16:34:37.517Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:08.980 [2024-10-08T16:34:37.517Z] =================================================================================================================== 00:24:08.980 [2024-10-08T16:34:37.517Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:08.980 18:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1241680 00:24:09.238 18:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:09.238 18:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:24:09.238 18:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:09.238 18:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:09.238 18:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:09.238 18:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 1239674 00:24:09.238 18:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1239674 ']' 00:24:09.238 18:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1239674 00:24:09.238 18:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:09.238 18:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:09.238 18:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1239674 00:24:09.238 18:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:09.238 18:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:09.238 18:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1239674' 00:24:09.238 killing process with pid 1239674 00:24:09.238 18:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1239674 00:24:09.238 18:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1239674 00:24:09.805 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:24:09.805 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:09.805 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:09.805 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:09.805 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # 
nvmfpid=1242047 00:24:09.805 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:09.805 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1242047 00:24:09.805 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1242047 ']' 00:24:09.805 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:09.805 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:09.805 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:09.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:09.805 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:09.805 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:09.805 [2024-10-08 18:34:38.330294] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:24:09.805 [2024-10-08 18:34:38.330388] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:10.064 [2024-10-08 18:34:38.406324] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:10.064 [2024-10-08 18:34:38.531428] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:10.064 [2024-10-08 18:34:38.531492] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:10.064 [2024-10-08 18:34:38.531510] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:10.064 [2024-10-08 18:34:38.531524] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:10.064 [2024-10-08 18:34:38.531545] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
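Note: because the key file is still world-accessible at this point, the keyring_file_add_key issued against the new target below fails again with -1, and the following nvmf_subsystem_add_host --psk key0 then fails with -32603 since no key named key0 was ever added. For quick reference, these are the JSON-RPC error codes that appear across this test sequence and the situations that produced them, collected from the request/response dumps in this log (not an exhaustive mapping):

    # Codes observed in this run only; descriptions paraphrase the dumps in this log.
    OBSERVED_RPC_ERRORS = {
        -1:     "Operation not permitted: keyring_file_add_key with an empty path or a 0666-mode key file",
        -5:     "Input/output error: bdev_nvme_attach_controller when the target has no PSK for the identity",
        -126:   "Required key not available: bdev_nvme_attach_controller when the named key was never loaded",
        -32603: "Internal error: nvmf_subsystem_add_host referencing a key that does not exist",
    }

    def explain(code: int) -> str:
        return OBSERVED_RPC_ERRORS.get(code, "not seen in this log")

    print(explain(-32603))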
00:24:10.064 [2024-10-08 18:34:38.532278] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:24:10.323 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:10.323 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:10.323 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:10.323 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:10.323 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:10.323 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:10.323 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.zTtb9KJrKD 00:24:10.323 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:24:10.323 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.zTtb9KJrKD 00:24:10.323 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:24:10.323 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:10.323 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:24:10.323 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:10.323 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.zTtb9KJrKD 00:24:10.323 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.zTtb9KJrKD 00:24:10.323 18:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:10.892 [2024-10-08 18:34:39.364074] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:10.892 18:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:11.461 18:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:12.400 [2024-10-08 18:34:40.568858] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:12.400 [2024-10-08 18:34:40.569343] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:12.400 18:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:12.970 malloc0 00:24:12.970 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:13.538 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.zTtb9KJrKD 00:24:13.797 [2024-10-08 
18:34:42.260481] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.zTtb9KJrKD': 0100666 00:24:13.797 [2024-10-08 18:34:42.260570] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:13.797 request: 00:24:13.797 { 00:24:13.797 "name": "key0", 00:24:13.797 "path": "/tmp/tmp.zTtb9KJrKD", 00:24:13.797 "method": "keyring_file_add_key", 00:24:13.797 "req_id": 1 00:24:13.797 } 00:24:13.797 Got JSON-RPC error response 00:24:13.797 response: 00:24:13.797 { 00:24:13.797 "code": -1, 00:24:13.797 "message": "Operation not permitted" 00:24:13.797 } 00:24:13.797 18:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:14.736 [2024-10-08 18:34:42.954752] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:24:14.736 [2024-10-08 18:34:42.954828] subsystem.c:1055:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:24:14.736 request: 00:24:14.736 { 00:24:14.736 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:14.736 "host": "nqn.2016-06.io.spdk:host1", 00:24:14.736 "psk": "key0", 00:24:14.736 "method": "nvmf_subsystem_add_host", 00:24:14.736 "req_id": 1 00:24:14.736 } 00:24:14.736 Got JSON-RPC error response 00:24:14.736 response: 00:24:14.736 { 00:24:14.736 "code": -32603, 00:24:14.736 "message": "Internal error" 00:24:14.736 } 00:24:14.736 18:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:24:14.736 18:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:14.736 18:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:14.736 18:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:14.736 18:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 1242047 00:24:14.736 18:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1242047 ']' 00:24:14.736 18:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1242047 00:24:14.736 18:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:14.736 18:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:14.736 18:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1242047 00:24:14.736 18:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:14.736 18:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:14.736 18:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1242047' 00:24:14.736 killing process with pid 1242047 00:24:14.736 18:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1242047 00:24:14.736 18:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1242047 00:24:14.994 18:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.zTtb9KJrKD 00:24:14.994 18:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:24:14.994 18:34:43 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:14.994 18:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:14.994 18:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:14.994 18:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1242636 00:24:14.994 18:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:14.994 18:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1242636 00:24:14.994 18:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1242636 ']' 00:24:14.994 18:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:14.994 18:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:14.994 18:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:14.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:14.994 18:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:14.995 18:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:15.254 [2024-10-08 18:34:43.562430] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:24:15.254 [2024-10-08 18:34:43.562540] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:15.254 [2024-10-08 18:34:43.686088] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:15.514 [2024-10-08 18:34:43.909598] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:15.514 [2024-10-08 18:34:43.909730] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:15.514 [2024-10-08 18:34:43.909768] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:15.514 [2024-10-08 18:34:43.909814] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:15.514 [2024-10-08 18:34:43.909870] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
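Note: before this target instance was started, the key file was put back to mode 0600 (chmod at 00:24:14 above), so the configuration below finally goes through end to end. Collected in one place, the target-side RPC sequence that setup_nvmf_tgt drives below looks roughly like this; it reuses the rpc_call helper sketched earlier, the JSON field names are my mapping of the rpc.py command lines onto their parameters (only keyring_file_add_key and nvmf_subsystem_add_host are dumped verbatim anywhere in this log), and options not visible in the log, such as the -o flag on nvmf_create_transport, are left out:

    # Sketch only: parameter spellings follow the rpc.py command lines below and
    # may not match the exact JSON schema; unlisted options stay at defaults.
    SETUP = [
        ("nvmf_create_transport",       {"trtype": "tcp"}),
        ("nvmf_create_subsystem",       {"nqn": "nqn.2016-06.io.spdk:cnode1",
                                         "serial_number": "SPDK00000000000001",
                                         "max_namespaces": 10}),
        ("nvmf_subsystem_add_listener", {"nqn": "nqn.2016-06.io.spdk:cnode1",
                                         "secure_channel": True,      # the -k flag (TLS listener)
                                         "listen_address": {"trtype": "tcp",
                                                            "traddr": "10.0.0.2",
                                                            "trsvcid": "4420"}}),
        ("bdev_malloc_create",          {"name": "malloc0",
                                         "num_blocks": 8192,          # 32 MiB of 4096-byte blocks
                                         "block_size": 4096}),
        ("nvmf_subsystem_add_ns",       {"nqn": "nqn.2016-06.io.spdk:cnode1",
                                         "namespace": {"bdev_name": "malloc0", "nsid": 1}}),
        ("keyring_file_add_key",        {"name": "key0", "path": "/tmp/tmp.zTtb9KJrKD"}),
        ("nvmf_subsystem_add_host",     {"nqn": "nqn.2016-06.io.spdk:cnode1",
                                         "host": "nqn.2016-06.io.spdk:host1",
                                         "psk": "key0"}),
    ]
    for method, params in SETUP:
        rpc_call("/var/tmp/spdk.sock", method, params)  # rpc_call sketched earlier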
00:24:15.514 [2024-10-08 18:34:43.911210] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:24:15.774 18:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:15.774 18:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:15.774 18:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:15.774 18:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:15.774 18:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:15.774 18:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:15.774 18:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.zTtb9KJrKD 00:24:15.774 18:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.zTtb9KJrKD 00:24:15.774 18:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:16.342 [2024-10-08 18:34:44.744780] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:16.343 18:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:17.278 18:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:17.278 [2024-10-08 18:34:45.808964] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:17.278 [2024-10-08 18:34:45.809415] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:17.538 18:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:18.106 malloc0 00:24:18.106 18:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:18.675 18:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.zTtb9KJrKD 00:24:19.612 18:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:19.871 18:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=1243186 00:24:19.872 18:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:19.872 18:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:19.872 18:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 1243186 /var/tmp/bdevperf.sock 00:24:19.872 18:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@831 -- # '[' -z 1243186 ']' 00:24:19.872 18:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:19.872 18:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:19.872 18:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:19.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:19.872 18:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:19.872 18:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:19.872 [2024-10-08 18:34:48.354281] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:24:19.872 [2024-10-08 18:34:48.354397] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1243186 ] 00:24:20.131 [2024-10-08 18:34:48.467175] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:20.391 [2024-10-08 18:34:48.691153] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:24:21.326 18:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:21.326 18:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:21.326 18:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zTtb9KJrKD 00:24:21.893 18:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:22.152 [2024-10-08 18:34:50.571168] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:22.152 TLSTESTn1 00:24:22.152 18:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:24:23.090 18:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:24:23.090 "subsystems": [ 00:24:23.090 { 00:24:23.090 "subsystem": "keyring", 00:24:23.090 "config": [ 00:24:23.090 { 00:24:23.090 "method": "keyring_file_add_key", 00:24:23.090 "params": { 00:24:23.090 "name": "key0", 00:24:23.090 "path": "/tmp/tmp.zTtb9KJrKD" 00:24:23.090 } 00:24:23.090 } 00:24:23.090 ] 00:24:23.090 }, 00:24:23.090 { 00:24:23.090 "subsystem": "iobuf", 00:24:23.090 "config": [ 00:24:23.090 { 00:24:23.090 "method": "iobuf_set_options", 00:24:23.090 "params": { 00:24:23.090 "small_pool_count": 8192, 00:24:23.090 "large_pool_count": 1024, 00:24:23.090 "small_bufsize": 8192, 00:24:23.090 "large_bufsize": 135168 00:24:23.090 } 00:24:23.090 } 00:24:23.090 ] 00:24:23.090 }, 00:24:23.091 { 00:24:23.091 "subsystem": "sock", 00:24:23.091 "config": [ 00:24:23.091 { 00:24:23.091 "method": "sock_set_default_impl", 00:24:23.091 "params": { 00:24:23.091 "impl_name": "posix" 00:24:23.091 } 00:24:23.091 }, 
00:24:23.091 { 00:24:23.091 "method": "sock_impl_set_options", 00:24:23.091 "params": { 00:24:23.091 "impl_name": "ssl", 00:24:23.091 "recv_buf_size": 4096, 00:24:23.091 "send_buf_size": 4096, 00:24:23.091 "enable_recv_pipe": true, 00:24:23.091 "enable_quickack": false, 00:24:23.091 "enable_placement_id": 0, 00:24:23.091 "enable_zerocopy_send_server": true, 00:24:23.091 "enable_zerocopy_send_client": false, 00:24:23.091 "zerocopy_threshold": 0, 00:24:23.091 "tls_version": 0, 00:24:23.091 "enable_ktls": false 00:24:23.091 } 00:24:23.091 }, 00:24:23.091 { 00:24:23.091 "method": "sock_impl_set_options", 00:24:23.091 "params": { 00:24:23.091 "impl_name": "posix", 00:24:23.091 "recv_buf_size": 2097152, 00:24:23.091 "send_buf_size": 2097152, 00:24:23.091 "enable_recv_pipe": true, 00:24:23.091 "enable_quickack": false, 00:24:23.091 "enable_placement_id": 0, 00:24:23.091 "enable_zerocopy_send_server": true, 00:24:23.091 "enable_zerocopy_send_client": false, 00:24:23.091 "zerocopy_threshold": 0, 00:24:23.091 "tls_version": 0, 00:24:23.091 "enable_ktls": false 00:24:23.091 } 00:24:23.091 } 00:24:23.091 ] 00:24:23.091 }, 00:24:23.091 { 00:24:23.091 "subsystem": "vmd", 00:24:23.091 "config": [] 00:24:23.091 }, 00:24:23.091 { 00:24:23.091 "subsystem": "accel", 00:24:23.091 "config": [ 00:24:23.091 { 00:24:23.091 "method": "accel_set_options", 00:24:23.091 "params": { 00:24:23.091 "small_cache_size": 128, 00:24:23.091 "large_cache_size": 16, 00:24:23.091 "task_count": 2048, 00:24:23.091 "sequence_count": 2048, 00:24:23.091 "buf_count": 2048 00:24:23.091 } 00:24:23.091 } 00:24:23.091 ] 00:24:23.091 }, 00:24:23.091 { 00:24:23.091 "subsystem": "bdev", 00:24:23.091 "config": [ 00:24:23.091 { 00:24:23.091 "method": "bdev_set_options", 00:24:23.091 "params": { 00:24:23.091 "bdev_io_pool_size": 65535, 00:24:23.091 "bdev_io_cache_size": 256, 00:24:23.091 "bdev_auto_examine": true, 00:24:23.091 "iobuf_small_cache_size": 128, 00:24:23.091 "iobuf_large_cache_size": 16 00:24:23.091 } 00:24:23.091 }, 00:24:23.091 { 00:24:23.091 "method": "bdev_raid_set_options", 00:24:23.091 "params": { 00:24:23.091 "process_window_size_kb": 1024, 00:24:23.091 "process_max_bandwidth_mb_sec": 0 00:24:23.091 } 00:24:23.091 }, 00:24:23.091 { 00:24:23.091 "method": "bdev_iscsi_set_options", 00:24:23.091 "params": { 00:24:23.091 "timeout_sec": 30 00:24:23.091 } 00:24:23.091 }, 00:24:23.091 { 00:24:23.091 "method": "bdev_nvme_set_options", 00:24:23.091 "params": { 00:24:23.091 "action_on_timeout": "none", 00:24:23.091 "timeout_us": 0, 00:24:23.091 "timeout_admin_us": 0, 00:24:23.091 "keep_alive_timeout_ms": 10000, 00:24:23.091 "arbitration_burst": 0, 00:24:23.091 "low_priority_weight": 0, 00:24:23.091 "medium_priority_weight": 0, 00:24:23.091 "high_priority_weight": 0, 00:24:23.091 "nvme_adminq_poll_period_us": 10000, 00:24:23.091 "nvme_ioq_poll_period_us": 0, 00:24:23.091 "io_queue_requests": 0, 00:24:23.091 "delay_cmd_submit": true, 00:24:23.091 "transport_retry_count": 4, 00:24:23.091 "bdev_retry_count": 3, 00:24:23.091 "transport_ack_timeout": 0, 00:24:23.091 "ctrlr_loss_timeout_sec": 0, 00:24:23.091 "reconnect_delay_sec": 0, 00:24:23.091 "fast_io_fail_timeout_sec": 0, 00:24:23.091 "disable_auto_failback": false, 00:24:23.091 "generate_uuids": false, 00:24:23.091 "transport_tos": 0, 00:24:23.091 "nvme_error_stat": false, 00:24:23.091 "rdma_srq_size": 0, 00:24:23.091 "io_path_stat": false, 00:24:23.091 "allow_accel_sequence": false, 00:24:23.091 "rdma_max_cq_size": 0, 00:24:23.091 "rdma_cm_event_timeout_ms": 0, 00:24:23.091 
"dhchap_digests": [ 00:24:23.091 "sha256", 00:24:23.091 "sha384", 00:24:23.091 "sha512" 00:24:23.091 ], 00:24:23.091 "dhchap_dhgroups": [ 00:24:23.091 "null", 00:24:23.091 "ffdhe2048", 00:24:23.091 "ffdhe3072", 00:24:23.091 "ffdhe4096", 00:24:23.091 "ffdhe6144", 00:24:23.091 "ffdhe8192" 00:24:23.091 ] 00:24:23.091 } 00:24:23.091 }, 00:24:23.091 { 00:24:23.091 "method": "bdev_nvme_set_hotplug", 00:24:23.091 "params": { 00:24:23.091 "period_us": 100000, 00:24:23.091 "enable": false 00:24:23.091 } 00:24:23.091 }, 00:24:23.091 { 00:24:23.091 "method": "bdev_malloc_create", 00:24:23.091 "params": { 00:24:23.091 "name": "malloc0", 00:24:23.091 "num_blocks": 8192, 00:24:23.091 "block_size": 4096, 00:24:23.091 "physical_block_size": 4096, 00:24:23.091 "uuid": "0f059714-2ba3-4bad-b132-7bf8c556085c", 00:24:23.091 "optimal_io_boundary": 0, 00:24:23.091 "md_size": 0, 00:24:23.091 "dif_type": 0, 00:24:23.091 "dif_is_head_of_md": false, 00:24:23.091 "dif_pi_format": 0 00:24:23.091 } 00:24:23.091 }, 00:24:23.091 { 00:24:23.091 "method": "bdev_wait_for_examine" 00:24:23.091 } 00:24:23.091 ] 00:24:23.091 }, 00:24:23.091 { 00:24:23.091 "subsystem": "nbd", 00:24:23.091 "config": [] 00:24:23.091 }, 00:24:23.091 { 00:24:23.091 "subsystem": "scheduler", 00:24:23.091 "config": [ 00:24:23.091 { 00:24:23.091 "method": "framework_set_scheduler", 00:24:23.091 "params": { 00:24:23.091 "name": "static" 00:24:23.091 } 00:24:23.091 } 00:24:23.091 ] 00:24:23.091 }, 00:24:23.091 { 00:24:23.091 "subsystem": "nvmf", 00:24:23.091 "config": [ 00:24:23.091 { 00:24:23.091 "method": "nvmf_set_config", 00:24:23.091 "params": { 00:24:23.091 "discovery_filter": "match_any", 00:24:23.091 "admin_cmd_passthru": { 00:24:23.091 "identify_ctrlr": false 00:24:23.091 }, 00:24:23.091 "dhchap_digests": [ 00:24:23.091 "sha256", 00:24:23.091 "sha384", 00:24:23.091 "sha512" 00:24:23.091 ], 00:24:23.091 "dhchap_dhgroups": [ 00:24:23.091 "null", 00:24:23.091 "ffdhe2048", 00:24:23.091 "ffdhe3072", 00:24:23.091 "ffdhe4096", 00:24:23.091 "ffdhe6144", 00:24:23.091 "ffdhe8192" 00:24:23.091 ] 00:24:23.091 } 00:24:23.091 }, 00:24:23.091 { 00:24:23.091 "method": "nvmf_set_max_subsystems", 00:24:23.091 "params": { 00:24:23.091 "max_subsystems": 1024 00:24:23.091 } 00:24:23.091 }, 00:24:23.091 { 00:24:23.091 "method": "nvmf_set_crdt", 00:24:23.091 "params": { 00:24:23.091 "crdt1": 0, 00:24:23.091 "crdt2": 0, 00:24:23.091 "crdt3": 0 00:24:23.091 } 00:24:23.091 }, 00:24:23.091 { 00:24:23.091 "method": "nvmf_create_transport", 00:24:23.091 "params": { 00:24:23.091 "trtype": "TCP", 00:24:23.091 "max_queue_depth": 128, 00:24:23.091 "max_io_qpairs_per_ctrlr": 127, 00:24:23.091 "in_capsule_data_size": 4096, 00:24:23.091 "max_io_size": 131072, 00:24:23.091 "io_unit_size": 131072, 00:24:23.091 "max_aq_depth": 128, 00:24:23.091 "num_shared_buffers": 511, 00:24:23.091 "buf_cache_size": 4294967295, 00:24:23.091 "dif_insert_or_strip": false, 00:24:23.091 "zcopy": false, 00:24:23.091 "c2h_success": false, 00:24:23.091 "sock_priority": 0, 00:24:23.091 "abort_timeout_sec": 1, 00:24:23.091 "ack_timeout": 0, 00:24:23.091 "data_wr_pool_size": 0 00:24:23.092 } 00:24:23.092 }, 00:24:23.092 { 00:24:23.092 "method": "nvmf_create_subsystem", 00:24:23.092 "params": { 00:24:23.092 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:23.092 "allow_any_host": false, 00:24:23.092 "serial_number": "SPDK00000000000001", 00:24:23.092 "model_number": "SPDK bdev Controller", 00:24:23.092 "max_namespaces": 10, 00:24:23.092 "min_cntlid": 1, 00:24:23.092 "max_cntlid": 65519, 00:24:23.092 
"ana_reporting": false 00:24:23.092 } 00:24:23.092 }, 00:24:23.092 { 00:24:23.092 "method": "nvmf_subsystem_add_host", 00:24:23.092 "params": { 00:24:23.092 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:23.092 "host": "nqn.2016-06.io.spdk:host1", 00:24:23.092 "psk": "key0" 00:24:23.092 } 00:24:23.092 }, 00:24:23.092 { 00:24:23.092 "method": "nvmf_subsystem_add_ns", 00:24:23.092 "params": { 00:24:23.092 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:23.092 "namespace": { 00:24:23.092 "nsid": 1, 00:24:23.092 "bdev_name": "malloc0", 00:24:23.092 "nguid": "0F0597142BA34BADB1327BF8C556085C", 00:24:23.092 "uuid": "0f059714-2ba3-4bad-b132-7bf8c556085c", 00:24:23.092 "no_auto_visible": false 00:24:23.092 } 00:24:23.092 } 00:24:23.092 }, 00:24:23.092 { 00:24:23.092 "method": "nvmf_subsystem_add_listener", 00:24:23.092 "params": { 00:24:23.092 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:23.092 "listen_address": { 00:24:23.092 "trtype": "TCP", 00:24:23.092 "adrfam": "IPv4", 00:24:23.092 "traddr": "10.0.0.2", 00:24:23.092 "trsvcid": "4420" 00:24:23.092 }, 00:24:23.092 "secure_channel": true 00:24:23.092 } 00:24:23.092 } 00:24:23.092 ] 00:24:23.092 } 00:24:23.092 ] 00:24:23.092 }' 00:24:23.092 18:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:23.350 18:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:24:23.350 "subsystems": [ 00:24:23.350 { 00:24:23.350 "subsystem": "keyring", 00:24:23.350 "config": [ 00:24:23.350 { 00:24:23.350 "method": "keyring_file_add_key", 00:24:23.350 "params": { 00:24:23.350 "name": "key0", 00:24:23.350 "path": "/tmp/tmp.zTtb9KJrKD" 00:24:23.350 } 00:24:23.350 } 00:24:23.350 ] 00:24:23.350 }, 00:24:23.350 { 00:24:23.350 "subsystem": "iobuf", 00:24:23.350 "config": [ 00:24:23.350 { 00:24:23.350 "method": "iobuf_set_options", 00:24:23.350 "params": { 00:24:23.350 "small_pool_count": 8192, 00:24:23.351 "large_pool_count": 1024, 00:24:23.351 "small_bufsize": 8192, 00:24:23.351 "large_bufsize": 135168 00:24:23.351 } 00:24:23.351 } 00:24:23.351 ] 00:24:23.351 }, 00:24:23.351 { 00:24:23.351 "subsystem": "sock", 00:24:23.351 "config": [ 00:24:23.351 { 00:24:23.351 "method": "sock_set_default_impl", 00:24:23.351 "params": { 00:24:23.351 "impl_name": "posix" 00:24:23.351 } 00:24:23.351 }, 00:24:23.351 { 00:24:23.351 "method": "sock_impl_set_options", 00:24:23.351 "params": { 00:24:23.351 "impl_name": "ssl", 00:24:23.351 "recv_buf_size": 4096, 00:24:23.351 "send_buf_size": 4096, 00:24:23.351 "enable_recv_pipe": true, 00:24:23.351 "enable_quickack": false, 00:24:23.351 "enable_placement_id": 0, 00:24:23.351 "enable_zerocopy_send_server": true, 00:24:23.351 "enable_zerocopy_send_client": false, 00:24:23.351 "zerocopy_threshold": 0, 00:24:23.351 "tls_version": 0, 00:24:23.351 "enable_ktls": false 00:24:23.351 } 00:24:23.351 }, 00:24:23.351 { 00:24:23.351 "method": "sock_impl_set_options", 00:24:23.351 "params": { 00:24:23.351 "impl_name": "posix", 00:24:23.351 "recv_buf_size": 2097152, 00:24:23.351 "send_buf_size": 2097152, 00:24:23.351 "enable_recv_pipe": true, 00:24:23.351 "enable_quickack": false, 00:24:23.351 "enable_placement_id": 0, 00:24:23.351 "enable_zerocopy_send_server": true, 00:24:23.351 "enable_zerocopy_send_client": false, 00:24:23.351 "zerocopy_threshold": 0, 00:24:23.351 "tls_version": 0, 00:24:23.351 "enable_ktls": false 00:24:23.351 } 00:24:23.351 } 00:24:23.351 ] 00:24:23.351 }, 00:24:23.351 { 00:24:23.351 
"subsystem": "vmd", 00:24:23.351 "config": [] 00:24:23.351 }, 00:24:23.351 { 00:24:23.351 "subsystem": "accel", 00:24:23.351 "config": [ 00:24:23.351 { 00:24:23.351 "method": "accel_set_options", 00:24:23.351 "params": { 00:24:23.351 "small_cache_size": 128, 00:24:23.351 "large_cache_size": 16, 00:24:23.351 "task_count": 2048, 00:24:23.351 "sequence_count": 2048, 00:24:23.351 "buf_count": 2048 00:24:23.351 } 00:24:23.351 } 00:24:23.351 ] 00:24:23.351 }, 00:24:23.351 { 00:24:23.351 "subsystem": "bdev", 00:24:23.351 "config": [ 00:24:23.351 { 00:24:23.351 "method": "bdev_set_options", 00:24:23.351 "params": { 00:24:23.351 "bdev_io_pool_size": 65535, 00:24:23.351 "bdev_io_cache_size": 256, 00:24:23.351 "bdev_auto_examine": true, 00:24:23.351 "iobuf_small_cache_size": 128, 00:24:23.351 "iobuf_large_cache_size": 16 00:24:23.351 } 00:24:23.351 }, 00:24:23.351 { 00:24:23.351 "method": "bdev_raid_set_options", 00:24:23.351 "params": { 00:24:23.351 "process_window_size_kb": 1024, 00:24:23.351 "process_max_bandwidth_mb_sec": 0 00:24:23.351 } 00:24:23.351 }, 00:24:23.351 { 00:24:23.351 "method": "bdev_iscsi_set_options", 00:24:23.351 "params": { 00:24:23.351 "timeout_sec": 30 00:24:23.351 } 00:24:23.351 }, 00:24:23.351 { 00:24:23.351 "method": "bdev_nvme_set_options", 00:24:23.351 "params": { 00:24:23.351 "action_on_timeout": "none", 00:24:23.351 "timeout_us": 0, 00:24:23.351 "timeout_admin_us": 0, 00:24:23.351 "keep_alive_timeout_ms": 10000, 00:24:23.351 "arbitration_burst": 0, 00:24:23.351 "low_priority_weight": 0, 00:24:23.351 "medium_priority_weight": 0, 00:24:23.351 "high_priority_weight": 0, 00:24:23.351 "nvme_adminq_poll_period_us": 10000, 00:24:23.351 "nvme_ioq_poll_period_us": 0, 00:24:23.351 "io_queue_requests": 512, 00:24:23.351 "delay_cmd_submit": true, 00:24:23.351 "transport_retry_count": 4, 00:24:23.351 "bdev_retry_count": 3, 00:24:23.351 "transport_ack_timeout": 0, 00:24:23.351 "ctrlr_loss_timeout_sec": 0, 00:24:23.351 "reconnect_delay_sec": 0, 00:24:23.351 "fast_io_fail_timeout_sec": 0, 00:24:23.351 "disable_auto_failback": false, 00:24:23.351 "generate_uuids": false, 00:24:23.351 "transport_tos": 0, 00:24:23.351 "nvme_error_stat": false, 00:24:23.351 "rdma_srq_size": 0, 00:24:23.351 "io_path_stat": false, 00:24:23.351 "allow_accel_sequence": false, 00:24:23.351 "rdma_max_cq_size": 0, 00:24:23.351 "rdma_cm_event_timeout_ms": 0, 00:24:23.351 "dhchap_digests": [ 00:24:23.351 "sha256", 00:24:23.351 "sha384", 00:24:23.351 "sha512" 00:24:23.351 ], 00:24:23.351 "dhchap_dhgroups": [ 00:24:23.351 "null", 00:24:23.351 "ffdhe2048", 00:24:23.351 "ffdhe3072", 00:24:23.351 "ffdhe4096", 00:24:23.351 "ffdhe6144", 00:24:23.351 "ffdhe8192" 00:24:23.351 ] 00:24:23.351 } 00:24:23.351 }, 00:24:23.351 { 00:24:23.351 "method": "bdev_nvme_attach_controller", 00:24:23.351 "params": { 00:24:23.351 "name": "TLSTEST", 00:24:23.351 "trtype": "TCP", 00:24:23.351 "adrfam": "IPv4", 00:24:23.351 "traddr": "10.0.0.2", 00:24:23.351 "trsvcid": "4420", 00:24:23.351 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:23.351 "prchk_reftag": false, 00:24:23.351 "prchk_guard": false, 00:24:23.351 "ctrlr_loss_timeout_sec": 0, 00:24:23.351 "reconnect_delay_sec": 0, 00:24:23.351 "fast_io_fail_timeout_sec": 0, 00:24:23.351 "psk": "key0", 00:24:23.351 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:23.351 "hdgst": false, 00:24:23.351 "ddgst": false, 00:24:23.351 "multipath": "multipath" 00:24:23.351 } 00:24:23.351 }, 00:24:23.351 { 00:24:23.351 "method": "bdev_nvme_set_hotplug", 00:24:23.351 "params": { 00:24:23.351 "period_us": 
100000, 00:24:23.351 "enable": false 00:24:23.351 } 00:24:23.351 }, 00:24:23.351 { 00:24:23.351 "method": "bdev_wait_for_examine" 00:24:23.351 } 00:24:23.351 ] 00:24:23.351 }, 00:24:23.351 { 00:24:23.351 "subsystem": "nbd", 00:24:23.351 "config": [] 00:24:23.351 } 00:24:23.351 ] 00:24:23.351 }' 00:24:23.351 18:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 1243186 00:24:23.351 18:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1243186 ']' 00:24:23.351 18:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1243186 00:24:23.351 18:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:23.351 18:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:23.351 18:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1243186 00:24:23.351 18:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:23.351 18:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:23.351 18:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1243186' 00:24:23.351 killing process with pid 1243186 00:24:23.351 18:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1243186 00:24:23.351 Received shutdown signal, test time was about 10.000000 seconds 00:24:23.351 00:24:23.351 Latency(us) 00:24:23.351 [2024-10-08T16:34:51.888Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:23.351 [2024-10-08T16:34:51.888Z] =================================================================================================================== 00:24:23.351 [2024-10-08T16:34:51.888Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:23.351 18:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1243186 00:24:23.917 18:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 1242636 00:24:23.917 18:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1242636 ']' 00:24:23.917 18:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1242636 00:24:23.917 18:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:23.917 18:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:23.917 18:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1242636 00:24:23.917 18:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:23.917 18:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:23.917 18:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1242636' 00:24:23.917 killing process with pid 1242636 00:24:23.917 18:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1242636 00:24:23.917 18:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1242636 00:24:24.487 18:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:24:24.487 
18:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:24.488 18:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:24.488 18:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:24.488 18:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:24:24.488 "subsystems": [ 00:24:24.488 { 00:24:24.488 "subsystem": "keyring", 00:24:24.488 "config": [ 00:24:24.488 { 00:24:24.488 "method": "keyring_file_add_key", 00:24:24.488 "params": { 00:24:24.488 "name": "key0", 00:24:24.488 "path": "/tmp/tmp.zTtb9KJrKD" 00:24:24.488 } 00:24:24.488 } 00:24:24.488 ] 00:24:24.488 }, 00:24:24.488 { 00:24:24.488 "subsystem": "iobuf", 00:24:24.488 "config": [ 00:24:24.488 { 00:24:24.488 "method": "iobuf_set_options", 00:24:24.488 "params": { 00:24:24.488 "small_pool_count": 8192, 00:24:24.488 "large_pool_count": 1024, 00:24:24.488 "small_bufsize": 8192, 00:24:24.488 "large_bufsize": 135168 00:24:24.488 } 00:24:24.488 } 00:24:24.488 ] 00:24:24.488 }, 00:24:24.488 { 00:24:24.488 "subsystem": "sock", 00:24:24.488 "config": [ 00:24:24.488 { 00:24:24.488 "method": "sock_set_default_impl", 00:24:24.488 "params": { 00:24:24.488 "impl_name": "posix" 00:24:24.488 } 00:24:24.488 }, 00:24:24.488 { 00:24:24.488 "method": "sock_impl_set_options", 00:24:24.488 "params": { 00:24:24.488 "impl_name": "ssl", 00:24:24.488 "recv_buf_size": 4096, 00:24:24.488 "send_buf_size": 4096, 00:24:24.488 "enable_recv_pipe": true, 00:24:24.488 "enable_quickack": false, 00:24:24.488 "enable_placement_id": 0, 00:24:24.488 "enable_zerocopy_send_server": true, 00:24:24.488 "enable_zerocopy_send_client": false, 00:24:24.488 "zerocopy_threshold": 0, 00:24:24.488 "tls_version": 0, 00:24:24.488 "enable_ktls": false 00:24:24.488 } 00:24:24.488 }, 00:24:24.488 { 00:24:24.488 "method": "sock_impl_set_options", 00:24:24.488 "params": { 00:24:24.488 "impl_name": "posix", 00:24:24.488 "recv_buf_size": 2097152, 00:24:24.488 "send_buf_size": 2097152, 00:24:24.488 "enable_recv_pipe": true, 00:24:24.488 "enable_quickack": false, 00:24:24.488 "enable_placement_id": 0, 00:24:24.488 "enable_zerocopy_send_server": true, 00:24:24.488 "enable_zerocopy_send_client": false, 00:24:24.488 "zerocopy_threshold": 0, 00:24:24.488 "tls_version": 0, 00:24:24.488 "enable_ktls": false 00:24:24.488 } 00:24:24.488 } 00:24:24.488 ] 00:24:24.488 }, 00:24:24.488 { 00:24:24.488 "subsystem": "vmd", 00:24:24.488 "config": [] 00:24:24.488 }, 00:24:24.488 { 00:24:24.488 "subsystem": "accel", 00:24:24.488 "config": [ 00:24:24.488 { 00:24:24.488 "method": "accel_set_options", 00:24:24.488 "params": { 00:24:24.488 "small_cache_size": 128, 00:24:24.488 "large_cache_size": 16, 00:24:24.488 "task_count": 2048, 00:24:24.488 "sequence_count": 2048, 00:24:24.488 "buf_count": 2048 00:24:24.488 } 00:24:24.488 } 00:24:24.488 ] 00:24:24.488 }, 00:24:24.488 { 00:24:24.488 "subsystem": "bdev", 00:24:24.488 "config": [ 00:24:24.488 { 00:24:24.488 "method": "bdev_set_options", 00:24:24.488 "params": { 00:24:24.488 "bdev_io_pool_size": 65535, 00:24:24.488 "bdev_io_cache_size": 256, 00:24:24.488 "bdev_auto_examine": true, 00:24:24.488 "iobuf_small_cache_size": 128, 00:24:24.488 "iobuf_large_cache_size": 16 00:24:24.488 } 00:24:24.488 }, 00:24:24.488 { 00:24:24.488 "method": "bdev_raid_set_options", 00:24:24.488 "params": { 00:24:24.488 "process_window_size_kb": 1024, 00:24:24.488 "process_max_bandwidth_mb_sec": 0 00:24:24.488 } 00:24:24.488 }, 
00:24:24.488 { 00:24:24.488 "method": "bdev_iscsi_set_options", 00:24:24.488 "params": { 00:24:24.488 "timeout_sec": 30 00:24:24.488 } 00:24:24.488 }, 00:24:24.488 { 00:24:24.488 "method": "bdev_nvme_set_options", 00:24:24.488 "params": { 00:24:24.488 "action_on_timeout": "none", 00:24:24.488 "timeout_us": 0, 00:24:24.488 "timeout_admin_us": 0, 00:24:24.488 "keep_alive_timeout_ms": 10000, 00:24:24.488 "arbitration_burst": 0, 00:24:24.488 "low_priority_weight": 0, 00:24:24.488 "medium_priority_weight": 0, 00:24:24.488 "high_priority_weight": 0, 00:24:24.488 "nvme_adminq_poll_period_us": 10000, 00:24:24.488 "nvme_ioq_poll_period_us": 0, 00:24:24.488 "io_queue_requests": 0, 00:24:24.488 "delay_cmd_submit": true, 00:24:24.488 "transport_retry_count": 4, 00:24:24.488 "bdev_retry_count": 3, 00:24:24.488 "transport_ack_timeout": 0, 00:24:24.488 "ctrlr_loss_timeout_sec": 0, 00:24:24.488 "reconnect_delay_sec": 0, 00:24:24.488 "fast_io_fail_timeout_sec": 0, 00:24:24.488 "disable_auto_failback": false, 00:24:24.488 "generate_uuids": false, 00:24:24.488 "transport_tos": 0, 00:24:24.488 "nvme_error_stat": false, 00:24:24.488 "rdma_srq_size": 0, 00:24:24.488 "io_path_stat": false, 00:24:24.488 "allow_accel_sequence": false, 00:24:24.488 "rdma_max_cq_size": 0, 00:24:24.488 "rdma_cm_event_timeout_ms": 0, 00:24:24.488 "dhchap_digests": [ 00:24:24.488 "sha256", 00:24:24.488 "sha384", 00:24:24.488 "sha512" 00:24:24.488 ], 00:24:24.488 "dhchap_dhgroups": [ 00:24:24.488 "null", 00:24:24.488 "ffdhe2048", 00:24:24.488 "ffdhe3072", 00:24:24.488 "ffdhe4096", 00:24:24.488 "ffdhe6144", 00:24:24.488 "ffdhe8192" 00:24:24.488 ] 00:24:24.488 } 00:24:24.488 }, 00:24:24.488 { 00:24:24.488 "method": "bdev_nvme_set_hotplug", 00:24:24.488 "params": { 00:24:24.488 "period_us": 100000, 00:24:24.488 "enable": false 00:24:24.488 } 00:24:24.488 }, 00:24:24.488 { 00:24:24.488 "method": "bdev_malloc_create", 00:24:24.488 "params": { 00:24:24.488 "name": "malloc0", 00:24:24.488 "num_blocks": 8192, 00:24:24.488 "block_size": 4096, 00:24:24.488 "physical_block_size": 4096, 00:24:24.488 "uuid": "0f059714-2ba3-4bad-b132-7bf8c556085c", 00:24:24.488 "optimal_io_boundary": 0, 00:24:24.488 "md_size": 0, 00:24:24.488 "dif_type": 0, 00:24:24.488 "dif_is_head_of_md": false, 00:24:24.488 "dif_pi_format": 0 00:24:24.488 } 00:24:24.488 }, 00:24:24.488 { 00:24:24.488 "method": "bdev_wait_for_examine" 00:24:24.488 } 00:24:24.488 ] 00:24:24.488 }, 00:24:24.488 { 00:24:24.488 "subsystem": "nbd", 00:24:24.488 "config": [] 00:24:24.488 }, 00:24:24.488 { 00:24:24.488 "subsystem": "scheduler", 00:24:24.488 "config": [ 00:24:24.488 { 00:24:24.488 "method": "framework_set_scheduler", 00:24:24.488 "params": { 00:24:24.488 "name": "static" 00:24:24.488 } 00:24:24.488 } 00:24:24.488 ] 00:24:24.488 }, 00:24:24.488 { 00:24:24.488 "subsystem": "nvmf", 00:24:24.488 "config": [ 00:24:24.488 { 00:24:24.488 "method": "nvmf_set_config", 00:24:24.488 "params": { 00:24:24.488 "discovery_filter": "match_any", 00:24:24.488 "admin_cmd_passthru": { 00:24:24.488 "identify_ctrlr": false 00:24:24.488 }, 00:24:24.488 "dhchap_digests": [ 00:24:24.488 "sha256", 00:24:24.488 "sha384", 00:24:24.488 "sha512" 00:24:24.488 ], 00:24:24.488 "dhchap_dhgroups": [ 00:24:24.488 "null", 00:24:24.488 "ffdhe2048", 00:24:24.488 "ffdhe3072", 00:24:24.488 "ffdhe4096", 00:24:24.488 "ffdhe6144", 00:24:24.488 "ffdhe8192" 00:24:24.488 ] 00:24:24.488 } 00:24:24.488 }, 00:24:24.488 { 00:24:24.488 "method": "nvmf_set_max_subsystems", 00:24:24.488 "params": { 00:24:24.488 "max_subsystems": 1024 
00:24:24.488 } 00:24:24.488 }, 00:24:24.488 { 00:24:24.488 "method": "nvmf_set_crdt", 00:24:24.488 "params": { 00:24:24.488 "crdt1": 0, 00:24:24.488 "crdt2": 0, 00:24:24.488 "crdt3": 0 00:24:24.489 } 00:24:24.489 }, 00:24:24.489 { 00:24:24.489 "method": "nvmf_create_transport", 00:24:24.489 "params": { 00:24:24.489 "trtype": "TCP", 00:24:24.489 "max_queue_depth": 128, 00:24:24.489 "max_io_qpairs_per_ctrlr": 127, 00:24:24.489 "in_capsule_data_size": 4096, 00:24:24.489 "max_io_size": 131072, 00:24:24.489 "io_unit_size": 131072, 00:24:24.489 "max_aq_depth": 128, 00:24:24.489 "num_shared_buffers": 511, 00:24:24.489 "buf_cache_size": 4294967295, 00:24:24.489 "dif_insert_or_strip": false, 00:24:24.489 "zcopy": false, 00:24:24.489 "c2h_success": false, 00:24:24.489 "sock_priority": 0, 00:24:24.489 "abort_timeout_sec": 1, 00:24:24.489 "ack_timeout": 0, 00:24:24.489 "data_wr_pool_size": 0 00:24:24.489 } 00:24:24.489 }, 00:24:24.489 { 00:24:24.489 "method": "nvmf_create_subsystem", 00:24:24.489 "params": { 00:24:24.489 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:24.489 "allow_any_host": false, 00:24:24.489 "serial_number": "SPDK00000000000001", 00:24:24.489 "model_number": "SPDK bdev Controller", 00:24:24.489 "max_namespaces": 10, 00:24:24.489 "min_cntlid": 1, 00:24:24.489 "max_cntlid": 65519, 00:24:24.489 "ana_reporting": false 00:24:24.489 } 00:24:24.489 }, 00:24:24.489 { 00:24:24.489 "method": "nvmf_subsystem_add_host", 00:24:24.489 "params": { 00:24:24.489 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:24.489 "host": "nqn.2016-06.io.spdk:host1", 00:24:24.489 "psk": "key0" 00:24:24.489 } 00:24:24.489 }, 00:24:24.489 { 00:24:24.489 "method": "nvmf_subsystem_add_ns", 00:24:24.489 "params": { 00:24:24.489 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:24.489 "namespace": { 00:24:24.489 "nsid": 1, 00:24:24.489 "bdev_name": "malloc0", 00:24:24.489 "nguid": "0F0597142BA34BADB1327BF8C556085C", 00:24:24.489 "uuid": "0f059714-2ba3-4bad-b132-7bf8c556085c", 00:24:24.489 "no_auto_visible": false 00:24:24.489 } 00:24:24.489 } 00:24:24.489 }, 00:24:24.489 { 00:24:24.489 "method": "nvmf_subsystem_add_listener", 00:24:24.489 "params": { 00:24:24.489 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:24.489 "listen_address": { 00:24:24.489 "trtype": "TCP", 00:24:24.489 "adrfam": "IPv4", 00:24:24.489 "traddr": "10.0.0.2", 00:24:24.489 "trsvcid": "4420" 00:24:24.489 }, 00:24:24.489 "secure_channel": true 00:24:24.489 } 00:24:24.489 } 00:24:24.489 ] 00:24:24.489 } 00:24:24.489 ] 00:24:24.489 }' 00:24:24.489 18:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1243723 00:24:24.489 18:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:24:24.489 18:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1243723 00:24:24.489 18:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1243723 ']' 00:24:24.489 18:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:24.489 18:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:24.489 18:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:24:24.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:24.489 18:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:24.489 18:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:24.489 [2024-10-08 18:34:52.805459] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:24:24.489 [2024-10-08 18:34:52.805562] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:24.489 [2024-10-08 18:34:52.965521] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:24.749 [2024-10-08 18:34:53.259779] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:24.749 [2024-10-08 18:34:53.259912] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:24.749 [2024-10-08 18:34:53.259968] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:24.749 [2024-10-08 18:34:53.260017] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:24.749 [2024-10-08 18:34:53.260059] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:24.749 [2024-10-08 18:34:53.262021] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:24:25.320 [2024-10-08 18:34:53.648186] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:25.320 [2024-10-08 18:34:53.682717] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:25.320 [2024-10-08 18:34:53.683193] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:25.580 18:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:25.580 18:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:25.580 18:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:25.580 18:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:25.580 18:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:25.580 18:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:25.580 18:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=1243877 00:24:25.580 18:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 1243877 /var/tmp/bdevperf.sock 00:24:25.580 18:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1243877 ']' 00:24:25.580 18:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:25.580 18:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:24:25.580 18:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:25.580 18:34:53 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:25.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:25.580 18:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:24:25.580 "subsystems": [ 00:24:25.580 { 00:24:25.580 "subsystem": "keyring", 00:24:25.580 "config": [ 00:24:25.580 { 00:24:25.580 "method": "keyring_file_add_key", 00:24:25.580 "params": { 00:24:25.580 "name": "key0", 00:24:25.580 "path": "/tmp/tmp.zTtb9KJrKD" 00:24:25.580 } 00:24:25.580 } 00:24:25.580 ] 00:24:25.580 }, 00:24:25.580 { 00:24:25.580 "subsystem": "iobuf", 00:24:25.580 "config": [ 00:24:25.580 { 00:24:25.580 "method": "iobuf_set_options", 00:24:25.580 "params": { 00:24:25.580 "small_pool_count": 8192, 00:24:25.580 "large_pool_count": 1024, 00:24:25.580 "small_bufsize": 8192, 00:24:25.580 "large_bufsize": 135168 00:24:25.580 } 00:24:25.580 } 00:24:25.580 ] 00:24:25.580 }, 00:24:25.580 { 00:24:25.580 "subsystem": "sock", 00:24:25.580 "config": [ 00:24:25.580 { 00:24:25.580 "method": "sock_set_default_impl", 00:24:25.580 "params": { 00:24:25.580 "impl_name": "posix" 00:24:25.580 } 00:24:25.580 }, 00:24:25.580 { 00:24:25.580 "method": "sock_impl_set_options", 00:24:25.580 "params": { 00:24:25.580 "impl_name": "ssl", 00:24:25.580 "recv_buf_size": 4096, 00:24:25.580 "send_buf_size": 4096, 00:24:25.580 "enable_recv_pipe": true, 00:24:25.580 "enable_quickack": false, 00:24:25.580 "enable_placement_id": 0, 00:24:25.580 "enable_zerocopy_send_server": true, 00:24:25.580 "enable_zerocopy_send_client": false, 00:24:25.580 "zerocopy_threshold": 0, 00:24:25.580 "tls_version": 0, 00:24:25.580 "enable_ktls": false 00:24:25.580 } 00:24:25.580 }, 00:24:25.580 { 00:24:25.580 "method": "sock_impl_set_options", 00:24:25.580 "params": { 00:24:25.580 "impl_name": "posix", 00:24:25.580 "recv_buf_size": 2097152, 00:24:25.580 "send_buf_size": 2097152, 00:24:25.580 "enable_recv_pipe": true, 00:24:25.580 "enable_quickack": false, 00:24:25.580 "enable_placement_id": 0, 00:24:25.580 "enable_zerocopy_send_server": true, 00:24:25.580 "enable_zerocopy_send_client": false, 00:24:25.580 "zerocopy_threshold": 0, 00:24:25.580 "tls_version": 0, 00:24:25.580 "enable_ktls": false 00:24:25.580 } 00:24:25.580 } 00:24:25.580 ] 00:24:25.580 }, 00:24:25.580 { 00:24:25.580 "subsystem": "vmd", 00:24:25.580 "config": [] 00:24:25.580 }, 00:24:25.580 { 00:24:25.580 "subsystem": "accel", 00:24:25.580 "config": [ 00:24:25.580 { 00:24:25.580 "method": "accel_set_options", 00:24:25.580 "params": { 00:24:25.580 "small_cache_size": 128, 00:24:25.580 "large_cache_size": 16, 00:24:25.580 "task_count": 2048, 00:24:25.580 "sequence_count": 2048, 00:24:25.580 "buf_count": 2048 00:24:25.580 } 00:24:25.580 } 00:24:25.580 ] 00:24:25.580 }, 00:24:25.580 { 00:24:25.580 "subsystem": "bdev", 00:24:25.580 "config": [ 00:24:25.580 { 00:24:25.580 "method": "bdev_set_options", 00:24:25.580 "params": { 00:24:25.580 "bdev_io_pool_size": 65535, 00:24:25.580 "bdev_io_cache_size": 256, 00:24:25.580 "bdev_auto_examine": true, 00:24:25.580 "iobuf_small_cache_size": 128, 00:24:25.580 "iobuf_large_cache_size": 16 00:24:25.580 } 00:24:25.580 }, 00:24:25.581 { 00:24:25.581 "method": "bdev_raid_set_options", 00:24:25.581 "params": { 00:24:25.581 "process_window_size_kb": 1024, 00:24:25.581 "process_max_bandwidth_mb_sec": 0 00:24:25.581 } 00:24:25.581 }, 00:24:25.581 { 00:24:25.581 "method": 
"bdev_iscsi_set_options", 00:24:25.581 "params": { 00:24:25.581 "timeout_sec": 30 00:24:25.581 } 00:24:25.581 }, 00:24:25.581 { 00:24:25.581 "method": "bdev_nvme_set_options", 00:24:25.581 "params": { 00:24:25.581 "action_on_timeout": "none", 00:24:25.581 "timeout_us": 0, 00:24:25.581 "timeout_admin_us": 0, 00:24:25.581 "keep_alive_timeout_ms": 10000, 00:24:25.581 "arbitration_burst": 0, 00:24:25.581 "low_priority_weight": 0, 00:24:25.581 "medium_priority_weight": 0, 00:24:25.581 "high_priority_weight": 0, 00:24:25.581 "nvme_adminq_poll_period_us": 10000, 00:24:25.581 "nvme_ioq_poll_period_us": 0, 00:24:25.581 "io_queue_requests": 512, 00:24:25.581 "delay_cmd_submit": true, 00:24:25.581 "transport_retry_count": 4, 00:24:25.581 "bdev_retry_count": 3, 00:24:25.581 "transport_ack_timeout": 0, 00:24:25.581 "ctrlr_loss_timeout_sec": 0, 00:24:25.581 "reconnect_delay_sec": 0, 00:24:25.581 "fast_io_fail_timeout_sec": 0, 00:24:25.581 "disable_auto_failback": false, 00:24:25.581 "generate_uuids": false, 00:24:25.581 "transport_tos": 0, 00:24:25.581 "nvme_error_stat": false, 00:24:25.581 "rdma_srq_size": 0, 00:24:25.581 "io_path_stat": false, 00:24:25.581 "allow_accel_sequence": false, 00:24:25.581 "rdma_max_cq_size": 0, 00:24:25.581 "rdma_cm_event_timeout_ms": 0, 00:24:25.581 "dhchap_digests": [ 00:24:25.581 "sha256", 00:24:25.581 "sha384", 00:24:25.581 "sha512" 00:24:25.581 ], 00:24:25.581 "dhchap_dhgroups": [ 00:24:25.581 "null", 00:24:25.581 "ffdhe2048", 00:24:25.581 "ffdhe3072", 00:24:25.581 "ffdhe4096", 00:24:25.581 "ffdhe6144", 00:24:25.581 "ffdhe8192" 00:24:25.581 ] 00:24:25.581 } 00:24:25.581 }, 00:24:25.581 { 00:24:25.581 "method": "bdev_nvme_attach_controller", 00:24:25.581 "params": { 00:24:25.581 "name": "TLSTEST", 00:24:25.581 "trtype": "TCP", 00:24:25.581 "adrfam": "IPv4", 00:24:25.581 "traddr": "10.0.0.2", 00:24:25.581 "trsvcid": "4420", 00:24:25.581 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:25.581 "prchk_reftag": false, 00:24:25.581 "prchk_guard": false, 00:24:25.581 "ctrlr_loss_timeout_sec": 0, 00:24:25.581 "reconnect_delay_sec": 0, 00:24:25.581 "fast_io_fail_timeout_sec": 0, 00:24:25.581 "psk": "key0", 00:24:25.581 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:25.581 "hdgst": false, 00:24:25.581 "ddgst": false, 00:24:25.581 "multipath": "multipath" 00:24:25.581 } 00:24:25.581 }, 00:24:25.581 { 00:24:25.581 "method": "bdev_nvme_set_hotplug", 00:24:25.581 "params": { 00:24:25.581 "period_us": 100000, 00:24:25.581 "enable": false 00:24:25.581 } 00:24:25.581 }, 00:24:25.581 { 00:24:25.581 "method": "bdev_wait_for_examine" 00:24:25.581 } 00:24:25.581 ] 00:24:25.581 }, 00:24:25.581 { 00:24:25.581 "subsystem": "nbd", 00:24:25.581 "config": [] 00:24:25.581 } 00:24:25.581 ] 00:24:25.581 }' 00:24:25.581 18:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:25.581 18:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:25.581 [2024-10-08 18:34:53.957343] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
00:24:25.581 [2024-10-08 18:34:53.957446] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1243877 ] 00:24:25.581 [2024-10-08 18:34:54.063874] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:25.845 [2024-10-08 18:34:54.279894] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:24:26.189 [2024-10-08 18:34:54.539117] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:27.127 18:34:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:27.127 18:34:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:27.127 18:34:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:27.127 Running I/O for 10 seconds... 00:24:29.082 1506.00 IOPS, 5.88 MiB/s [2024-10-08T16:34:58.998Z] 1515.50 IOPS, 5.92 MiB/s [2024-10-08T16:34:59.939Z] 1497.67 IOPS, 5.85 MiB/s [2024-10-08T16:35:00.878Z] 1490.50 IOPS, 5.82 MiB/s [2024-10-08T16:35:01.815Z] 1507.80 IOPS, 5.89 MiB/s [2024-10-08T16:35:02.755Z] 1508.00 IOPS, 5.89 MiB/s [2024-10-08T16:35:03.693Z] 1504.71 IOPS, 5.88 MiB/s [2024-10-08T16:35:05.073Z] 1509.12 IOPS, 5.90 MiB/s [2024-10-08T16:35:05.644Z] 1516.56 IOPS, 5.92 MiB/s [2024-10-08T16:35:05.902Z] 1512.40 IOPS, 5.91 MiB/s 00:24:37.365 Latency(us) 00:24:37.365 [2024-10-08T16:35:05.902Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:37.365 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:37.365 Verification LBA range: start 0x0 length 0x2000 00:24:37.365 TLSTESTn1 : 10.04 1518.27 5.93 0.00 0.00 84077.63 14078.10 67574.90 00:24:37.365 [2024-10-08T16:35:05.902Z] =================================================================================================================== 00:24:37.365 [2024-10-08T16:35:05.902Z] Total : 1518.27 5.93 0.00 0.00 84077.63 14078.10 67574.90 00:24:37.365 { 00:24:37.365 "results": [ 00:24:37.365 { 00:24:37.365 "job": "TLSTESTn1", 00:24:37.365 "core_mask": "0x4", 00:24:37.365 "workload": "verify", 00:24:37.365 "status": "finished", 00:24:37.365 "verify_range": { 00:24:37.365 "start": 0, 00:24:37.365 "length": 8192 00:24:37.365 }, 00:24:37.365 "queue_depth": 128, 00:24:37.365 "io_size": 4096, 00:24:37.365 "runtime": 10.04498, 00:24:37.365 "iops": 1518.27081786126, 00:24:37.365 "mibps": 5.930745382270547, 00:24:37.365 "io_failed": 0, 00:24:37.365 "io_timeout": 0, 00:24:37.365 "avg_latency_us": 84077.6251487577, 00:24:37.365 "min_latency_us": 14078.103703703704, 00:24:37.365 "max_latency_us": 67574.89777777778 00:24:37.365 } 00:24:37.365 ], 00:24:37.365 "core_count": 1 00:24:37.365 } 00:24:37.365 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:37.365 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 1243877 00:24:37.365 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1243877 ']' 00:24:37.365 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1243877 00:24:37.365 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@955 -- # uname 00:24:37.365 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:37.365 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1243877 00:24:37.365 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:37.365 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:37.365 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1243877' 00:24:37.365 killing process with pid 1243877 00:24:37.365 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1243877 00:24:37.365 Received shutdown signal, test time was about 10.000000 seconds 00:24:37.365 00:24:37.365 Latency(us) 00:24:37.365 [2024-10-08T16:35:05.902Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:37.365 [2024-10-08T16:35:05.902Z] =================================================================================================================== 00:24:37.365 [2024-10-08T16:35:05.902Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:37.365 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1243877 00:24:37.625 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 1243723 00:24:37.625 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1243723 ']' 00:24:37.625 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1243723 00:24:37.625 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:37.625 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:37.625 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1243723 00:24:37.885 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:37.885 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:37.886 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1243723' 00:24:37.886 killing process with pid 1243723 00:24:37.886 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1243723 00:24:37.886 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1243723 00:24:38.146 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:24:38.146 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:38.146 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:38.146 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:38.146 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1245208 00:24:38.146 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:38.146 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1245208 
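The two killprocess calls above (first the bdevperf process 1243877, then the target 1243723) follow the helper pattern visible in the xtrace: confirm the pid is alive, resolve its command name, refuse to signal a sudo wrapper, then kill and wait. A simplified sketch of the Linux branch only; the real helper in autotest_common.sh also handles the sudo case and other platforms:

  killprocess() {
      local pid=$1
      kill -0 "$pid" 2>/dev/null || return 0          # already gone, nothing to do
      local process_name
      process_name=$(ps --no-headers -o comm= "$pid")
      if [ "$process_name" != "sudo" ]; then
          echo "killing process with pid $pid"
          kill "$pid"
          wait "$pid"                                 # reap it so the exit status is observed
      fi
  }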
00:24:38.146 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1245208 ']' 00:24:38.146 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:38.146 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:38.146 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:38.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:38.146 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:38.146 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:38.406 [2024-10-08 18:35:06.718680] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:24:38.406 [2024-10-08 18:35:06.718789] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:38.406 [2024-10-08 18:35:06.818772] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:38.666 [2024-10-08 18:35:07.024937] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:38.666 [2024-10-08 18:35:07.025068] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:38.666 [2024-10-08 18:35:07.025105] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:38.666 [2024-10-08 18:35:07.025135] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:38.666 [2024-10-08 18:35:07.025161] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
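The fresh target instance (pid 1245208) is configured by the same setup_nvmf_tgt sequence traced below, with key0 backed by the PSK file /tmp/tmp.zTtb9KJrKD. Condensed, and with the full scripts/rpc.py path shortened to rpc.py, it amounts to:

  rpc.py nvmf_create_transport -t tcp -o                 # TCP transport, options as logged above
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py keyring_file_add_key key0 /tmp/tmp.zTtb9KJrKD    # register the TLS PSK file
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

The -k on nvmf_subsystem_add_listener is what makes the listener TLS-capable, which is why the "TLS support is considered experimental" notice appears against it in the trace below.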
00:24:38.666 [2024-10-08 18:35:07.026533] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:24:39.605 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:39.605 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:39.605 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:39.605 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:39.605 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:39.605 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:39.605 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.zTtb9KJrKD 00:24:39.605 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.zTtb9KJrKD 00:24:39.605 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:40.173 [2024-10-08 18:35:08.452450] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:40.173 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:40.742 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:41.310 [2024-10-08 18:35:09.789739] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:41.310 [2024-10-08 18:35:09.790241] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:41.310 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:42.248 malloc0 00:24:42.248 18:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:42.505 18:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.zTtb9KJrKD 00:24:42.764 18:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:43.704 18:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=1245876 00:24:43.704 18:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:43.704 18:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:43.704 18:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 1245876 /var/tmp/bdevperf.sock 00:24:43.704 18:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@831 -- # '[' -z 1245876 ']' 00:24:43.704 18:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:43.704 18:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:43.704 18:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:43.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:43.704 18:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:43.704 18:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:43.704 [2024-10-08 18:35:12.020956] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:24:43.704 [2024-10-08 18:35:12.021060] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1245876 ] 00:24:43.704 [2024-10-08 18:35:12.127827] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:43.964 [2024-10-08 18:35:12.336842] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:24:44.224 18:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:44.224 18:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:44.224 18:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zTtb9KJrKD 00:24:44.484 18:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:45.421 [2024-10-08 18:35:13.643320] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:45.421 nvme0n1 00:24:45.421 18:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:45.421 Running I/O for 1 seconds... 
00:24:46.801 1547.00 IOPS, 6.04 MiB/s 00:24:46.801 Latency(us) 00:24:46.801 [2024-10-08T16:35:15.338Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:46.801 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:46.801 Verification LBA range: start 0x0 length 0x2000 00:24:46.801 nvme0n1 : 1.04 1610.63 6.29 0.00 0.00 78163.63 12524.66 63691.28 00:24:46.801 [2024-10-08T16:35:15.338Z] =================================================================================================================== 00:24:46.801 [2024-10-08T16:35:15.338Z] Total : 1610.63 6.29 0.00 0.00 78163.63 12524.66 63691.28 00:24:46.801 { 00:24:46.801 "results": [ 00:24:46.801 { 00:24:46.801 "job": "nvme0n1", 00:24:46.801 "core_mask": "0x2", 00:24:46.801 "workload": "verify", 00:24:46.801 "status": "finished", 00:24:46.801 "verify_range": { 00:24:46.801 "start": 0, 00:24:46.801 "length": 8192 00:24:46.801 }, 00:24:46.801 "queue_depth": 128, 00:24:46.801 "io_size": 4096, 00:24:46.801 "runtime": 1.039964, 00:24:46.801 "iops": 1610.632675746468, 00:24:46.801 "mibps": 6.291533889634641, 00:24:46.801 "io_failed": 0, 00:24:46.801 "io_timeout": 0, 00:24:46.801 "avg_latency_us": 78163.6315347706, 00:24:46.801 "min_latency_us": 12524.657777777778, 00:24:46.801 "max_latency_us": 63691.28296296296 00:24:46.801 } 00:24:46.801 ], 00:24:46.801 "core_count": 1 00:24:46.801 } 00:24:46.801 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 1245876 00:24:46.801 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1245876 ']' 00:24:46.801 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1245876 00:24:46.801 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:46.801 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:46.801 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1245876 00:24:46.801 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:46.801 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:46.801 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1245876' 00:24:46.801 killing process with pid 1245876 00:24:46.801 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1245876 00:24:46.801 Received shutdown signal, test time was about 1.000000 seconds 00:24:46.801 00:24:46.801 Latency(us) 00:24:46.801 [2024-10-08T16:35:15.338Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:46.801 [2024-10-08T16:35:15.338Z] =================================================================================================================== 00:24:46.801 [2024-10-08T16:35:15.338Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:46.801 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1245876 00:24:47.059 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 1245208 00:24:47.059 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1245208 ']' 00:24:47.059 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1245208 00:24:47.059 18:35:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:47.059 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:47.059 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1245208 00:24:47.059 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:47.059 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:47.059 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1245208' 00:24:47.059 killing process with pid 1245208 00:24:47.059 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1245208 00:24:47.059 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1245208 00:24:47.629 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:24:47.629 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:47.629 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:47.629 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:47.629 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1246292 00:24:47.629 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:47.629 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1246292 00:24:47.629 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1246292 ']' 00:24:47.629 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:47.629 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:47.629 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:47.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:47.629 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:47.629 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:47.629 [2024-10-08 18:35:16.103567] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:24:47.629 [2024-10-08 18:35:16.103789] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:47.887 [2024-10-08 18:35:16.240036] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:47.887 [2024-10-08 18:35:16.382851] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:47.887 [2024-10-08 18:35:16.382928] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:47.887 [2024-10-08 18:35:16.382949] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:47.887 [2024-10-08 18:35:16.382966] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:47.887 [2024-10-08 18:35:16.382982] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:47.887 [2024-10-08 18:35:16.383792] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:24:48.146 18:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:48.146 18:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:48.146 18:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:48.146 18:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:48.146 18:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:48.146 18:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:48.146 18:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:24:48.146 18:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.146 18:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:48.146 [2024-10-08 18:35:16.603827] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:48.146 malloc0 00:24:48.146 [2024-10-08 18:35:16.653864] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:48.146 [2024-10-08 18:35:16.654338] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:48.146 18:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.146 18:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=1246435 00:24:48.146 18:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:48.406 18:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 1246435 /var/tmp/bdevperf.sock 00:24:48.406 18:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1246435 ']' 00:24:48.406 18:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:48.406 18:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:48.406 18:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:48.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:48.406 18:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:48.406 18:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:48.406 [2024-10-08 18:35:16.746749] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
00:24:48.406 [2024-10-08 18:35:16.746836] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1246435 ] 00:24:48.406 [2024-10-08 18:35:16.859332] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:48.667 [2024-10-08 18:35:17.089676] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:24:48.928 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:48.928 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:48.928 18:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zTtb9KJrKD 00:24:49.496 18:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:50.066 [2024-10-08 18:35:18.322075] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:50.066 nvme0n1 00:24:50.066 18:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:50.066 Running I/O for 1 seconds... 00:24:51.447 1565.00 IOPS, 6.11 MiB/s 00:24:51.447 Latency(us) 00:24:51.447 [2024-10-08T16:35:19.984Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:51.448 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:51.448 Verification LBA range: start 0x0 length 0x2000 00:24:51.448 nvme0n1 : 1.04 1631.04 6.37 0.00 0.00 77140.34 6553.60 53593.88 00:24:51.448 [2024-10-08T16:35:19.985Z] =================================================================================================================== 00:24:51.448 [2024-10-08T16:35:19.985Z] Total : 1631.04 6.37 0.00 0.00 77140.34 6553.60 53593.88 00:24:51.448 { 00:24:51.448 "results": [ 00:24:51.448 { 00:24:51.448 "job": "nvme0n1", 00:24:51.448 "core_mask": "0x2", 00:24:51.448 "workload": "verify", 00:24:51.448 "status": "finished", 00:24:51.448 "verify_range": { 00:24:51.448 "start": 0, 00:24:51.448 "length": 8192 00:24:51.448 }, 00:24:51.448 "queue_depth": 128, 00:24:51.448 "io_size": 4096, 00:24:51.448 "runtime": 1.037985, 00:24:51.448 "iops": 1631.044764616059, 00:24:51.448 "mibps": 6.37126861178148, 00:24:51.448 "io_failed": 0, 00:24:51.448 "io_timeout": 0, 00:24:51.448 "avg_latency_us": 77140.33511758657, 00:24:51.448 "min_latency_us": 6553.6, 00:24:51.448 "max_latency_us": 53593.88444444445 00:24:51.448 } 00:24:51.448 ], 00:24:51.448 "core_count": 1 00:24:51.448 } 00:24:51.448 18:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:24:51.448 18:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.448 18:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:51.448 18:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.448 18:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@267 -- # tgtcfg='{ 00:24:51.448 "subsystems": [ 00:24:51.448 { 00:24:51.448 "subsystem": "keyring", 00:24:51.448 "config": [ 00:24:51.448 { 00:24:51.448 "method": "keyring_file_add_key", 00:24:51.448 "params": { 00:24:51.448 "name": "key0", 00:24:51.448 "path": "/tmp/tmp.zTtb9KJrKD" 00:24:51.448 } 00:24:51.448 } 00:24:51.448 ] 00:24:51.448 }, 00:24:51.448 { 00:24:51.448 "subsystem": "iobuf", 00:24:51.448 "config": [ 00:24:51.448 { 00:24:51.448 "method": "iobuf_set_options", 00:24:51.448 "params": { 00:24:51.448 "small_pool_count": 8192, 00:24:51.448 "large_pool_count": 1024, 00:24:51.448 "small_bufsize": 8192, 00:24:51.448 "large_bufsize": 135168 00:24:51.448 } 00:24:51.448 } 00:24:51.448 ] 00:24:51.448 }, 00:24:51.448 { 00:24:51.448 "subsystem": "sock", 00:24:51.448 "config": [ 00:24:51.448 { 00:24:51.448 "method": "sock_set_default_impl", 00:24:51.448 "params": { 00:24:51.448 "impl_name": "posix" 00:24:51.448 } 00:24:51.448 }, 00:24:51.448 { 00:24:51.448 "method": "sock_impl_set_options", 00:24:51.448 "params": { 00:24:51.448 "impl_name": "ssl", 00:24:51.448 "recv_buf_size": 4096, 00:24:51.448 "send_buf_size": 4096, 00:24:51.448 "enable_recv_pipe": true, 00:24:51.448 "enable_quickack": false, 00:24:51.448 "enable_placement_id": 0, 00:24:51.448 "enable_zerocopy_send_server": true, 00:24:51.448 "enable_zerocopy_send_client": false, 00:24:51.448 "zerocopy_threshold": 0, 00:24:51.448 "tls_version": 0, 00:24:51.448 "enable_ktls": false 00:24:51.448 } 00:24:51.448 }, 00:24:51.448 { 00:24:51.448 "method": "sock_impl_set_options", 00:24:51.448 "params": { 00:24:51.448 "impl_name": "posix", 00:24:51.448 "recv_buf_size": 2097152, 00:24:51.448 "send_buf_size": 2097152, 00:24:51.448 "enable_recv_pipe": true, 00:24:51.448 "enable_quickack": false, 00:24:51.448 "enable_placement_id": 0, 00:24:51.448 "enable_zerocopy_send_server": true, 00:24:51.448 "enable_zerocopy_send_client": false, 00:24:51.448 "zerocopy_threshold": 0, 00:24:51.448 "tls_version": 0, 00:24:51.448 "enable_ktls": false 00:24:51.448 } 00:24:51.448 } 00:24:51.448 ] 00:24:51.448 }, 00:24:51.448 { 00:24:51.448 "subsystem": "vmd", 00:24:51.448 "config": [] 00:24:51.448 }, 00:24:51.448 { 00:24:51.448 "subsystem": "accel", 00:24:51.448 "config": [ 00:24:51.448 { 00:24:51.448 "method": "accel_set_options", 00:24:51.448 "params": { 00:24:51.448 "small_cache_size": 128, 00:24:51.448 "large_cache_size": 16, 00:24:51.448 "task_count": 2048, 00:24:51.448 "sequence_count": 2048, 00:24:51.448 "buf_count": 2048 00:24:51.448 } 00:24:51.448 } 00:24:51.448 ] 00:24:51.448 }, 00:24:51.448 { 00:24:51.448 "subsystem": "bdev", 00:24:51.448 "config": [ 00:24:51.448 { 00:24:51.448 "method": "bdev_set_options", 00:24:51.448 "params": { 00:24:51.448 "bdev_io_pool_size": 65535, 00:24:51.448 "bdev_io_cache_size": 256, 00:24:51.448 "bdev_auto_examine": true, 00:24:51.448 "iobuf_small_cache_size": 128, 00:24:51.448 "iobuf_large_cache_size": 16 00:24:51.448 } 00:24:51.448 }, 00:24:51.448 { 00:24:51.448 "method": "bdev_raid_set_options", 00:24:51.448 "params": { 00:24:51.448 "process_window_size_kb": 1024, 00:24:51.448 "process_max_bandwidth_mb_sec": 0 00:24:51.448 } 00:24:51.448 }, 00:24:51.448 { 00:24:51.448 "method": "bdev_iscsi_set_options", 00:24:51.448 "params": { 00:24:51.448 "timeout_sec": 30 00:24:51.448 } 00:24:51.448 }, 00:24:51.448 { 00:24:51.448 "method": "bdev_nvme_set_options", 00:24:51.448 "params": { 00:24:51.448 "action_on_timeout": "none", 00:24:51.448 "timeout_us": 0, 00:24:51.448 "timeout_admin_us": 0, 00:24:51.448 
"keep_alive_timeout_ms": 10000, 00:24:51.448 "arbitration_burst": 0, 00:24:51.448 "low_priority_weight": 0, 00:24:51.448 "medium_priority_weight": 0, 00:24:51.448 "high_priority_weight": 0, 00:24:51.448 "nvme_adminq_poll_period_us": 10000, 00:24:51.448 "nvme_ioq_poll_period_us": 0, 00:24:51.448 "io_queue_requests": 0, 00:24:51.448 "delay_cmd_submit": true, 00:24:51.448 "transport_retry_count": 4, 00:24:51.448 "bdev_retry_count": 3, 00:24:51.448 "transport_ack_timeout": 0, 00:24:51.448 "ctrlr_loss_timeout_sec": 0, 00:24:51.448 "reconnect_delay_sec": 0, 00:24:51.448 "fast_io_fail_timeout_sec": 0, 00:24:51.448 "disable_auto_failback": false, 00:24:51.448 "generate_uuids": false, 00:24:51.448 "transport_tos": 0, 00:24:51.448 "nvme_error_stat": false, 00:24:51.448 "rdma_srq_size": 0, 00:24:51.448 "io_path_stat": false, 00:24:51.448 "allow_accel_sequence": false, 00:24:51.448 "rdma_max_cq_size": 0, 00:24:51.448 "rdma_cm_event_timeout_ms": 0, 00:24:51.448 "dhchap_digests": [ 00:24:51.448 "sha256", 00:24:51.448 "sha384", 00:24:51.448 "sha512" 00:24:51.448 ], 00:24:51.448 "dhchap_dhgroups": [ 00:24:51.448 "null", 00:24:51.448 "ffdhe2048", 00:24:51.448 "ffdhe3072", 00:24:51.448 "ffdhe4096", 00:24:51.448 "ffdhe6144", 00:24:51.448 "ffdhe8192" 00:24:51.448 ] 00:24:51.448 } 00:24:51.448 }, 00:24:51.448 { 00:24:51.448 "method": "bdev_nvme_set_hotplug", 00:24:51.448 "params": { 00:24:51.448 "period_us": 100000, 00:24:51.448 "enable": false 00:24:51.448 } 00:24:51.448 }, 00:24:51.448 { 00:24:51.448 "method": "bdev_malloc_create", 00:24:51.448 "params": { 00:24:51.448 "name": "malloc0", 00:24:51.448 "num_blocks": 8192, 00:24:51.448 "block_size": 4096, 00:24:51.448 "physical_block_size": 4096, 00:24:51.448 "uuid": "48335085-7ad6-4c81-b190-99dd4c57f514", 00:24:51.448 "optimal_io_boundary": 0, 00:24:51.448 "md_size": 0, 00:24:51.448 "dif_type": 0, 00:24:51.448 "dif_is_head_of_md": false, 00:24:51.448 "dif_pi_format": 0 00:24:51.448 } 00:24:51.448 }, 00:24:51.448 { 00:24:51.448 "method": "bdev_wait_for_examine" 00:24:51.448 } 00:24:51.448 ] 00:24:51.448 }, 00:24:51.448 { 00:24:51.448 "subsystem": "nbd", 00:24:51.448 "config": [] 00:24:51.448 }, 00:24:51.448 { 00:24:51.448 "subsystem": "scheduler", 00:24:51.448 "config": [ 00:24:51.448 { 00:24:51.448 "method": "framework_set_scheduler", 00:24:51.448 "params": { 00:24:51.448 "name": "static" 00:24:51.448 } 00:24:51.448 } 00:24:51.448 ] 00:24:51.448 }, 00:24:51.448 { 00:24:51.448 "subsystem": "nvmf", 00:24:51.448 "config": [ 00:24:51.448 { 00:24:51.448 "method": "nvmf_set_config", 00:24:51.448 "params": { 00:24:51.448 "discovery_filter": "match_any", 00:24:51.448 "admin_cmd_passthru": { 00:24:51.448 "identify_ctrlr": false 00:24:51.448 }, 00:24:51.448 "dhchap_digests": [ 00:24:51.448 "sha256", 00:24:51.448 "sha384", 00:24:51.448 "sha512" 00:24:51.448 ], 00:24:51.448 "dhchap_dhgroups": [ 00:24:51.448 "null", 00:24:51.448 "ffdhe2048", 00:24:51.448 "ffdhe3072", 00:24:51.448 "ffdhe4096", 00:24:51.448 "ffdhe6144", 00:24:51.448 "ffdhe8192" 00:24:51.448 ] 00:24:51.448 } 00:24:51.448 }, 00:24:51.448 { 00:24:51.448 "method": "nvmf_set_max_subsystems", 00:24:51.448 "params": { 00:24:51.448 "max_subsystems": 1024 00:24:51.448 } 00:24:51.448 }, 00:24:51.448 { 00:24:51.448 "method": "nvmf_set_crdt", 00:24:51.449 "params": { 00:24:51.449 "crdt1": 0, 00:24:51.449 "crdt2": 0, 00:24:51.449 "crdt3": 0 00:24:51.449 } 00:24:51.449 }, 00:24:51.449 { 00:24:51.449 "method": "nvmf_create_transport", 00:24:51.449 "params": { 00:24:51.449 "trtype": "TCP", 00:24:51.449 "max_queue_depth": 
128, 00:24:51.449 "max_io_qpairs_per_ctrlr": 127, 00:24:51.449 "in_capsule_data_size": 4096, 00:24:51.449 "max_io_size": 131072, 00:24:51.449 "io_unit_size": 131072, 00:24:51.449 "max_aq_depth": 128, 00:24:51.449 "num_shared_buffers": 511, 00:24:51.449 "buf_cache_size": 4294967295, 00:24:51.449 "dif_insert_or_strip": false, 00:24:51.449 "zcopy": false, 00:24:51.449 "c2h_success": false, 00:24:51.449 "sock_priority": 0, 00:24:51.449 "abort_timeout_sec": 1, 00:24:51.449 "ack_timeout": 0, 00:24:51.449 "data_wr_pool_size": 0 00:24:51.449 } 00:24:51.449 }, 00:24:51.449 { 00:24:51.449 "method": "nvmf_create_subsystem", 00:24:51.449 "params": { 00:24:51.449 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:51.449 "allow_any_host": false, 00:24:51.449 "serial_number": "00000000000000000000", 00:24:51.449 "model_number": "SPDK bdev Controller", 00:24:51.449 "max_namespaces": 32, 00:24:51.449 "min_cntlid": 1, 00:24:51.449 "max_cntlid": 65519, 00:24:51.449 "ana_reporting": false 00:24:51.449 } 00:24:51.449 }, 00:24:51.449 { 00:24:51.449 "method": "nvmf_subsystem_add_host", 00:24:51.449 "params": { 00:24:51.449 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:51.449 "host": "nqn.2016-06.io.spdk:host1", 00:24:51.449 "psk": "key0" 00:24:51.449 } 00:24:51.449 }, 00:24:51.449 { 00:24:51.449 "method": "nvmf_subsystem_add_ns", 00:24:51.449 "params": { 00:24:51.449 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:51.449 "namespace": { 00:24:51.449 "nsid": 1, 00:24:51.449 "bdev_name": "malloc0", 00:24:51.449 "nguid": "483350857AD64C81B19099DD4C57F514", 00:24:51.449 "uuid": "48335085-7ad6-4c81-b190-99dd4c57f514", 00:24:51.449 "no_auto_visible": false 00:24:51.449 } 00:24:51.449 } 00:24:51.449 }, 00:24:51.449 { 00:24:51.449 "method": "nvmf_subsystem_add_listener", 00:24:51.449 "params": { 00:24:51.449 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:51.449 "listen_address": { 00:24:51.449 "trtype": "TCP", 00:24:51.449 "adrfam": "IPv4", 00:24:51.449 "traddr": "10.0.0.2", 00:24:51.449 "trsvcid": "4420" 00:24:51.449 }, 00:24:51.449 "secure_channel": false, 00:24:51.449 "sock_impl": "ssl" 00:24:51.449 } 00:24:51.449 } 00:24:51.449 ] 00:24:51.449 } 00:24:51.449 ] 00:24:51.449 }' 00:24:51.449 18:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:51.709 18:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:24:51.709 "subsystems": [ 00:24:51.709 { 00:24:51.709 "subsystem": "keyring", 00:24:51.709 "config": [ 00:24:51.709 { 00:24:51.709 "method": "keyring_file_add_key", 00:24:51.709 "params": { 00:24:51.709 "name": "key0", 00:24:51.709 "path": "/tmp/tmp.zTtb9KJrKD" 00:24:51.709 } 00:24:51.709 } 00:24:51.709 ] 00:24:51.710 }, 00:24:51.710 { 00:24:51.710 "subsystem": "iobuf", 00:24:51.710 "config": [ 00:24:51.710 { 00:24:51.710 "method": "iobuf_set_options", 00:24:51.710 "params": { 00:24:51.710 "small_pool_count": 8192, 00:24:51.710 "large_pool_count": 1024, 00:24:51.710 "small_bufsize": 8192, 00:24:51.710 "large_bufsize": 135168 00:24:51.710 } 00:24:51.710 } 00:24:51.710 ] 00:24:51.710 }, 00:24:51.710 { 00:24:51.710 "subsystem": "sock", 00:24:51.710 "config": [ 00:24:51.710 { 00:24:51.710 "method": "sock_set_default_impl", 00:24:51.710 "params": { 00:24:51.710 "impl_name": "posix" 00:24:51.710 } 00:24:51.710 }, 00:24:51.710 { 00:24:51.710 "method": "sock_impl_set_options", 00:24:51.710 "params": { 00:24:51.710 "impl_name": "ssl", 00:24:51.710 "recv_buf_size": 4096, 00:24:51.710 
"send_buf_size": 4096, 00:24:51.710 "enable_recv_pipe": true, 00:24:51.710 "enable_quickack": false, 00:24:51.710 "enable_placement_id": 0, 00:24:51.710 "enable_zerocopy_send_server": true, 00:24:51.710 "enable_zerocopy_send_client": false, 00:24:51.710 "zerocopy_threshold": 0, 00:24:51.710 "tls_version": 0, 00:24:51.710 "enable_ktls": false 00:24:51.710 } 00:24:51.710 }, 00:24:51.710 { 00:24:51.710 "method": "sock_impl_set_options", 00:24:51.710 "params": { 00:24:51.710 "impl_name": "posix", 00:24:51.710 "recv_buf_size": 2097152, 00:24:51.710 "send_buf_size": 2097152, 00:24:51.710 "enable_recv_pipe": true, 00:24:51.710 "enable_quickack": false, 00:24:51.710 "enable_placement_id": 0, 00:24:51.710 "enable_zerocopy_send_server": true, 00:24:51.710 "enable_zerocopy_send_client": false, 00:24:51.710 "zerocopy_threshold": 0, 00:24:51.710 "tls_version": 0, 00:24:51.710 "enable_ktls": false 00:24:51.710 } 00:24:51.710 } 00:24:51.710 ] 00:24:51.710 }, 00:24:51.710 { 00:24:51.710 "subsystem": "vmd", 00:24:51.710 "config": [] 00:24:51.710 }, 00:24:51.710 { 00:24:51.710 "subsystem": "accel", 00:24:51.710 "config": [ 00:24:51.710 { 00:24:51.710 "method": "accel_set_options", 00:24:51.710 "params": { 00:24:51.710 "small_cache_size": 128, 00:24:51.710 "large_cache_size": 16, 00:24:51.710 "task_count": 2048, 00:24:51.710 "sequence_count": 2048, 00:24:51.710 "buf_count": 2048 00:24:51.710 } 00:24:51.710 } 00:24:51.710 ] 00:24:51.710 }, 00:24:51.710 { 00:24:51.710 "subsystem": "bdev", 00:24:51.710 "config": [ 00:24:51.710 { 00:24:51.710 "method": "bdev_set_options", 00:24:51.710 "params": { 00:24:51.710 "bdev_io_pool_size": 65535, 00:24:51.710 "bdev_io_cache_size": 256, 00:24:51.710 "bdev_auto_examine": true, 00:24:51.710 "iobuf_small_cache_size": 128, 00:24:51.710 "iobuf_large_cache_size": 16 00:24:51.710 } 00:24:51.710 }, 00:24:51.710 { 00:24:51.710 "method": "bdev_raid_set_options", 00:24:51.710 "params": { 00:24:51.710 "process_window_size_kb": 1024, 00:24:51.710 "process_max_bandwidth_mb_sec": 0 00:24:51.710 } 00:24:51.710 }, 00:24:51.710 { 00:24:51.710 "method": "bdev_iscsi_set_options", 00:24:51.710 "params": { 00:24:51.710 "timeout_sec": 30 00:24:51.710 } 00:24:51.710 }, 00:24:51.710 { 00:24:51.710 "method": "bdev_nvme_set_options", 00:24:51.710 "params": { 00:24:51.710 "action_on_timeout": "none", 00:24:51.710 "timeout_us": 0, 00:24:51.710 "timeout_admin_us": 0, 00:24:51.710 "keep_alive_timeout_ms": 10000, 00:24:51.710 "arbitration_burst": 0, 00:24:51.710 "low_priority_weight": 0, 00:24:51.710 "medium_priority_weight": 0, 00:24:51.710 "high_priority_weight": 0, 00:24:51.710 "nvme_adminq_poll_period_us": 10000, 00:24:51.710 "nvme_ioq_poll_period_us": 0, 00:24:51.710 "io_queue_requests": 512, 00:24:51.710 "delay_cmd_submit": true, 00:24:51.710 "transport_retry_count": 4, 00:24:51.710 "bdev_retry_count": 3, 00:24:51.710 "transport_ack_timeout": 0, 00:24:51.710 "ctrlr_loss_timeout_sec": 0, 00:24:51.710 "reconnect_delay_sec": 0, 00:24:51.710 "fast_io_fail_timeout_sec": 0, 00:24:51.710 "disable_auto_failback": false, 00:24:51.710 "generate_uuids": false, 00:24:51.710 "transport_tos": 0, 00:24:51.710 "nvme_error_stat": false, 00:24:51.710 "rdma_srq_size": 0, 00:24:51.710 "io_path_stat": false, 00:24:51.710 "allow_accel_sequence": false, 00:24:51.710 "rdma_max_cq_size": 0, 00:24:51.710 "rdma_cm_event_timeout_ms": 0, 00:24:51.710 "dhchap_digests": [ 00:24:51.710 "sha256", 00:24:51.710 "sha384", 00:24:51.710 "sha512" 00:24:51.710 ], 00:24:51.710 "dhchap_dhgroups": [ 00:24:51.710 "null", 00:24:51.710 
"ffdhe2048", 00:24:51.710 "ffdhe3072", 00:24:51.710 "ffdhe4096", 00:24:51.710 "ffdhe6144", 00:24:51.710 "ffdhe8192" 00:24:51.710 ] 00:24:51.710 } 00:24:51.710 }, 00:24:51.710 { 00:24:51.710 "method": "bdev_nvme_attach_controller", 00:24:51.710 "params": { 00:24:51.710 "name": "nvme0", 00:24:51.710 "trtype": "TCP", 00:24:51.710 "adrfam": "IPv4", 00:24:51.710 "traddr": "10.0.0.2", 00:24:51.710 "trsvcid": "4420", 00:24:51.710 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:51.710 "prchk_reftag": false, 00:24:51.710 "prchk_guard": false, 00:24:51.710 "ctrlr_loss_timeout_sec": 0, 00:24:51.710 "reconnect_delay_sec": 0, 00:24:51.710 "fast_io_fail_timeout_sec": 0, 00:24:51.710 "psk": "key0", 00:24:51.710 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:51.710 "hdgst": false, 00:24:51.710 "ddgst": false, 00:24:51.710 "multipath": "multipath" 00:24:51.710 } 00:24:51.710 }, 00:24:51.710 { 00:24:51.710 "method": "bdev_nvme_set_hotplug", 00:24:51.710 "params": { 00:24:51.710 "period_us": 100000, 00:24:51.710 "enable": false 00:24:51.710 } 00:24:51.710 }, 00:24:51.710 { 00:24:51.710 "method": "bdev_enable_histogram", 00:24:51.710 "params": { 00:24:51.710 "name": "nvme0n1", 00:24:51.710 "enable": true 00:24:51.710 } 00:24:51.710 }, 00:24:51.710 { 00:24:51.710 "method": "bdev_wait_for_examine" 00:24:51.710 } 00:24:51.710 ] 00:24:51.710 }, 00:24:51.710 { 00:24:51.710 "subsystem": "nbd", 00:24:51.710 "config": [] 00:24:51.710 } 00:24:51.710 ] 00:24:51.710 }' 00:24:51.710 18:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 1246435 00:24:51.710 18:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1246435 ']' 00:24:51.710 18:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1246435 00:24:51.710 18:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:51.710 18:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:51.710 18:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1246435 00:24:51.970 18:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:51.970 18:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:51.970 18:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1246435' 00:24:51.970 killing process with pid 1246435 00:24:51.970 18:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1246435 00:24:51.970 Received shutdown signal, test time was about 1.000000 seconds 00:24:51.970 00:24:51.970 Latency(us) 00:24:51.970 [2024-10-08T16:35:20.507Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:51.970 [2024-10-08T16:35:20.507Z] =================================================================================================================== 00:24:51.970 [2024-10-08T16:35:20.507Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:51.970 18:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1246435 00:24:52.229 18:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 1246292 00:24:52.229 18:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1246292 ']' 00:24:52.229 18:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 
-- # kill -0 1246292 00:24:52.229 18:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:52.229 18:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:52.229 18:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1246292 00:24:52.229 18:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:52.229 18:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:52.229 18:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1246292' 00:24:52.229 killing process with pid 1246292 00:24:52.229 18:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1246292 00:24:52.229 18:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1246292 00:24:52.798 18:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:24:52.798 18:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:52.798 18:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:24:52.798 "subsystems": [ 00:24:52.798 { 00:24:52.798 "subsystem": "keyring", 00:24:52.798 "config": [ 00:24:52.798 { 00:24:52.798 "method": "keyring_file_add_key", 00:24:52.798 "params": { 00:24:52.798 "name": "key0", 00:24:52.798 "path": "/tmp/tmp.zTtb9KJrKD" 00:24:52.798 } 00:24:52.798 } 00:24:52.798 ] 00:24:52.798 }, 00:24:52.798 { 00:24:52.798 "subsystem": "iobuf", 00:24:52.798 "config": [ 00:24:52.798 { 00:24:52.798 "method": "iobuf_set_options", 00:24:52.798 "params": { 00:24:52.798 "small_pool_count": 8192, 00:24:52.798 "large_pool_count": 1024, 00:24:52.798 "small_bufsize": 8192, 00:24:52.798 "large_bufsize": 135168 00:24:52.798 } 00:24:52.798 } 00:24:52.798 ] 00:24:52.798 }, 00:24:52.798 { 00:24:52.798 "subsystem": "sock", 00:24:52.798 "config": [ 00:24:52.798 { 00:24:52.798 "method": "sock_set_default_impl", 00:24:52.798 "params": { 00:24:52.798 "impl_name": "posix" 00:24:52.798 } 00:24:52.798 }, 00:24:52.798 { 00:24:52.798 "method": "sock_impl_set_options", 00:24:52.798 "params": { 00:24:52.798 "impl_name": "ssl", 00:24:52.798 "recv_buf_size": 4096, 00:24:52.798 "send_buf_size": 4096, 00:24:52.798 "enable_recv_pipe": true, 00:24:52.798 "enable_quickack": false, 00:24:52.798 "enable_placement_id": 0, 00:24:52.798 "enable_zerocopy_send_server": true, 00:24:52.798 "enable_zerocopy_send_client": false, 00:24:52.798 "zerocopy_threshold": 0, 00:24:52.798 "tls_version": 0, 00:24:52.798 "enable_ktls": false 00:24:52.798 } 00:24:52.798 }, 00:24:52.798 { 00:24:52.798 "method": "sock_impl_set_options", 00:24:52.798 "params": { 00:24:52.798 "impl_name": "posix", 00:24:52.798 "recv_buf_size": 2097152, 00:24:52.798 "send_buf_size": 2097152, 00:24:52.798 "enable_recv_pipe": true, 00:24:52.798 "enable_quickack": false, 00:24:52.798 "enable_placement_id": 0, 00:24:52.798 "enable_zerocopy_send_server": true, 00:24:52.798 "enable_zerocopy_send_client": false, 00:24:52.798 "zerocopy_threshold": 0, 00:24:52.798 "tls_version": 0, 00:24:52.798 "enable_ktls": false 00:24:52.798 } 00:24:52.798 } 00:24:52.798 ] 00:24:52.798 }, 00:24:52.798 { 00:24:52.798 "subsystem": "vmd", 00:24:52.798 "config": [] 00:24:52.798 }, 00:24:52.798 { 00:24:52.798 "subsystem": "accel", 00:24:52.798 "config": [ 
00:24:52.798 { 00:24:52.799 "method": "accel_set_options", 00:24:52.799 "params": { 00:24:52.799 "small_cache_size": 128, 00:24:52.799 "large_cache_size": 16, 00:24:52.799 "task_count": 2048, 00:24:52.799 "sequence_count": 2048, 00:24:52.799 "buf_count": 2048 00:24:52.799 } 00:24:52.799 } 00:24:52.799 ] 00:24:52.799 }, 00:24:52.799 { 00:24:52.799 "subsystem": "bdev", 00:24:52.799 "config": [ 00:24:52.799 { 00:24:52.799 "method": "bdev_set_options", 00:24:52.799 "params": { 00:24:52.799 "bdev_io_pool_size": 65535, 00:24:52.799 "bdev_io_cache_size": 256, 00:24:52.799 "bdev_auto_examine": true, 00:24:52.799 "iobuf_small_cache_size": 128, 00:24:52.799 "iobuf_large_cache_size": 16 00:24:52.799 } 00:24:52.799 }, 00:24:52.799 { 00:24:52.799 "method": "bdev_raid_set_options", 00:24:52.799 "params": { 00:24:52.799 "process_window_size_kb": 1024, 00:24:52.799 "process_max_bandwidth_mb_sec": 0 00:24:52.799 } 00:24:52.799 }, 00:24:52.799 { 00:24:52.799 "method": "bdev_iscsi_set_options", 00:24:52.799 "params": { 00:24:52.799 "timeout_sec": 30 00:24:52.799 } 00:24:52.799 }, 00:24:52.799 { 00:24:52.799 "method": "bdev_nvme_set_options", 00:24:52.799 "params": { 00:24:52.799 "action_on_timeout": "none", 00:24:52.799 "timeout_us": 0, 00:24:52.799 "timeout_admin_us": 0, 00:24:52.799 "keep_alive_timeout_ms": 10000, 00:24:52.799 "arbitration_burst": 0, 00:24:52.799 "low_priority_weight": 0, 00:24:52.799 "medium_priority_weight": 0, 00:24:52.799 "high_priority_weight": 0, 00:24:52.799 "nvme_adminq_poll_period_us": 10000, 00:24:52.799 "nvme_ioq_poll_period_us": 0, 00:24:52.799 "io_queue_requests": 0, 00:24:52.799 "delay_cmd_submit": true, 00:24:52.799 "transport_retry_count": 4, 00:24:52.799 "bdev_retry_count": 3, 00:24:52.799 "transport_ack_timeout": 0, 00:24:52.799 "ctrlr_loss_timeout_sec": 0, 00:24:52.799 "reconnect_delay_sec": 0, 00:24:52.799 "fast_io_fail_timeout_sec": 0, 00:24:52.799 "disable_auto_failback": false, 00:24:52.799 "generate_uuids": false, 00:24:52.799 "transport_tos": 0, 00:24:52.799 "nvme_error_stat": false, 00:24:52.799 "rdma_srq_size": 0, 00:24:52.799 "io_path_stat": false, 00:24:52.799 "allow_accel_sequence": false, 00:24:52.799 "rdma_max_cq_size": 0, 00:24:52.799 "rdma_cm_event_timeout_ms": 0, 00:24:52.799 "dhchap_digests": [ 00:24:52.799 "sha256", 00:24:52.799 "sha384", 00:24:52.799 "sha512" 00:24:52.799 ], 00:24:52.799 "dhchap_dhgroups": [ 00:24:52.799 "null", 00:24:52.799 "ffdhe2048", 00:24:52.799 "ffdhe3072", 00:24:52.799 "ffdhe4096", 00:24:52.799 "ffdhe6144", 00:24:52.799 "ffdhe8192" 00:24:52.799 ] 00:24:52.799 } 00:24:52.799 }, 00:24:52.799 { 00:24:52.799 "method": "bdev_nvme_set_hotplug", 00:24:52.799 "params": { 00:24:52.799 "period_us": 100000, 00:24:52.799 "enable": false 00:24:52.799 } 00:24:52.799 }, 00:24:52.799 { 00:24:52.799 "method": "bdev_malloc_create", 00:24:52.799 "params": { 00:24:52.799 "name": "malloc0", 00:24:52.799 "num_blocks": 8192, 00:24:52.799 "block_size": 4096, 00:24:52.799 "physical_block_size": 4096, 00:24:52.799 "uuid": "48335085-7ad6-4c81-b190-99dd4c57f514", 00:24:52.799 "optimal_io_boundary": 0, 00:24:52.799 "md_size": 0, 00:24:52.799 "dif_type": 0, 00:24:52.799 "dif_is_head_of_md": false, 00:24:52.799 "dif_pi_format": 0 00:24:52.799 } 00:24:52.799 }, 00:24:52.799 { 00:24:52.799 "method": "bdev_wait_for_examine" 00:24:52.799 } 00:24:52.799 ] 00:24:52.799 }, 00:24:52.799 { 00:24:52.799 "subsystem": "nbd", 00:24:52.799 "config": [] 00:24:52.799 }, 00:24:52.799 { 00:24:52.799 "subsystem": "scheduler", 00:24:52.799 "config": [ 00:24:52.799 { 00:24:52.799 
"method": "framework_set_scheduler", 00:24:52.799 "params": { 00:24:52.799 "name": "static" 00:24:52.799 } 00:24:52.799 } 00:24:52.799 ] 00:24:52.799 }, 00:24:52.799 { 00:24:52.799 "subsystem": "nvmf", 00:24:52.799 "config": [ 00:24:52.799 { 00:24:52.799 "method": "nvmf_set_config", 00:24:52.799 "params": { 00:24:52.799 "discovery_filter": "match_any", 00:24:52.799 "admin_cmd_passthru": { 00:24:52.799 "identify_ctrlr": false 00:24:52.799 }, 00:24:52.799 "dhchap_digests": [ 00:24:52.799 "sha256", 00:24:52.799 "sha384", 00:24:52.799 "sha512" 00:24:52.799 ], 00:24:52.799 "dhchap_dhgroups": [ 00:24:52.799 "null", 00:24:52.799 "ffdhe2048", 00:24:52.799 "ffdhe3072", 00:24:52.799 "ffdhe4096", 00:24:52.799 "ffdhe6144", 00:24:52.799 "ffdhe8192" 00:24:52.799 ] 00:24:52.799 } 00:24:52.799 }, 00:24:52.799 { 00:24:52.799 "method": "nvmf_set_max_subsystems", 00:24:52.799 "params": { 00:24:52.799 "max_subsystems": 1024 00:24:52.799 } 00:24:52.799 }, 00:24:52.799 { 00:24:52.799 "method": "nvmf_set_crdt", 00:24:52.799 "params": { 00:24:52.799 "crdt1": 0, 00:24:52.799 "crdt2": 0, 00:24:52.799 "crdt3": 0 00:24:52.799 } 00:24:52.799 }, 00:24:52.799 { 00:24:52.799 "method": "nvmf_create_transport", 00:24:52.799 "params": { 00:24:52.799 "trtype": "TCP", 00:24:52.799 "max_queue_depth": 128, 00:24:52.799 "max_io_qpairs_per_ctrlr": 127, 00:24:52.799 "in_capsule_data_size": 4096, 00:24:52.799 "max_io_size": 131072, 00:24:52.799 "io_unit_size": 131072, 00:24:52.799 "max_aq_depth": 128, 00:24:52.799 "num_shared_buffers": 511, 00:24:52.799 "buf_cache_size": 4294967295, 00:24:52.799 "dif_insert_or_strip": false, 00:24:52.799 "zcopy": false, 00:24:52.799 "c2h_success": false, 00:24:52.799 "sock_priority": 0, 00:24:52.799 "abort_timeout_sec": 1, 00:24:52.800 "ack_timeout": 0, 00:24:52.800 "data_wr_pool_size": 0 00:24:52.800 } 00:24:52.800 }, 00:24:52.800 { 00:24:52.800 "method": "nvmf_create_subsystem", 00:24:52.800 "params": { 00:24:52.800 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:52.800 "allow_any_host": false, 00:24:52.800 "serial_number": "00000000000000000000", 00:24:52.800 "model_number": "SPDK bdev Controller", 00:24:52.800 "max_namespaces": 32, 00:24:52.800 "min_cntlid": 1, 00:24:52.800 "max_cntlid": 65519, 00:24:52.800 "ana_reporting": false 00:24:52.800 } 00:24:52.800 }, 00:24:52.800 { 00:24:52.800 "method": "nvmf_subsystem_add_host", 00:24:52.800 "params": { 00:24:52.800 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:52.800 "host": "nqn.2016-06.io.spdk:host1", 00:24:52.800 "psk": "key0" 00:24:52.800 } 00:24:52.800 }, 00:24:52.800 { 00:24:52.800 "method": "nvmf_subsystem_add_ns", 00:24:52.800 "params": { 00:24:52.800 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:52.800 "namespace": { 00:24:52.800 "nsid": 1, 00:24:52.800 "bdev_name": "malloc0", 00:24:52.800 "nguid": "483350857AD64C81B19099DD4C57F514", 00:24:52.800 "uuid": "48335085-7ad6-4c81-b190-99dd4c57f514", 00:24:52.800 "no_auto_visible": false 00:24:52.800 } 00:24:52.800 } 00:24:52.800 }, 00:24:52.800 { 00:24:52.800 "method": "nvmf_subsystem_add_listener", 00:24:52.800 "params": { 00:24:52.800 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:52.800 "listen_address": { 00:24:52.800 "trtype": "TCP", 00:24:52.800 "adrfam": "IPv4", 00:24:52.800 "traddr": "10.0.0.2", 00:24:52.800 "trsvcid": "4420" 00:24:52.800 }, 00:24:52.800 "secure_channel": false, 00:24:52.800 "sock_impl": "ssl" 00:24:52.800 } 00:24:52.800 } 00:24:52.800 ] 00:24:52.800 } 00:24:52.800 ] 00:24:52.800 }' 00:24:52.800 18:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # 
xtrace_disable 00:24:52.800 18:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:52.800 18:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1246973 00:24:52.800 18:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:24:52.800 18:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1246973 00:24:52.800 18:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1246973 ']' 00:24:52.800 18:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:52.800 18:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:52.800 18:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:52.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:52.800 18:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:52.800 18:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:52.800 [2024-10-08 18:35:21.237045] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:24:52.800 [2024-10-08 18:35:21.237228] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:53.060 [2024-10-08 18:35:21.393409] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:53.321 [2024-10-08 18:35:21.609126] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:53.321 [2024-10-08 18:35:21.609231] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:53.321 [2024-10-08 18:35:21.609267] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:53.321 [2024-10-08 18:35:21.609299] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:53.321 [2024-10-08 18:35:21.609326] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:53.321 [2024-10-08 18:35:21.610576] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:24:53.580 [2024-10-08 18:35:21.951317] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:53.580 [2024-10-08 18:35:21.984031] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:53.580 [2024-10-08 18:35:21.984487] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:53.580 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:53.580 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:53.580 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:53.580 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:53.580 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:53.580 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:53.580 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=1247003 00:24:53.580 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 1247003 /var/tmp/bdevperf.sock 00:24:53.580 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1247003 ']' 00:24:53.580 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:53.580 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:24:53.580 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:53.580 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:53.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:24:53.580 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:24:53.580 "subsystems": [ 00:24:53.580 { 00:24:53.580 "subsystem": "keyring", 00:24:53.580 "config": [ 00:24:53.580 { 00:24:53.580 "method": "keyring_file_add_key", 00:24:53.580 "params": { 00:24:53.580 "name": "key0", 00:24:53.580 "path": "/tmp/tmp.zTtb9KJrKD" 00:24:53.580 } 00:24:53.580 } 00:24:53.580 ] 00:24:53.580 }, 00:24:53.580 { 00:24:53.580 "subsystem": "iobuf", 00:24:53.580 "config": [ 00:24:53.580 { 00:24:53.580 "method": "iobuf_set_options", 00:24:53.580 "params": { 00:24:53.580 "small_pool_count": 8192, 00:24:53.580 "large_pool_count": 1024, 00:24:53.580 "small_bufsize": 8192, 00:24:53.580 "large_bufsize": 135168 00:24:53.580 } 00:24:53.580 } 00:24:53.580 ] 00:24:53.580 }, 00:24:53.580 { 00:24:53.580 "subsystem": "sock", 00:24:53.580 "config": [ 00:24:53.580 { 00:24:53.580 "method": "sock_set_default_impl", 00:24:53.580 "params": { 00:24:53.580 "impl_name": "posix" 00:24:53.580 } 00:24:53.580 }, 00:24:53.580 { 00:24:53.580 "method": "sock_impl_set_options", 00:24:53.580 "params": { 00:24:53.580 "impl_name": "ssl", 00:24:53.580 "recv_buf_size": 4096, 00:24:53.580 "send_buf_size": 4096, 00:24:53.580 "enable_recv_pipe": true, 00:24:53.580 "enable_quickack": false, 00:24:53.580 "enable_placement_id": 0, 00:24:53.580 "enable_zerocopy_send_server": true, 00:24:53.580 "enable_zerocopy_send_client": false, 00:24:53.580 "zerocopy_threshold": 0, 00:24:53.580 "tls_version": 0, 00:24:53.580 "enable_ktls": false 00:24:53.580 } 00:24:53.580 }, 00:24:53.580 { 00:24:53.580 "method": "sock_impl_set_options", 00:24:53.580 "params": { 00:24:53.580 "impl_name": "posix", 00:24:53.580 "recv_buf_size": 2097152, 00:24:53.580 "send_buf_size": 2097152, 00:24:53.580 "enable_recv_pipe": true, 00:24:53.580 "enable_quickack": false, 00:24:53.580 "enable_placement_id": 0, 00:24:53.580 "enable_zerocopy_send_server": true, 00:24:53.580 "enable_zerocopy_send_client": false, 00:24:53.580 "zerocopy_threshold": 0, 00:24:53.580 "tls_version": 0, 00:24:53.580 "enable_ktls": false 00:24:53.580 } 00:24:53.580 } 00:24:53.580 ] 00:24:53.580 }, 00:24:53.580 { 00:24:53.580 "subsystem": "vmd", 00:24:53.580 "config": [] 00:24:53.580 }, 00:24:53.580 { 00:24:53.580 "subsystem": "accel", 00:24:53.580 "config": [ 00:24:53.580 { 00:24:53.580 "method": "accel_set_options", 00:24:53.580 "params": { 00:24:53.580 "small_cache_size": 128, 00:24:53.580 "large_cache_size": 16, 00:24:53.580 "task_count": 2048, 00:24:53.580 "sequence_count": 2048, 00:24:53.580 "buf_count": 2048 00:24:53.580 } 00:24:53.580 } 00:24:53.580 ] 00:24:53.580 }, 00:24:53.580 { 00:24:53.580 "subsystem": "bdev", 00:24:53.580 "config": [ 00:24:53.580 { 00:24:53.580 "method": "bdev_set_options", 00:24:53.580 "params": { 00:24:53.580 "bdev_io_pool_size": 65535, 00:24:53.580 "bdev_io_cache_size": 256, 00:24:53.580 "bdev_auto_examine": true, 00:24:53.580 "iobuf_small_cache_size": 128, 00:24:53.580 "iobuf_large_cache_size": 16 00:24:53.580 } 00:24:53.580 }, 00:24:53.580 { 00:24:53.580 "method": "bdev_raid_set_options", 00:24:53.580 "params": { 00:24:53.580 "process_window_size_kb": 1024, 00:24:53.580 "process_max_bandwidth_mb_sec": 0 00:24:53.580 } 00:24:53.580 }, 00:24:53.580 { 00:24:53.580 "method": "bdev_iscsi_set_options", 00:24:53.581 "params": { 00:24:53.581 "timeout_sec": 30 00:24:53.581 } 00:24:53.581 }, 00:24:53.581 { 00:24:53.581 "method": "bdev_nvme_set_options", 00:24:53.581 "params": { 00:24:53.581 "action_on_timeout": "none", 00:24:53.581 "timeout_us": 0, 
00:24:53.581 "timeout_admin_us": 0, 00:24:53.581 "keep_alive_timeout_ms": 10000, 00:24:53.581 "arbitration_burst": 0, 00:24:53.581 "low_priority_weight": 0, 00:24:53.581 "medium_priority_weight": 0, 00:24:53.581 "high_priority_weight": 0, 00:24:53.581 "nvme_adminq_poll_period_us": 10000, 00:24:53.581 "nvme_ioq_poll_period_us": 0, 00:24:53.581 "io_queue_requests": 512, 00:24:53.581 "delay_cmd_submit": true, 00:24:53.581 "transport_retry_count": 4, 00:24:53.581 "bdev_retry_count": 3, 00:24:53.581 "transport_ack_timeout": 0, 00:24:53.581 "ctrlr_loss_timeout_sec": 0, 00:24:53.581 "reconnect_delay_sec": 0, 00:24:53.581 "fast_io_fail_timeout_sec": 0, 00:24:53.581 "disable_auto_failback": false, 00:24:53.581 "generate_uuids": false, 00:24:53.581 "transport_tos": 0, 00:24:53.581 "nvme_error_stat": false, 00:24:53.581 "rdma_srq_size": 0, 00:24:53.581 "io_path_stat": false, 00:24:53.581 "allow_accel_sequence": false, 00:24:53.581 "rdma_max_cq_size": 0, 00:24:53.581 "rdma_cm_event_timeout_ms": 0, 00:24:53.581 "dhchap_digests": [ 00:24:53.581 "sha256", 00:24:53.581 "sha384", 00:24:53.581 "sha512" 00:24:53.581 ], 00:24:53.581 "dhchap_dhgroups": [ 00:24:53.581 "null", 00:24:53.581 "ffdhe2048", 00:24:53.581 "ffdhe3072", 00:24:53.581 "ffdhe4096", 00:24:53.581 "ffdhe6144", 00:24:53.581 "ffdhe8192" 00:24:53.581 ] 00:24:53.581 } 00:24:53.581 }, 00:24:53.581 { 00:24:53.581 "method": "bdev_nvme_attach_controller", 00:24:53.581 "params": { 00:24:53.581 "name": "nvme0", 00:24:53.581 "trtype": "TCP", 00:24:53.581 "adrfam": "IPv4", 00:24:53.581 "traddr": "10.0.0.2", 00:24:53.581 "trsvcid": "4420", 00:24:53.581 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:53.581 "prchk_reftag": false, 00:24:53.581 "prchk_guard": false, 00:24:53.581 "ctrlr_loss_timeout_sec": 0, 00:24:53.581 "reconnect_delay_sec": 0, 00:24:53.581 "fast_io_fail_timeout_sec": 0, 00:24:53.581 "psk": "key0", 00:24:53.581 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:53.581 "hdgst": false, 00:24:53.581 "ddgst": false, 00:24:53.581 "multipath": "multipath" 00:24:53.581 } 00:24:53.581 }, 00:24:53.581 { 00:24:53.581 "method": "bdev_nvme_set_hotplug", 00:24:53.581 "params": { 00:24:53.581 "period_us": 100000, 00:24:53.581 "enable": false 00:24:53.581 } 00:24:53.581 }, 00:24:53.581 { 00:24:53.581 "method": "bdev_enable_histogram", 00:24:53.581 "params": { 00:24:53.581 "name": "nvme0n1", 00:24:53.581 "enable": true 00:24:53.581 } 00:24:53.581 }, 00:24:53.581 { 00:24:53.581 "method": "bdev_wait_for_examine" 00:24:53.581 } 00:24:53.581 ] 00:24:53.581 }, 00:24:53.581 { 00:24:53.581 "subsystem": "nbd", 00:24:53.581 "config": [] 00:24:53.581 } 00:24:53.581 ] 00:24:53.581 }' 00:24:53.581 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:53.581 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:53.581 [2024-10-08 18:35:22.115170] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
00:24:53.581 [2024-10-08 18:35:22.115265] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1247003 ] 00:24:53.840 [2024-10-08 18:35:22.184927] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:53.840 [2024-10-08 18:35:22.311353] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:24:54.100 [2024-10-08 18:35:22.561991] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:55.036 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:55.036 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:55.036 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:55.036 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:24:55.296 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:55.296 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:55.554 Running I/O for 1 seconds... 00:24:56.489 2860.00 IOPS, 11.17 MiB/s 00:24:56.489 Latency(us) 00:24:56.489 [2024-10-08T16:35:25.026Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:56.489 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:56.489 Verification LBA range: start 0x0 length 0x2000 00:24:56.489 nvme0n1 : 1.02 2933.10 11.46 0.00 0.00 43162.48 6505.05 57089.14 00:24:56.489 [2024-10-08T16:35:25.026Z] =================================================================================================================== 00:24:56.489 [2024-10-08T16:35:25.027Z] Total : 2933.10 11.46 0.00 0.00 43162.48 6505.05 57089.14 00:24:56.490 { 00:24:56.490 "results": [ 00:24:56.490 { 00:24:56.490 "job": "nvme0n1", 00:24:56.490 "core_mask": "0x2", 00:24:56.490 "workload": "verify", 00:24:56.490 "status": "finished", 00:24:56.490 "verify_range": { 00:24:56.490 "start": 0, 00:24:56.490 "length": 8192 00:24:56.490 }, 00:24:56.490 "queue_depth": 128, 00:24:56.490 "io_size": 4096, 00:24:56.490 "runtime": 1.019057, 00:24:56.490 "iops": 2933.103840118855, 00:24:56.490 "mibps": 11.457436875464277, 00:24:56.490 "io_failed": 0, 00:24:56.490 "io_timeout": 0, 00:24:56.490 "avg_latency_us": 43162.47588714174, 00:24:56.490 "min_latency_us": 6505.054814814815, 00:24:56.490 "max_latency_us": 57089.137777777774 00:24:56.490 } 00:24:56.490 ], 00:24:56.490 "core_count": 1 00:24:56.490 } 00:24:56.490 18:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:24:56.490 18:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:24:56.490 18:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:24:56.490 18:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:24:56.490 18:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:24:56.490 18:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id 
= --pid ']' 00:24:56.490 18:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:56.490 18:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:24:56.490 18:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:24:56.490 18:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:24:56.490 18:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:56.490 nvmf_trace.0 00:24:56.490 18:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:24:56.490 18:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1247003 00:24:56.490 18:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1247003 ']' 00:24:56.490 18:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1247003 00:24:56.490 18:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:56.490 18:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:56.490 18:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1247003 00:24:56.748 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:56.748 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:56.748 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1247003' 00:24:56.748 killing process with pid 1247003 00:24:56.748 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1247003 00:24:56.748 Received shutdown signal, test time was about 1.000000 seconds 00:24:56.748 00:24:56.748 Latency(us) 00:24:56.748 [2024-10-08T16:35:25.285Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:56.748 [2024-10-08T16:35:25.285Z] =================================================================================================================== 00:24:56.748 [2024-10-08T16:35:25.285Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:56.748 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1247003 00:24:57.008 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:24:57.008 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:57.008 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:24:57.008 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:57.008 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:24:57.008 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:57.008 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:57.008 rmmod nvme_tcp 00:24:57.008 rmmod nvme_fabrics 00:24:57.008 rmmod nvme_keyring 00:24:57.008 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:57.008 18:35:25 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:24:57.008 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:24:57.008 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@515 -- # '[' -n 1246973 ']' 00:24:57.008 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # killprocess 1246973 00:24:57.008 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1246973 ']' 00:24:57.008 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1246973 00:24:57.008 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:57.008 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:57.008 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1246973 00:24:57.268 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:57.268 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:57.268 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1246973' 00:24:57.268 killing process with pid 1246973 00:24:57.268 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1246973 00:24:57.268 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1246973 00:24:57.532 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:57.532 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:57.533 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:57.533 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:24:57.533 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-save 00:24:57.533 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:57.533 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-restore 00:24:57.533 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:57.533 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:57.533 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:57.533 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:57.533 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:00.140 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:00.140 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.WddADpWIw6 /tmp/tmp.fbTWEGouBg /tmp/tmp.zTtb9KJrKD 00:25:00.140 00:25:00.140 real 1m56.960s 00:25:00.140 user 3m25.855s 00:25:00.140 sys 0m32.495s 00:25:00.140 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:00.140 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:00.140 ************************************ 00:25:00.140 END TEST nvmf_tls 
00:25:00.140 ************************************ 00:25:00.140 18:35:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:25:00.140 18:35:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:00.140 18:35:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:00.140 18:35:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:00.140 ************************************ 00:25:00.140 START TEST nvmf_fips 00:25:00.140 ************************************ 00:25:00.140 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:25:00.140 * Looking for test storage... 00:25:00.140 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:25:00.140 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:00.140 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lcov --version 00:25:00.140 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:00.140 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:00.140 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:00.140 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:00.140 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:00.140 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:25:00.140 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:25:00.140 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:25:00.140 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:25:00.140 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:25:00.140 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:25:00.140 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:25:00.140 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:00.140 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:25:00.140 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:25:00.140 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:00.140 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:00.140 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:25:00.140 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:25:00.140 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:00.140 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:25:00.140 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:25:00.140 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:25:00.140 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:25:00.140 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:00.140 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:25:00.140 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:25:00.140 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:00.140 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:00.140 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:25:00.140 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:00.140 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:00.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:00.140 --rc genhtml_branch_coverage=1 00:25:00.140 --rc genhtml_function_coverage=1 00:25:00.140 --rc genhtml_legend=1 00:25:00.140 --rc geninfo_all_blocks=1 00:25:00.140 --rc geninfo_unexecuted_blocks=1 00:25:00.140 00:25:00.140 ' 00:25:00.140 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:00.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:00.140 --rc genhtml_branch_coverage=1 00:25:00.140 --rc genhtml_function_coverage=1 00:25:00.140 --rc genhtml_legend=1 00:25:00.140 --rc geninfo_all_blocks=1 00:25:00.140 --rc geninfo_unexecuted_blocks=1 00:25:00.140 00:25:00.140 ' 00:25:00.140 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:00.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:00.140 --rc genhtml_branch_coverage=1 00:25:00.140 --rc genhtml_function_coverage=1 00:25:00.140 --rc genhtml_legend=1 00:25:00.140 --rc geninfo_all_blocks=1 00:25:00.140 --rc geninfo_unexecuted_blocks=1 00:25:00.140 00:25:00.140 ' 00:25:00.140 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:00.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:00.140 --rc genhtml_branch_coverage=1 00:25:00.140 --rc genhtml_function_coverage=1 00:25:00.140 --rc genhtml_legend=1 00:25:00.140 --rc geninfo_all_blocks=1 00:25:00.140 --rc geninfo_unexecuted_blocks=1 00:25:00.140 00:25:00.140 ' 00:25:00.140 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:00.140 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:25:00.140 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:25:00.140 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:00.140 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:00.140 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:00.140 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:00.140 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:00.140 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:00.140 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:00.140 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:00.140 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:00.140 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:00.140 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:25:00.140 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:00.140 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:00.140 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:00.140 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:00.140 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:00.140 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:25:00.140 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:00.140 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:00.141 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:25:00.141 18:35:28 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:25:00.141 Error setting digest 00:25:00.141 40A2ADA7817F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:25:00.141 40A2ADA7817F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:00.141 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:00.142 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:00.142 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:25:00.142 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:00.142 
18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:00.142 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:00.142 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:00.142 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:00.142 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:00.142 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:00.142 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:00.142 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:00.142 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:00.142 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:25:00.142 18:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:03.446 18:35:31 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:25:03.446 Found 0000:84:00.0 (0x8086 - 0x159b) 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:25:03.446 Found 0000:84:00.1 (0x8086 - 0x159b) 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:03.446 18:35:31 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:25:03.446 Found net devices under 0000:84:00.0: cvl_0_0 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:25:03.446 Found net devices under 0000:84:00.1: cvl_0_1 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # is_hw=yes 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:03.446 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:03.447 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:03.447 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:03.447 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:03.447 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:03.447 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:03.447 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:03.447 18:35:31 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:03.447 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:03.447 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:03.447 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:03.447 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:03.447 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:03.447 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:03.447 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:03.447 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:03.447 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:03.447 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:03.447 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:03.447 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:03.447 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:03.447 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:03.447 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:03.447 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:25:03.447 00:25:03.447 --- 10.0.0.2 ping statistics --- 00:25:03.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:03.447 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:25:03.447 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:03.447 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:03.447 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:25:03.447 00:25:03.447 --- 10.0.0.1 ping statistics --- 00:25:03.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:03.447 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:25:03.447 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:03.447 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # return 0 00:25:03.447 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:03.447 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:03.447 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:03.447 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:03.447 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:03.447 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:03.447 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:03.447 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:25:03.447 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:03.447 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:03.447 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:03.447 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # nvmfpid=1249522 00:25:03.447 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:03.447 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # waitforlisten 1249522 00:25:03.447 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 1249522 ']' 00:25:03.447 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:03.447 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:03.447 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:03.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:03.447 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:03.447 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:03.447 [2024-10-08 18:35:31.576711] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
00:25:03.447 [2024-10-08 18:35:31.576797] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:03.447 [2024-10-08 18:35:31.687105] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:03.447 [2024-10-08 18:35:31.806873] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:03.447 [2024-10-08 18:35:31.806940] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:03.447 [2024-10-08 18:35:31.806957] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:03.447 [2024-10-08 18:35:31.806972] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:03.447 [2024-10-08 18:35:31.806983] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:03.447 [2024-10-08 18:35:31.807745] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:25:04.383 18:35:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:04.383 18:35:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:25:04.383 18:35:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:04.383 18:35:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:04.383 18:35:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:04.383 18:35:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:04.383 18:35:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:25:04.383 18:35:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:25:04.383 18:35:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:25:04.383 18:35:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.NZU 00:25:04.383 18:35:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:25:04.383 18:35:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.NZU 00:25:04.383 18:35:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.NZU 00:25:04.383 18:35:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.NZU 00:25:04.383 18:35:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:04.954 [2024-10-08 18:35:33.196891] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:04.954 [2024-10-08 18:35:33.213821] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:04.954 [2024-10-08 18:35:33.214274] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:04.954 malloc0 00:25:04.954 18:35:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:04.954 18:35:33 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=1249801 00:25:04.954 18:35:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:04.954 18:35:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 1249801 /var/tmp/bdevperf.sock 00:25:04.954 18:35:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 1249801 ']' 00:25:04.954 18:35:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:04.954 18:35:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:04.954 18:35:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:04.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:04.954 18:35:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:04.954 18:35:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:04.954 [2024-10-08 18:35:33.399419] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:25:04.954 [2024-10-08 18:35:33.399527] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1249801 ] 00:25:05.213 [2024-10-08 18:35:33.502638] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:05.213 [2024-10-08 18:35:33.703212] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:25:05.473 18:35:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:05.473 18:35:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:25:05.473 18:35:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.NZU 00:25:06.042 18:35:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:06.980 [2024-10-08 18:35:35.154790] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:06.980 TLSTESTn1 00:25:06.980 18:35:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:06.980 Running I/O for 10 seconds... 
00:25:09.298 1490.00 IOPS, 5.82 MiB/s [2024-10-08T16:35:38.772Z] 1484.50 IOPS, 5.80 MiB/s [2024-10-08T16:35:39.708Z] 1479.67 IOPS, 5.78 MiB/s [2024-10-08T16:35:40.646Z] 1724.50 IOPS, 6.74 MiB/s [2024-10-08T16:35:41.580Z] 1722.60 IOPS, 6.73 MiB/s [2024-10-08T16:35:42.519Z] 1784.17 IOPS, 6.97 MiB/s [2024-10-08T16:35:43.457Z] 1800.86 IOPS, 7.03 MiB/s [2024-10-08T16:35:44.833Z] 1794.62 IOPS, 7.01 MiB/s [2024-10-08T16:35:45.771Z] 1836.67 IOPS, 7.17 MiB/s [2024-10-08T16:35:45.771Z] 1799.10 IOPS, 7.03 MiB/s 00:25:17.234 Latency(us) 00:25:17.234 [2024-10-08T16:35:45.771Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:17.234 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:17.234 Verification LBA range: start 0x0 length 0x2000 00:25:17.234 TLSTESTn1 : 10.05 1803.34 7.04 0.00 0.00 70780.26 16311.18 63302.92 00:25:17.234 [2024-10-08T16:35:45.771Z] =================================================================================================================== 00:25:17.234 [2024-10-08T16:35:45.771Z] Total : 1803.34 7.04 0.00 0.00 70780.26 16311.18 63302.92 00:25:17.234 { 00:25:17.234 "results": [ 00:25:17.234 { 00:25:17.234 "job": "TLSTESTn1", 00:25:17.234 "core_mask": "0x4", 00:25:17.234 "workload": "verify", 00:25:17.234 "status": "finished", 00:25:17.234 "verify_range": { 00:25:17.234 "start": 0, 00:25:17.234 "length": 8192 00:25:17.234 }, 00:25:17.234 "queue_depth": 128, 00:25:17.234 "io_size": 4096, 00:25:17.234 "runtime": 10.046902, 00:25:17.234 "iops": 1803.3419655133494, 00:25:17.234 "mibps": 7.044304552786521, 00:25:17.234 "io_failed": 0, 00:25:17.234 "io_timeout": 0, 00:25:17.234 "avg_latency_us": 70780.25720637957, 00:25:17.234 "min_latency_us": 16311.182222222222, 00:25:17.234 "max_latency_us": 63302.921481481484 00:25:17.234 } 00:25:17.234 ], 00:25:17.234 "core_count": 1 00:25:17.234 } 00:25:17.234 18:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:25:17.234 18:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:25:17.234 18:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:25:17.234 18:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:25:17.234 18:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:25:17.235 18:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:17.235 18:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:25:17.235 18:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:25:17.235 18:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:25:17.235 18:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:17.235 nvmf_trace.0 00:25:17.235 18:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:25:17.235 18:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1249801 00:25:17.235 18:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 1249801 ']' 00:25:17.235 18:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@954 -- # kill -0 1249801 00:25:17.235 18:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:25:17.235 18:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:17.235 18:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1249801 00:25:17.235 18:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:25:17.235 18:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:25:17.235 18:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1249801' 00:25:17.235 killing process with pid 1249801 00:25:17.235 18:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 1249801 00:25:17.235 Received shutdown signal, test time was about 10.000000 seconds 00:25:17.235 00:25:17.235 Latency(us) 00:25:17.235 [2024-10-08T16:35:45.772Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:17.235 [2024-10-08T16:35:45.772Z] =================================================================================================================== 00:25:17.235 [2024-10-08T16:35:45.772Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:17.235 18:35:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 1249801 00:25:17.803 18:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:25:17.803 18:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:17.803 18:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:25:17.803 18:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:17.803 18:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:25:17.803 18:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:17.803 18:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:17.803 rmmod nvme_tcp 00:25:17.803 rmmod nvme_fabrics 00:25:17.803 rmmod nvme_keyring 00:25:17.803 18:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:17.803 18:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:25:17.803 18:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:25:17.803 18:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@515 -- # '[' -n 1249522 ']' 00:25:17.803 18:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # killprocess 1249522 00:25:17.803 18:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 1249522 ']' 00:25:17.803 18:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 1249522 00:25:17.803 18:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:25:17.803 18:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:17.803 18:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1249522 00:25:17.803 18:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:17.803 18:35:46 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:17.803 18:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1249522' 00:25:17.803 killing process with pid 1249522 00:25:17.803 18:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 1249522 00:25:17.803 18:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 1249522 00:25:18.371 18:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:18.371 18:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:18.371 18:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:18.371 18:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:25:18.371 18:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-save 00:25:18.371 18:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:18.371 18:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-restore 00:25:18.371 18:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:18.371 18:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:18.371 18:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:18.371 18:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:18.371 18:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:20.279 18:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:20.279 18:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.NZU 00:25:20.279 00:25:20.279 real 0m20.598s 00:25:20.279 user 0m27.561s 00:25:20.279 sys 0m6.840s 00:25:20.279 18:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:20.279 18:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:20.279 ************************************ 00:25:20.279 END TEST nvmf_fips 00:25:20.279 ************************************ 00:25:20.279 18:35:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:25:20.279 18:35:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:20.279 18:35:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:20.279 18:35:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:20.539 ************************************ 00:25:20.539 START TEST nvmf_control_msg_list 00:25:20.539 ************************************ 00:25:20.539 18:35:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:25:20.539 * Looking for test storage... 
00:25:20.539 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:20.539 18:35:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:20.539 18:35:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lcov --version 00:25:20.539 18:35:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:20.539 18:35:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:20.539 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:20.539 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:20.539 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:20.539 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:25:20.539 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:25:20.539 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:25:20.539 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:25:20.539 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:25:20.539 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:25:20.539 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:25:20.539 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:20.539 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:25:20.539 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:25:20.539 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:20.539 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:20.539 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:25:20.539 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:25:20.539 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:20.539 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:25:20.539 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:25:20.539 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:25:20.539 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:25:20.539 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:20.539 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:25:20.539 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:25:20.539 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:20.539 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:20.539 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:25:20.539 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:20.539 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:20.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:20.539 --rc genhtml_branch_coverage=1 00:25:20.539 --rc genhtml_function_coverage=1 00:25:20.539 --rc genhtml_legend=1 00:25:20.539 --rc geninfo_all_blocks=1 00:25:20.539 --rc geninfo_unexecuted_blocks=1 00:25:20.539 00:25:20.539 ' 00:25:20.539 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:20.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:20.539 --rc genhtml_branch_coverage=1 00:25:20.539 --rc genhtml_function_coverage=1 00:25:20.539 --rc genhtml_legend=1 00:25:20.539 --rc geninfo_all_blocks=1 00:25:20.539 --rc geninfo_unexecuted_blocks=1 00:25:20.539 00:25:20.539 ' 00:25:20.539 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:20.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:20.539 --rc genhtml_branch_coverage=1 00:25:20.539 --rc genhtml_function_coverage=1 00:25:20.539 --rc genhtml_legend=1 00:25:20.539 --rc geninfo_all_blocks=1 00:25:20.539 --rc geninfo_unexecuted_blocks=1 00:25:20.539 00:25:20.539 ' 00:25:20.539 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:20.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:20.539 --rc genhtml_branch_coverage=1 00:25:20.539 --rc genhtml_function_coverage=1 00:25:20.539 --rc genhtml_legend=1 00:25:20.539 --rc geninfo_all_blocks=1 00:25:20.539 --rc geninfo_unexecuted_blocks=1 00:25:20.539 00:25:20.539 ' 00:25:20.539 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:20.539 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:25:20.539 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:20.539 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:20.539 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:20.539 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:20.539 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:20.539 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:20.539 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:20.539 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:20.539 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:20.539 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:20.539 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:20.539 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:25:20.539 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:20.539 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:20.539 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:20.539 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:20.539 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:20.539 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:25:20.539 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:20.539 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:20.539 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:20.539 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.539 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.539 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.539 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:25:20.539 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.539 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:25:20.539 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:20.539 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:20.539 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:20.539 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:20.539 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:20.540 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:20.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:20.540 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:20.540 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:20.540 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:20.540 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:25:20.540 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:20.540 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:20.540 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:20.540 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:20.540 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:20.540 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:20.540 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:20.540 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:20.540 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:20.540 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:20.540 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:25:20.540 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:25:23.830 18:35:51 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:25:23.830 Found 0000:84:00.0 (0x8086 - 0x159b) 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:23.830 18:35:51 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:25:23.830 Found 0000:84:00.1 (0x8086 - 0x159b) 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:25:23.830 Found net devices under 0000:84:00.0: cvl_0_0 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:25:23.830 Found net devices under 0000:84:00.1: cvl_0_1 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # is_hw=yes 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:23.830 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:23.831 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:23.831 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:23.831 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:23.831 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:23.831 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:23.831 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:23.831 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:23.831 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:23.831 18:35:51 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:23.831 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:23.831 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:23.831 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:23.831 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.308 ms 00:25:23.831 00:25:23.831 --- 10.0.0.2 ping statistics --- 00:25:23.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:23.831 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:25:23.831 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:23.831 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:23.831 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:25:23.831 00:25:23.831 --- 10.0.0.1 ping statistics --- 00:25:23.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:23.831 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:25:23.831 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:23.831 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@448 -- # return 0 00:25:23.831 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:23.831 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:23.831 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:23.831 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:23.831 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:23.831 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:23.831 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:23.831 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:25:23.831 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:23.831 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:23.831 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:23.831 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # nvmfpid=1253325 00:25:23.831 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:23.831 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # waitforlisten 1253325 00:25:23.831 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # '[' -z 1253325 ']' 00:25:23.831 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:23.831 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:23.831 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:23.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:23.831 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:23.831 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:23.831 [2024-10-08 18:35:52.086360] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:25:23.831 [2024-10-08 18:35:52.086533] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:23.831 [2024-10-08 18:35:52.251870] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:24.091 [2024-10-08 18:35:52.474968] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:24.091 [2024-10-08 18:35:52.475088] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:24.091 [2024-10-08 18:35:52.475124] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:24.091 [2024-10-08 18:35:52.475155] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:24.091 [2024-10-08 18:35:52.475182] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
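(Annotation, not part of the captured log: the nvmfappstart/waitforlisten step above boils down to launching the SPDK target inside the test network namespace and polling its RPC socket until it answers. A minimal shell sketch of that bring-up follows; the binary path, flags, and socket path are taken from the trace, while the explicit rpc.py polling loop is only an illustrative stand-in for the autotest waitforlisten helper.)

NS=cvl_0_0_ns_spdk
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# start the target with shm id 0 and all tracepoint groups enabled, as in the log above
ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &
nvmfpid=$!
# poll the UNIX-domain RPC socket (/var/tmp/spdk.sock) until the app answers before sending config RPCs
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
  sleep 0.5
done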
00:25:24.091 [2024-10-08 18:35:52.476532] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:25:24.350 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:24.350 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # return 0 00:25:24.350 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:24.350 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:24.350 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:24.350 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:24.350 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:25:24.350 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:24.350 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:25:24.350 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.350 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:24.350 [2024-10-08 18:35:52.740623] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:24.350 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.350 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:25:24.350 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.351 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:24.351 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.351 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:25:24.351 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.351 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:24.351 Malloc0 00:25:24.351 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.351 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:25:24.351 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.351 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:24.351 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.351 18:35:52 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:24.351 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.351 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:24.351 [2024-10-08 18:35:52.806875] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:24.351 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.351 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=1253470 00:25:24.351 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:24.351 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=1253471 00:25:24.351 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:24.351 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=1253472 00:25:24.351 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 1253470 00:25:24.351 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:24.608 [2024-10-08 18:35:52.931902] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:24.608 [2024-10-08 18:35:52.932264] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:24.608 [2024-10-08 18:35:52.932593] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:25.982 Initializing NVMe Controllers 00:25:25.982 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:25.982 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:25:25.982 Initialization complete. Launching workers. 
00:25:25.982 ======================================================== 00:25:25.982 Latency(us) 00:25:25.982 Device Information : IOPS MiB/s Average min max 00:25:25.982 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 4197.99 16.40 237.57 167.65 700.89 00:25:25.982 ======================================================== 00:25:25.982 Total : 4197.99 16.40 237.57 167.65 700.89 00:25:25.982 00:25:25.982 Initializing NVMe Controllers 00:25:25.982 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:25.982 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:25:25.982 Initialization complete. Launching workers. 00:25:25.982 ======================================================== 00:25:25.982 Latency(us) 00:25:25.982 Device Information : IOPS MiB/s Average min max 00:25:25.982 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 41191.28 40749.65 42217.58 00:25:25.982 ======================================================== 00:25:25.982 Total : 25.00 0.10 41191.28 40749.65 42217.58 00:25:25.982 00:25:25.982 Initializing NVMe Controllers 00:25:25.982 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:25.982 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:25:25.982 Initialization complete. Launching workers. 00:25:25.982 ======================================================== 00:25:25.982 Latency(us) 00:25:25.982 Device Information : IOPS MiB/s Average min max 00:25:25.982 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 4176.00 16.31 238.92 163.69 777.61 00:25:25.982 ======================================================== 00:25:25.982 Total : 4176.00 16.31 238.92 163.69 777.61 00:25:25.982 00:25:25.982 18:35:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 1253471 00:25:25.982 18:35:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 1253472 00:25:25.982 18:35:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:25.982 18:35:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:25:25.982 18:35:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:25.982 18:35:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:25:25.982 18:35:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:25.982 18:35:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:25:25.982 18:35:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:25.982 18:35:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:25.982 rmmod nvme_tcp 00:25:25.982 rmmod nvme_fabrics 00:25:25.982 rmmod nvme_keyring 00:25:25.982 18:35:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:25.982 18:35:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:25:25.982 18:35:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:25:25.982 18:35:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@515 -- # 
'[' -n 1253325 ']' 00:25:25.982 18:35:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # killprocess 1253325 00:25:25.982 18:35:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # '[' -z 1253325 ']' 00:25:25.982 18:35:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # kill -0 1253325 00:25:25.982 18:35:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # uname 00:25:25.982 18:35:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:25.982 18:35:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1253325 00:25:25.982 18:35:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:25.982 18:35:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:25.982 18:35:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1253325' 00:25:25.982 killing process with pid 1253325 00:25:25.982 18:35:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # kill 1253325 00:25:25.982 18:35:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@974 -- # wait 1253325 00:25:26.243 18:35:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:26.243 18:35:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:26.243 18:35:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:26.243 18:35:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:25:26.243 18:35:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-save 00:25:26.243 18:35:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:26.243 18:35:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-restore 00:25:26.243 18:35:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:26.243 18:35:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:26.243 18:35:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:26.243 18:35:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:26.243 18:35:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:28.786 18:35:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:28.786 00:25:28.786 real 0m8.002s 00:25:28.786 user 0m6.969s 00:25:28.786 sys 0m3.829s 00:25:28.786 18:35:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:28.786 18:35:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:28.786 ************************************ 00:25:28.786 END TEST nvmf_control_msg_list 00:25:28.786 ************************************ 
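(Annotation, not part of the captured log: condensed, the nvmf_control_msg_list run above configures the target over RPC and then drives it with three single-queue-depth perf clients. A rough shell equivalent is sketched below, with direct rpc.py calls standing in for the test's rpc_cmd helper; all option values are copied from the trace.)

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
# TCP transport restricted to one control message per poll group and 768-byte in-capsule data
$RPC nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
$RPC nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
$RPC bdev_malloc_create -b Malloc0 32 512
$RPC nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# three 1-second 4 KiB random-read clients (cores 1-3, queue depth 1) against the listener
for mask in 0x2 0x4 0x8; do
  $PERF -c "$mask" -q 1 -o 4096 -w randread -t 1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
done
wait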
00:25:28.786 18:35:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:28.786 18:35:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:28.786 18:35:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:28.786 18:35:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:28.786 ************************************ 00:25:28.786 START TEST nvmf_wait_for_buf 00:25:28.786 ************************************ 00:25:28.786 18:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:28.786 * Looking for test storage... 00:25:28.786 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:28.786 18:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:28.786 18:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lcov --version 00:25:28.786 18:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:28.786 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:28.786 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:28.786 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:28.786 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:28.786 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:25:28.786 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:25:28.786 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:25:28.786 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:25:28.786 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:25:28.786 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:25:28.786 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:25:28.786 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:28.786 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:25:28.786 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:25:28.786 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:28.786 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:28.786 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:25:28.786 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:25:28.786 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:28.786 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:25:28.786 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:28.786 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:25:28.786 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:25:28.786 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:28.786 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:25:28.786 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:28.786 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:28.786 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:28.786 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:25:28.786 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:28.786 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:28.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:28.786 --rc genhtml_branch_coverage=1 00:25:28.786 --rc genhtml_function_coverage=1 00:25:28.786 --rc genhtml_legend=1 00:25:28.786 --rc geninfo_all_blocks=1 00:25:28.786 --rc geninfo_unexecuted_blocks=1 00:25:28.786 00:25:28.786 ' 00:25:28.786 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:28.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:28.786 --rc genhtml_branch_coverage=1 00:25:28.786 --rc genhtml_function_coverage=1 00:25:28.786 --rc genhtml_legend=1 00:25:28.786 --rc geninfo_all_blocks=1 00:25:28.786 --rc geninfo_unexecuted_blocks=1 00:25:28.786 00:25:28.786 ' 00:25:28.786 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:28.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:28.786 --rc genhtml_branch_coverage=1 00:25:28.786 --rc genhtml_function_coverage=1 00:25:28.786 --rc genhtml_legend=1 00:25:28.786 --rc geninfo_all_blocks=1 00:25:28.786 --rc geninfo_unexecuted_blocks=1 00:25:28.786 00:25:28.786 ' 00:25:28.786 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:28.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:28.786 --rc genhtml_branch_coverage=1 00:25:28.786 --rc genhtml_function_coverage=1 00:25:28.786 --rc genhtml_legend=1 00:25:28.786 --rc geninfo_all_blocks=1 00:25:28.786 --rc geninfo_unexecuted_blocks=1 00:25:28.786 00:25:28.786 ' 00:25:28.786 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:28.786 18:35:57 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:25:28.786 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:28.786 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:28.786 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:28.786 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:28.786 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:28.786 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:28.786 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:28.786 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:28.786 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:28.786 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:28.786 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:28.786 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:25:28.786 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:28.786 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:28.786 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:28.786 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:28.786 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:28.786 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:25:28.786 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:28.786 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:28.786 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:28.786 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.786 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.786 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.786 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:25:28.787 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.787 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:25:28.787 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:28.787 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:28.787 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:28.787 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:28.787 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:28.787 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:28.787 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:28.787 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:28.787 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:28.787 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:28.787 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:25:28.787 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@467 -- # 
'[' -z tcp ']' 00:25:28.787 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:28.787 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:28.787 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:28.787 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:28.787 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:28.787 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:28.787 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:28.787 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:28.787 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:28.787 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:25:28.787 18:35:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:32.075 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:32.075 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:25:32.075 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:32.075 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:32.075 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:32.075 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:32.075 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:32.075 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:25:32.075 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:32.075 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:25:32.075 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:25:32.075 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:25:32.075 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:25:32.075 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:25:32.075 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:25:32.075 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:32.075 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:32.075 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:32.075 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:32.075 
18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:32.075 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:32.075 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:32.075 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:32.075 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:32.075 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:32.075 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:32.075 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:32.075 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:32.075 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:32.075 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:32.075 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:25:32.076 Found 0000:84:00.0 (0x8086 - 0x159b) 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:25:32.076 Found 0000:84:00.1 (0x8086 - 0x159b) 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:25:32.076 Found net devices under 0000:84:00.0: cvl_0_0 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:25:32.076 Found net devices under 0000:84:00.1: cvl_0_1 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # is_hw=yes 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:32.076 18:36:00 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:32.076 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:32.076 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:25:32.076 00:25:32.076 --- 10.0.0.2 ping statistics --- 00:25:32.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:32.076 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:32.076 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:32.076 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:25:32.076 00:25:32.076 --- 10.0.0.1 ping statistics --- 00:25:32.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:32.076 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@448 -- # return 0 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # nvmfpid=1255725 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # waitforlisten 1255725 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # '[' -z 1255725 ']' 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:32.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:32.076 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:32.076 [2024-10-08 18:36:00.435111] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
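For reference, the bring-up that nvmftestinit traced above boils down to the following shell sketch. The interface names (cvl_0_0, cvl_0_1), the 10.0.0.0/24 addresses, and port 4420 are taken from this run; the nvmf_tgt path and the iptables comment string are abbreviated, so treat this as an illustrative outline of what the harness did here rather than the exact script.

    # one port of the E810 pair goes into a private namespace (target side),
    # the other stays in the root namespace (initiator side)
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP listener port and verify reachability in both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # start the target inside the namespace, paused until RPC configuration (--wait-for-rpc)
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc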
00:25:32.076 [2024-10-08 18:36:00.435204] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:32.076 [2024-10-08 18:36:00.512383] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:32.334 [2024-10-08 18:36:00.637912] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:32.334 [2024-10-08 18:36:00.637969] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:32.334 [2024-10-08 18:36:00.637986] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:32.334 [2024-10-08 18:36:00.638000] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:32.334 [2024-10-08 18:36:00.638013] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:32.334 [2024-10-08 18:36:00.638762] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:25:32.335 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:32.335 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # return 0 00:25:32.335 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:32.335 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:32.335 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:32.335 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:32.335 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:25:32.335 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:32.335 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:25:32.335 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.335 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:32.335 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.335 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:25:32.335 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.335 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:32.335 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.335 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:25:32.335 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.335 18:36:00 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:32.335 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.335 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:25:32.335 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.335 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:32.335 Malloc0 00:25:32.335 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.592 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:25:32.592 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.592 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:32.592 [2024-10-08 18:36:00.875054] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:32.592 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.592 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:25:32.592 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.592 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:32.592 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.592 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:25:32.592 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.593 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:32.593 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.593 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:32.593 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.593 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:32.593 [2024-10-08 18:36:00.899266] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:32.593 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.593 18:36:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:32.593 [2024-10-08 18:36:01.007783] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:33.971 Initializing NVMe Controllers 00:25:33.971 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:33.971 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:25:33.971 Initialization complete. Launching workers. 00:25:33.971 ======================================================== 00:25:33.971 Latency(us) 00:25:33.971 Device Information : IOPS MiB/s Average min max 00:25:33.971 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 32.95 4.12 127770.62 31903.18 191532.57 00:25:33.971 ======================================================== 00:25:33.971 Total : 32.95 4.12 127770.62 31903.18 191532.57 00:25:33.971 00:25:33.971 18:36:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:25:33.971 18:36:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:25:33.971 18:36:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.971 18:36:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:33.971 18:36:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.229 18:36:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=502 00:25:34.229 18:36:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 502 -eq 0 ]] 00:25:34.229 18:36:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:34.229 18:36:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:25:34.229 18:36:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:34.229 18:36:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:25:34.229 18:36:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:34.229 18:36:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:25:34.229 18:36:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:34.229 18:36:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:34.229 rmmod nvme_tcp 00:25:34.229 rmmod nvme_fabrics 00:25:34.229 rmmod nvme_keyring 00:25:34.229 18:36:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:34.229 18:36:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:25:34.229 18:36:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:25:34.229 18:36:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@515 -- # '[' -n 1255725 ']' 00:25:34.229 18:36:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # killprocess 1255725 00:25:34.229 18:36:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # '[' -z 1255725 ']' 00:25:34.229 18:36:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # kill -0 1255725 00:25:34.229 18:36:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@955 -- # uname 00:25:34.229 18:36:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:34.229 18:36:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1255725 00:25:34.229 18:36:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:34.229 18:36:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:34.229 18:36:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1255725' 00:25:34.229 killing process with pid 1255725 00:25:34.229 18:36:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # kill 1255725 00:25:34.229 18:36:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@974 -- # wait 1255725 00:25:34.797 18:36:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:34.797 18:36:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:34.797 18:36:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:34.797 18:36:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:25:34.797 18:36:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-save 00:25:34.797 18:36:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:34.797 18:36:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-restore 00:25:34.797 18:36:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:34.797 18:36:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:34.797 18:36:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:34.797 18:36:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:34.797 18:36:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:36.794 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:36.794 00:25:36.794 real 0m8.291s 00:25:36.794 user 0m3.978s 00:25:36.794 sys 0m2.985s 00:25:36.794 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:36.794 18:36:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:36.794 ************************************ 00:25:36.794 END TEST nvmf_wait_for_buf 00:25:36.794 ************************************ 00:25:36.794 18:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:25:36.794 18:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:25:36.794 18:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:25:36.794 18:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:25:36.794 18:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:25:36.794 18:36:05 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:40.082 18:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:40.082 18:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:25:40.082 18:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:40.082 18:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:40.082 18:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:40.082 18:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:40.082 18:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:40.082 18:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:25:40.082 18:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:40.082 18:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:25:40.083 Found 0000:84:00.0 (0x8086 - 0x159b) 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:25:40.083 Found 0000:84:00.1 (0x8086 - 0x159b) 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:25:40.083 Found net devices under 0000:84:00.0: cvl_0_0 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:25:40.083 Found net devices under 0000:84:00.1: cvl_0_1 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:40.083 ************************************ 00:25:40.083 START TEST nvmf_perf_adq 00:25:40.083 ************************************ 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:25:40.083 * Looking for test storage... 00:25:40.083 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # lcov --version 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:40.083 18:36:08 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:40.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:40.083 --rc genhtml_branch_coverage=1 00:25:40.083 --rc genhtml_function_coverage=1 00:25:40.083 --rc genhtml_legend=1 00:25:40.083 --rc geninfo_all_blocks=1 00:25:40.083 --rc geninfo_unexecuted_blocks=1 00:25:40.083 00:25:40.083 ' 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:40.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:40.083 --rc genhtml_branch_coverage=1 00:25:40.083 --rc genhtml_function_coverage=1 00:25:40.083 --rc genhtml_legend=1 00:25:40.083 --rc geninfo_all_blocks=1 00:25:40.083 --rc geninfo_unexecuted_blocks=1 00:25:40.083 00:25:40.083 ' 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:40.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:40.083 --rc genhtml_branch_coverage=1 00:25:40.083 --rc genhtml_function_coverage=1 00:25:40.083 --rc genhtml_legend=1 00:25:40.083 --rc geninfo_all_blocks=1 00:25:40.083 --rc geninfo_unexecuted_blocks=1 00:25:40.083 00:25:40.083 ' 00:25:40.083 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:40.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:40.084 --rc genhtml_branch_coverage=1 00:25:40.084 --rc genhtml_function_coverage=1 00:25:40.084 --rc genhtml_legend=1 00:25:40.084 --rc geninfo_all_blocks=1 00:25:40.084 --rc geninfo_unexecuted_blocks=1 00:25:40.084 00:25:40.084 ' 00:25:40.084 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:25:40.084 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:25:40.084 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:40.084 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:40.084 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:40.084 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:40.084 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:40.084 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:40.084 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:40.084 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:40.084 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:40.084 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:40.084 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:40.084 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:25:40.084 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:40.084 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:40.084 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:40.084 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:40.084 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:40.084 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:25:40.084 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:40.084 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:40.084 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:40.084 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.084 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.084 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.084 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:25:40.084 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.084 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:25:40.084 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:40.084 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:40.084 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:40.084 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:40.084 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:40.084 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:40.084 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:40.084 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:40.084 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:40.084 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:40.084 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:25:40.084 18:36:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:25:40.084 18:36:08 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:43.375 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:43.375 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:25:43.375 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:43.375 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:43.375 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:43.375 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:43.375 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:43.375 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:25:43.375 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:43.375 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:25:43.375 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:25:43.375 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:25:43.375 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:25:43.375 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:25:43.375 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:25:43.375 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:43.375 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:43.375 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:43.375 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:43.375 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:43.375 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:43.375 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:43.375 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:43.375 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:43.375 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:43.375 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:43.375 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:43.375 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:43.375 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:43.375 18:36:11 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:43.375 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:43.375 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:43.375 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:43.375 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:43.375 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:25:43.375 Found 0000:84:00.0 (0x8086 - 0x159b) 00:25:43.375 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:43.375 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:43.375 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:43.375 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:43.375 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:43.375 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:43.375 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:25:43.375 Found 0000:84:00.1 (0x8086 - 0x159b) 00:25:43.375 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:43.375 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:43.375 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:43.375 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:43.375 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:43.375 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:43.375 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:43.375 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:43.375 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:43.375 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:43.375 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:43.375 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:43.375 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:43.375 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:43.375 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:43.375 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:25:43.375 Found net devices under 0000:84:00.0: cvl_0_0 00:25:43.375 18:36:11 
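For reference, the device-discovery loop traced above reduces to roughly the following sketch; the PCI addresses and the cvl_0_0/cvl_0_1 names are taken from this log, while the loop itself is a simplification of what nvmf/common.sh does rather than a verbatim excerpt.
# Sketch: map each supported E810 function (0x8086:0x159b in this log) to the
# kernel interface registered under its PCI address in sysfs.
for pci in 0000:84:00.0 0000:84:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done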
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:43.375 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:43.375 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:43.375 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:43.375 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:43.375 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:43.375 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:43.375 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:43.375 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:25:43.375 Found net devices under 0000:84:00.1: cvl_0_1 00:25:43.375 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:43.375 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:43.376 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:43.376 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:25:43.376 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:43.376 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:25:43.376 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:25:43.376 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:25:43.376 18:36:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:25:45.913 18:36:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:25:51.195 Found 0000:84:00.0 (0x8086 - 0x159b) 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:25:51.195 Found 0000:84:00.1 (0x8086 - 0x159b) 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 
'Found net devices under 0000:84:00.0: cvl_0_0' 00:25:51.195 Found net devices under 0000:84:00.0: cvl_0_0 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:25:51.195 Found net devices under 0000:84:00.1: cvl_0_1 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # is_hw=yes 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:51.195 18:36:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:51.195 18:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:51.195 18:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:51.195 18:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:51.195 18:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:51.195 18:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:51.195 18:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:51.195 18:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:51.195 18:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:51.195 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:51.195 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.318 ms 00:25:51.195 00:25:51.195 --- 10.0.0.2 ping statistics --- 00:25:51.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:51.195 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:25:51.196 18:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:51.196 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:51.196 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:25:51.196 00:25:51.196 --- 10.0.0.1 ping statistics --- 00:25:51.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:51.196 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:25:51.196 18:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:51.196 18:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # return 0 00:25:51.196 18:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:51.196 18:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:51.196 18:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:51.196 18:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:51.196 18:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:51.196 18:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:51.196 18:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:51.196 18:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:25:51.196 18:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:51.196 18:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:51.196 18:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:51.196 18:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # nvmfpid=1261306 00:25:51.196 18:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # waitforlisten 1261306 00:25:51.196 18:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:51.196 18:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 1261306 ']' 00:25:51.196 18:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:51.196 18:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:51.196 18:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:51.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:51.196 18:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:51.196 18:36:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:51.196 [2024-10-08 18:36:19.233236] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
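Condensed from the nvmf_tcp_init trace above: the test moves the target-side port (cvl_0_0) into its own network namespace and keeps the initiator-side port (cvl_0_1) in the root namespace. The sketch below keeps only the commands visible in the log, with the iptables comment added by the ipts wrapper and all error handling omitted.
# Sketch of the namespaced test topology (interface names and addresses as logged).
ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the ns
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns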
00:25:51.196 [2024-10-08 18:36:19.233343] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:51.196 [2024-10-08 18:36:19.349480] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:51.196 [2024-10-08 18:36:19.526023] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:51.196 [2024-10-08 18:36:19.526090] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:51.196 [2024-10-08 18:36:19.526107] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:51.196 [2024-10-08 18:36:19.526121] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:51.196 [2024-10-08 18:36:19.526133] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:51.196 [2024-10-08 18:36:19.528073] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:25:51.196 [2024-10-08 18:36:19.528126] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:25:51.196 [2024-10-08 18:36:19.530673] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:25:51.196 [2024-10-08 18:36:19.530679] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:25:52.128 18:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:52.128 18:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:25:52.128 18:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:52.128 18:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:52.128 18:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:52.128 18:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:52.128 18:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:25:52.128 18:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:25:52.128 18:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:25:52.128 18:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.128 18:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:52.128 18:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.128 18:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:25:52.128 18:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:25:52.128 18:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.128 18:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:52.128 18:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.128 
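The adq_configure_nvmf_target step just traced applies two socket-layer RPCs before the framework is started. A rough equivalent using scripts/rpc.py directly is sketched below; the rpc_cmd wrapper in the log issues the same RPCs, and the rpc.py path (relative to the SPDK tree) is an assumption.
# Sketch of the socket tuning traced above (values as logged for this baseline pass).
impl=$(scripts/rpc.py sock_get_default_impl | jq -r .impl_name)   # "posix" in this run
scripts/rpc.py sock_impl_set_options -i "$impl" \
    --enable-placement-id 0 --enable-zerocopy-send-server
# The ADQ pass later in the log repeats this with --enable-placement-id 1.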
18:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:25:52.128 18:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.128 18:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:52.128 18:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.128 18:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:25:52.128 18:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.128 18:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:52.128 [2024-10-08 18:36:20.574832] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:52.128 18:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.128 18:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:52.128 18:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.128 18:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:52.128 Malloc1 00:25:52.128 18:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.128 18:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:52.128 18:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.128 18:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:52.128 18:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.128 18:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:52.128 18:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.128 18:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:52.128 18:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.128 18:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:52.128 18:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.128 18:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:52.128 [2024-10-08 18:36:20.628362] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:52.128 18:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.128 18:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=1261475 00:25:52.128 18:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:25:52.128 18:36:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:54.656 18:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:25:54.656 18:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.656 18:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:54.656 18:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.656 18:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:25:54.656 "tick_rate": 2700000000, 00:25:54.656 "poll_groups": [ 00:25:54.656 { 00:25:54.656 "name": "nvmf_tgt_poll_group_000", 00:25:54.656 "admin_qpairs": 1, 00:25:54.656 "io_qpairs": 1, 00:25:54.656 "current_admin_qpairs": 1, 00:25:54.656 "current_io_qpairs": 1, 00:25:54.656 "pending_bdev_io": 0, 00:25:54.656 "completed_nvme_io": 19650, 00:25:54.656 "transports": [ 00:25:54.656 { 00:25:54.656 "trtype": "TCP" 00:25:54.656 } 00:25:54.656 ] 00:25:54.656 }, 00:25:54.656 { 00:25:54.656 "name": "nvmf_tgt_poll_group_001", 00:25:54.656 "admin_qpairs": 0, 00:25:54.656 "io_qpairs": 1, 00:25:54.656 "current_admin_qpairs": 0, 00:25:54.656 "current_io_qpairs": 1, 00:25:54.656 "pending_bdev_io": 0, 00:25:54.656 "completed_nvme_io": 20084, 00:25:54.656 "transports": [ 00:25:54.656 { 00:25:54.656 "trtype": "TCP" 00:25:54.656 } 00:25:54.656 ] 00:25:54.656 }, 00:25:54.656 { 00:25:54.656 "name": "nvmf_tgt_poll_group_002", 00:25:54.656 "admin_qpairs": 0, 00:25:54.656 "io_qpairs": 1, 00:25:54.656 "current_admin_qpairs": 0, 00:25:54.656 "current_io_qpairs": 1, 00:25:54.656 "pending_bdev_io": 0, 00:25:54.656 "completed_nvme_io": 20428, 00:25:54.656 "transports": [ 00:25:54.656 { 00:25:54.656 "trtype": "TCP" 00:25:54.656 } 00:25:54.656 ] 00:25:54.656 }, 00:25:54.656 { 00:25:54.656 "name": "nvmf_tgt_poll_group_003", 00:25:54.656 "admin_qpairs": 0, 00:25:54.656 "io_qpairs": 1, 00:25:54.656 "current_admin_qpairs": 0, 00:25:54.656 "current_io_qpairs": 1, 00:25:54.656 "pending_bdev_io": 0, 00:25:54.656 "completed_nvme_io": 19625, 00:25:54.656 "transports": [ 00:25:54.656 { 00:25:54.656 "trtype": "TCP" 00:25:54.656 } 00:25:54.656 ] 00:25:54.656 } 00:25:54.656 ] 00:25:54.656 }' 00:25:54.656 18:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:25:54.656 18:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:25:54.656 18:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:25:54.656 18:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:25:54.656 18:36:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 1261475 00:26:02.766 Initializing NVMe Controllers 00:26:02.766 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:02.766 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:26:02.766 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:26:02.766 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:26:02.766 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 7 00:26:02.766 Initialization complete. Launching workers. 00:26:02.766 ======================================================== 00:26:02.766 Latency(us) 00:26:02.766 Device Information : IOPS MiB/s Average min max 00:26:02.766 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10034.20 39.20 6377.57 2501.68 10838.50 00:26:02.766 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10170.70 39.73 6293.52 2169.16 10473.72 00:26:02.766 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10395.50 40.61 6157.13 2461.86 10352.71 00:26:02.766 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10054.70 39.28 6365.10 2706.78 10520.90 00:26:02.766 ======================================================== 00:26:02.766 Total : 40655.10 158.81 6297.09 2169.16 10838.50 00:26:02.766 00:26:02.766 18:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:26:02.766 18:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:02.766 18:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:26:02.766 18:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:02.766 18:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:26:02.766 18:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:02.766 18:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:02.766 rmmod nvme_tcp 00:26:02.766 rmmod nvme_fabrics 00:26:02.766 rmmod nvme_keyring 00:26:02.766 18:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:02.766 18:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:26:02.766 18:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:26:02.766 18:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@515 -- # '[' -n 1261306 ']' 00:26:02.766 18:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # killprocess 1261306 00:26:02.766 18:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 1261306 ']' 00:26:02.766 18:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 1261306 00:26:02.766 18:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:26:02.766 18:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:02.766 18:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1261306 00:26:02.766 18:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:02.766 18:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:02.766 18:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1261306' 00:26:02.766 killing process with pid 1261306 00:26:02.766 18:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 1261306 00:26:02.766 18:36:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 1261306 00:26:02.766 18:36:31 
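The run above pairs spdk_nvme_perf with a poll-group check driven by nvmf_get_stats: with core mask 0xF0 the initiator uses four cores, and the test expects each of the target's four poll groups to report exactly one active I/O qpair. A condensed sketch follows; the rpc.py path, the relative spdk_nvme_perf path, and the failure handling are stand-ins, everything else is as logged.
# Sketch of the perf run plus the 1-qpair-per-poll-group check traced above.
build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' &
perfpid=$!
sleep 2
count=$(scripts/rpc.py nvmf_get_stats \
        | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' \
        | wc -l)                                      # poll groups with exactly 1 I/O qpair
[[ $count -ne 4 ]] && echo "qpairs not spread one per poll group" && exit 1
wait $perfpid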
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:02.766 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:26:02.766 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:26:02.766 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:26:02.766 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-save 00:26:02.766 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:26:02.766 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-restore 00:26:02.766 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:02.766 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:02.766 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:02.766 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:02.766 18:36:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:05.304 18:36:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:05.304 18:36:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:26:05.304 18:36:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:26:05.304 18:36:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:26:05.873 18:36:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:26:07.780 18:36:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:26:13.061 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:26:13.061 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:26:13.061 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:13.061 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:13.061 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:13.061 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:13.061 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:13.061 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:13.061 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:13.061 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:13.061 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:26:13.061 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:26:13.061 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:13.061 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
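The adq_reload_driver step repeated at this point amounts to a clean reload of the ice driver plus the mqprio scheduler module, presumably so no channel configuration from the previous pass lingers; as logged:
# Sketch of the driver reload traced above.
modprobe -a sch_mqprio
rmmod ice
modprobe ice
sleep 5     # give the E810 ports time to re-register before nvmftestinit runs again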
nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:13.061 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:26:13.061 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:13.061 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:13.061 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:13.061 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:13.061 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:13.061 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:26:13.061 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:13.061 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:26:13.061 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:26:13.061 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:26:13.061 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:26:13.061 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:26:13.061 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- 
# [[ e810 == e810 ]] 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:26:13.062 Found 0000:84:00.0 (0x8086 - 0x159b) 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:26:13.062 Found 0000:84:00.1 (0x8086 - 0x159b) 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:26:13.062 Found net devices under 0000:84:00.0: cvl_0_0 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in 
"${pci_devs[@]}" 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:26:13.062 Found net devices under 0000:84:00.1: cvl_0_1 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # is_hw=yes 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:13.062 18:36:41 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:13.062 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:13.062 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.288 ms 00:26:13.062 00:26:13.062 --- 10.0.0.2 ping statistics --- 00:26:13.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:13.062 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:13.062 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:13.062 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:26:13.062 00:26:13.062 --- 10.0.0.1 ping statistics --- 00:26:13.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:13.062 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # return 0 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:26:13.062 net.core.busy_poll = 1 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:26:13.062 net.core.busy_read = 1 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:26:13.062 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:26:13.063 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:13.063 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:13.063 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:13.063 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:13.063 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # nvmfpid=1264081 00:26:13.063 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:13.063 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # waitforlisten 1264081 00:26:13.063 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 1264081 ']' 00:26:13.063 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:13.063 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:13.063 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:13.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:13.063 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:13.063 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:13.321 [2024-10-08 18:36:41.649639] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:26:13.321 [2024-10-08 18:36:41.649741] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:13.321 [2024-10-08 18:36:41.732704] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:13.579 [2024-10-08 18:36:41.878202] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
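The adq_configure_driver sequence just traced is the core of the ADQ setup: hardware TC offload, busy polling, an mqprio qdisc with a dedicated traffic class, and a flower filter that steers NVMe/TCP traffic into that class in hardware. A condensed sketch follows; in the log every command runs inside the cvl_0_0_ns_spdk namespace via ip netns exec, and set_xps_rxqs is the helper script from the SPDK tree.
# Sketch of the ADQ driver configuration traced above (device and addresses as logged).
ethtool --offload cvl_0_0 hw-tc-offload on
ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1
# Two traffic classes: TC0 -> queues 0-1 (default), TC1 -> queues 2-3 (ADQ set)
tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
tc qdisc add dev cvl_0_0 ingress
# Steer NVMe/TCP traffic (TCP port 4420 to the target IP) into hw_tc 1 in hardware
tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
scripts/perf/nvmf/set_xps_rxqs cvl_0_0   # align XPS transmit steering with the RX queues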
00:26:13.579 [2024-10-08 18:36:41.878278] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:13.579 [2024-10-08 18:36:41.878300] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:13.579 [2024-10-08 18:36:41.878316] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:13.579 [2024-10-08 18:36:41.878330] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:13.579 [2024-10-08 18:36:41.880596] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:26:13.579 [2024-10-08 18:36:41.880668] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:26:13.579 [2024-10-08 18:36:41.880728] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:26:13.579 [2024-10-08 18:36:41.880732] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:26:13.579 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:13.579 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:26:13.579 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:13.579 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:13.579 18:36:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:13.579 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:13.579 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:26:13.579 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:26:13.579 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:26:13.579 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.579 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:13.579 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.579 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:26:13.579 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:26:13.579 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.579 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:13.579 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.579 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:26:13.579 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.579 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:13.837 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.837 18:36:42 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:26:13.838 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.838 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:13.838 [2024-10-08 18:36:42.179362] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:13.838 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.838 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:13.838 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.838 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:13.838 Malloc1 00:26:13.838 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.838 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:13.838 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.838 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:13.838 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.838 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:13.838 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.838 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:13.838 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.838 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:13.838 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.838 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:13.838 [2024-10-08 18:36:42.232492] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:13.838 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.838 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=1264120 00:26:13.838 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:26:13.838 18:36:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:15.737 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:26:15.737 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.737 18:36:44 
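Editor's note: the RPC traffic above (rpc_cmd is the harness wrapper around scripts/rpc.py) configures the target that was started with --wait-for-rpc and then launches the initiator. A condensed sketch of the same sequence, with paths shortened to the repo root:
  rpc() { ./scripts/rpc.py "$@"; }
  # socket options must be set before framework init, hence --wait-for-rpc
  rpc sock_impl_set_options -i posix --enable-placement-id 1 --enable-zerocopy-send-server
  rpc framework_start_init
  # TCP transport with socket priority 1, so accepted connections map onto the ADQ traffic class
  rpc nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
  rpc bdev_malloc_create 64 512 -b Malloc1
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # initiator: 4 cores (-c 0xF0), QD 64, 4 KiB random reads for 10 s
  ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'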
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:15.737 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.737 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:26:15.737 "tick_rate": 2700000000, 00:26:15.737 "poll_groups": [ 00:26:15.737 { 00:26:15.737 "name": "nvmf_tgt_poll_group_000", 00:26:15.737 "admin_qpairs": 1, 00:26:15.737 "io_qpairs": 3, 00:26:15.737 "current_admin_qpairs": 1, 00:26:15.737 "current_io_qpairs": 3, 00:26:15.737 "pending_bdev_io": 0, 00:26:15.737 "completed_nvme_io": 24963, 00:26:15.737 "transports": [ 00:26:15.737 { 00:26:15.737 "trtype": "TCP" 00:26:15.737 } 00:26:15.737 ] 00:26:15.737 }, 00:26:15.737 { 00:26:15.737 "name": "nvmf_tgt_poll_group_001", 00:26:15.737 "admin_qpairs": 0, 00:26:15.737 "io_qpairs": 1, 00:26:15.737 "current_admin_qpairs": 0, 00:26:15.737 "current_io_qpairs": 1, 00:26:15.737 "pending_bdev_io": 0, 00:26:15.737 "completed_nvme_io": 23273, 00:26:15.737 "transports": [ 00:26:15.737 { 00:26:15.737 "trtype": "TCP" 00:26:15.737 } 00:26:15.737 ] 00:26:15.737 }, 00:26:15.737 { 00:26:15.737 "name": "nvmf_tgt_poll_group_002", 00:26:15.737 "admin_qpairs": 0, 00:26:15.737 "io_qpairs": 0, 00:26:15.737 "current_admin_qpairs": 0, 00:26:15.737 "current_io_qpairs": 0, 00:26:15.737 "pending_bdev_io": 0, 00:26:15.737 "completed_nvme_io": 0, 00:26:15.737 "transports": [ 00:26:15.737 { 00:26:15.737 "trtype": "TCP" 00:26:15.737 } 00:26:15.737 ] 00:26:15.737 }, 00:26:15.737 { 00:26:15.737 "name": "nvmf_tgt_poll_group_003", 00:26:15.737 "admin_qpairs": 0, 00:26:15.737 "io_qpairs": 0, 00:26:15.737 "current_admin_qpairs": 0, 00:26:15.737 "current_io_qpairs": 0, 00:26:15.737 "pending_bdev_io": 0, 00:26:15.737 "completed_nvme_io": 0, 00:26:15.737 "transports": [ 00:26:15.737 { 00:26:15.737 "trtype": "TCP" 00:26:15.737 } 00:26:15.737 ] 00:26:15.737 } 00:26:15.737 ] 00:26:15.737 }' 00:26:15.737 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:26:15.737 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:26:15.995 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:26:15.995 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:26:15.995 18:36:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 1264120 00:26:24.099 Initializing NVMe Controllers 00:26:24.099 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:24.099 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:26:24.099 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:26:24.099 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:26:24.099 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:26:24.099 Initialization complete. Launching workers. 
00:26:24.099 ======================================================== 00:26:24.099 Latency(us) 00:26:24.099 Device Information : IOPS MiB/s Average min max 00:26:24.099 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4588.60 17.92 13987.10 2256.46 61778.14 00:26:24.099 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4605.50 17.99 13895.41 2273.70 62109.58 00:26:24.099 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13092.00 51.14 4888.01 1900.23 46940.92 00:26:24.099 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4176.20 16.31 15386.40 2272.62 64136.74 00:26:24.099 ======================================================== 00:26:24.099 Total : 26462.30 103.37 9690.28 1900.23 64136.74 00:26:24.099 00:26:24.099 18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:26:24.099 18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:24.099 18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:26:24.099 18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:24.099 18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:26:24.099 18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:24.099 18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:24.100 rmmod nvme_tcp 00:26:24.100 rmmod nvme_fabrics 00:26:24.100 rmmod nvme_keyring 00:26:24.100 18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:24.100 18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:26:24.100 18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:26:24.100 18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@515 -- # '[' -n 1264081 ']' 00:26:24.100 18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # killprocess 1264081 00:26:24.100 18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 1264081 ']' 00:26:24.100 18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 1264081 00:26:24.100 18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:26:24.100 18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:24.100 18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1264081 00:26:24.100 18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:24.100 18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:24.100 18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1264081' 00:26:24.100 killing process with pid 1264081 00:26:24.100 18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 1264081 00:26:24.100 18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 1264081 00:26:24.669 18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:24.669 
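Editor's note: the nvmf_get_stats/jq step traced above is the pass criterion for this ADQ test: with hardware steering active, the four I/O qpairs should be concentrated on a subset of the target's poll groups (here groups 000 and 001), leaving at least two of the four groups idle. Roughly:
  idle=$(./scripts/rpc.py nvmf_get_stats \
         | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
         | wc -l)
  # this run: idle=2, so the [[ idle -lt 2 ]] branch is not taken
  # (treating fewer than two idle groups as a failure is an assumption about the
  #  harness; only the comparison itself appears in the trace)
  [[ $idle -lt 2 ]] && echo 'ADQ steering not effective' && exit 1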
18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:26:24.669 18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:26:24.669 18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:26:24.669 18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-save 00:26:24.669 18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:26:24.669 18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-restore 00:26:24.669 18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:24.669 18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:24.669 18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:24.669 18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:24.669 18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:27.964 18:36:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:27.964 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:26:27.964 00:26:27.964 real 0m47.907s 00:26:27.964 user 2m45.167s 00:26:27.964 sys 0m10.646s 00:26:27.964 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:27.964 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:27.964 ************************************ 00:26:27.964 END TEST nvmf_perf_adq 00:26:27.964 ************************************ 00:26:27.964 18:36:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:26:27.964 18:36:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:27.964 18:36:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:27.964 18:36:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:27.964 ************************************ 00:26:27.964 START TEST nvmf_shutdown 00:26:27.964 ************************************ 00:26:27.964 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:26:27.964 * Looking for test storage... 
00:26:27.964 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:27.964 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:27.964 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:26:27.964 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:27.964 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:27.964 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:27.964 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:27.964 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:27.964 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:26:27.964 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:26:27.964 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:26:27.964 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:26:27.964 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:26:27.964 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:26:27.964 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:26:27.964 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:27.964 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:26:27.964 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:26:27.964 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:27.964 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:27.964 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:26:27.964 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:26:27.964 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:27.964 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:26:27.964 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:26:27.964 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:26:27.964 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:26:27.964 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:27.964 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:26:27.964 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:26:27.964 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:27.964 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:27.964 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:26:27.964 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:27.964 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:27.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:27.964 --rc genhtml_branch_coverage=1 00:26:27.964 --rc genhtml_function_coverage=1 00:26:27.964 --rc genhtml_legend=1 00:26:27.964 --rc geninfo_all_blocks=1 00:26:27.964 --rc geninfo_unexecuted_blocks=1 00:26:27.964 00:26:27.964 ' 00:26:27.964 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:27.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:27.964 --rc genhtml_branch_coverage=1 00:26:27.964 --rc genhtml_function_coverage=1 00:26:27.964 --rc genhtml_legend=1 00:26:27.964 --rc geninfo_all_blocks=1 00:26:27.964 --rc geninfo_unexecuted_blocks=1 00:26:27.964 00:26:27.964 ' 00:26:27.964 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:27.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:27.964 --rc genhtml_branch_coverage=1 00:26:27.964 --rc genhtml_function_coverage=1 00:26:27.964 --rc genhtml_legend=1 00:26:27.964 --rc geninfo_all_blocks=1 00:26:27.964 --rc geninfo_unexecuted_blocks=1 00:26:27.964 00:26:27.964 ' 00:26:27.965 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:27.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:27.965 --rc genhtml_branch_coverage=1 00:26:27.965 --rc genhtml_function_coverage=1 00:26:27.965 --rc genhtml_legend=1 00:26:27.965 --rc geninfo_all_blocks=1 00:26:27.965 --rc geninfo_unexecuted_blocks=1 00:26:27.965 00:26:27.965 ' 00:26:27.965 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:27.965 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
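Editor's note: the scripts/common.sh trace above is only a version gate: it extracts the installed lcov version and, if it is older than 2, enables the branch/function coverage flags. A simplified paraphrase (using sort -V in place of the script's field-by-field cmp_versions):
  ver=$(lcov --version | awk '{print $NF}')          # the trace compares 1.15 against 2 here
  if [ "$(printf '%s\n' "$ver" 2 | sort -V | head -n1)" = "$ver" ] && [ "$ver" != 2 ]; then
      LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
  fi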
00:26:27.965 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:27.965 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:27.965 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:27.965 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:27.965 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:27.965 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:27.965 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:27.965 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:27.965 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:27.965 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:27.965 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:27.965 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:26:27.965 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:27.965 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:27.965 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:27.965 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:27.965 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:27.965 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:26:27.965 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:27.965 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:27.965 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:27.965 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:27.965 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:27.965 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:27.965 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:26:27.965 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:27.965 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:26:27.965 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:27.965 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:27.965 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:27.965 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:27.965 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:27.965 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:27.965 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:27.965 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:27.965 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:27.965 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:27.965 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:27.965 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:27.965 18:36:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:26:27.965 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:27.965 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:27.965 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:27.965 ************************************ 00:26:27.965 START TEST nvmf_shutdown_tc1 00:26:27.965 ************************************ 00:26:27.965 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:26:27.965 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:26:27.965 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:26:27.965 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:26:27.965 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:27.965 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:27.965 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:27.965 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:27.965 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:27.965 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:27.965 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:27.965 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:27.965 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:26:27.965 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:26:27.965 18:36:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:30.576 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:30.576 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:26:30.576 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:30.576 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:30.576 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:30.576 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:30.576 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:30.576 18:36:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:26:30.576 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:30.576 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:26:30.576 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:26:30.576 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:26:30.576 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:26:30.576 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:26:30.576 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:26:30.576 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:30.576 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:30.576 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:30.576 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:30.576 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:30.576 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:30.576 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:30.576 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:30.576 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:30.576 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:30.576 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:30.576 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:30.576 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:30.576 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:30.576 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:30.576 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:30.576 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:30.576 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:30.576 18:36:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:30.576 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:26:30.576 Found 0000:84:00.0 (0x8086 - 0x159b) 00:26:30.576 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:30.576 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:30.576 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:30.576 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:30.576 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:30.576 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:30.576 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:26:30.576 Found 0000:84:00.1 (0x8086 - 0x159b) 00:26:30.576 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:30.576 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:30.576 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:30.576 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:30.576 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:30.576 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:30.576 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:30.576 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:30.576 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:30.576 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:30.576 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:30.576 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:30.576 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:30.576 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:30.576 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:30.576 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:26:30.576 Found net devices under 0000:84:00.0: cvl_0_0 00:26:30.576 18:36:59 
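Editor's note: the nvmf/common.sh trace above is NIC discovery: the script walks the known Intel E810 device IDs (0x1592/0x159b), finds both ports of the 0000:84:00 adapter, and keeps the netdevs that are up, ending with cvl_0_0 as the target interface and cvl_0_1 as the initiator interface. In outline (the literal PCI list and the operstate read are simplifications of what the script actually does):
  net_devs=()
  for pci in 0000:84:00.0 0000:84:00.1; do                 # matched via the e810 ID table
      for net_dev in /sys/bus/pci/devices/$pci/net/*; do
          dev=${net_dev##*/}                               # cvl_0_0, cvl_0_1
          [ "$(cat "$net_dev/operstate")" = up ] || continue   # the "[[ up == up ]]" checks above
          net_devs+=("$dev")
      done
  done
  NVMF_TARGET_INTERFACE=${net_devs[0]}      # cvl_0_0
  NVMF_INITIATOR_INTERFACE=${net_devs[1]}   # cvl_0_1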
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:30.576 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:30.576 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:30.576 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:30.577 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:30.577 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:30.577 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:30.577 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:30.577 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:26:30.577 Found net devices under 0000:84:00.1: cvl_0_1 00:26:30.577 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:30.577 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:30.577 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # is_hw=yes 00:26:30.577 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:30.577 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:30.577 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:30.577 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:30.577 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:30.577 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:30.577 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:30.577 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:30.577 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:30.577 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:30.577 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:30.577 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:30.577 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:30.577 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:26:30.577 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:30.577 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:30.577 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:30.577 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:30.837 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:30.837 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:30.837 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:30.837 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:30.837 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:30.837 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:30.837 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:30.837 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:30.837 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:30.837 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms 00:26:30.837 00:26:30.837 --- 10.0.0.2 ping statistics --- 00:26:30.837 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:30.837 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:26:30.837 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:30.837 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:30.837 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:26:30.837 00:26:30.837 --- 10.0.0.1 ping statistics --- 00:26:30.837 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:30.837 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:26:30.837 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:30.837 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # return 0 00:26:30.837 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:30.837 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:30.837 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:30.837 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:30.837 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:30.837 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:30.837 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:30.837 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:26:30.837 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:30.837 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:30.837 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:30.837 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # nvmfpid=1267543 00:26:30.837 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:30.837 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # waitforlisten 1267543 00:26:30.837 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 1267543 ']' 00:26:30.837 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:30.837 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:30.837 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:30.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
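Editor's note: the nvmf_tcp_init trace above builds the two-port test topology reused by every tcp test in this log: the target port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2/24, the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1/24, the NVMe/TCP port is opened in iptables, and both directions are verified with ping before nvmf_tgt is launched inside the namespace. Condensed:
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target port into its own namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # the trace adds an SPDK_NVMF comment tag
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # nvmf_tgt is then started with "ip netns exec cvl_0_0_ns_spdk" prefixed to NVMF_APP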
00:26:30.837 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:30.837 18:36:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:31.098 [2024-10-08 18:36:59.381870] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:26:31.098 [2024-10-08 18:36:59.382048] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:31.098 [2024-10-08 18:36:59.540899] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:31.357 [2024-10-08 18:36:59.760020] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:31.357 [2024-10-08 18:36:59.760144] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:31.357 [2024-10-08 18:36:59.760182] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:31.357 [2024-10-08 18:36:59.760213] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:31.357 [2024-10-08 18:36:59.760240] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:31.357 [2024-10-08 18:36:59.763960] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:26:31.357 [2024-10-08 18:36:59.764063] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:26:31.357 [2024-10-08 18:36:59.764116] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:26:31.357 [2024-10-08 18:36:59.764120] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:26:32.290 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:32.290 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:26:32.290 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:32.290 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:32.290 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:32.290 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:32.290 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:32.290 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.290 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:32.290 [2024-10-08 18:37:00.598180] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:32.290 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.290 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:26:32.290 18:37:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:26:32.290 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:32.290 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:32.290 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:32.290 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:32.290 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:32.290 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:32.290 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:32.290 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:32.290 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:32.290 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:32.290 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:32.290 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:32.290 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:32.290 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:32.290 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:32.290 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:32.290 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:32.290 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:32.290 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:32.290 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:32.290 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:32.290 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:32.290 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:32.290 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:26:32.290 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.290 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:32.290 Malloc1 
00:26:32.290 [2024-10-08 18:37:00.698881] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:32.290 Malloc2 00:26:32.290 Malloc3 00:26:32.290 Malloc4 00:26:32.547 Malloc5 00:26:32.547 Malloc6 00:26:32.547 Malloc7 00:26:32.547 Malloc8 00:26:32.547 Malloc9 00:26:32.805 Malloc10 00:26:32.805 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.805 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:26:32.805 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:32.805 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:32.806 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=1267736 00:26:32.806 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 1267736 /var/tmp/bdevperf.sock 00:26:32.806 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 1267736 ']' 00:26:32.806 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:26:32.806 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:32.806 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:32.806 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:32.806 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:26:32.806 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:26:32.806 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:32.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:26:32.806 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:32.806 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:32.806 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:32.806 { 00:26:32.806 "params": { 00:26:32.806 "name": "Nvme$subsystem", 00:26:32.806 "trtype": "$TEST_TRANSPORT", 00:26:32.806 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:32.806 "adrfam": "ipv4", 00:26:32.806 "trsvcid": "$NVMF_PORT", 00:26:32.806 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:32.806 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:32.806 "hdgst": ${hdgst:-false}, 00:26:32.806 "ddgst": ${ddgst:-false} 00:26:32.806 }, 00:26:32.806 "method": "bdev_nvme_attach_controller" 00:26:32.806 } 00:26:32.806 EOF 00:26:32.806 )") 00:26:32.806 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:32.806 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:26:32.806 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:32.806 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:32.806 { 00:26:32.806 "params": { 00:26:32.806 "name": "Nvme$subsystem", 00:26:32.806 "trtype": "$TEST_TRANSPORT", 00:26:32.806 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:32.806 "adrfam": "ipv4", 00:26:32.806 "trsvcid": "$NVMF_PORT", 00:26:32.806 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:32.806 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:32.806 "hdgst": ${hdgst:-false}, 00:26:32.806 "ddgst": ${ddgst:-false} 00:26:32.806 }, 00:26:32.806 "method": "bdev_nvme_attach_controller" 00:26:32.806 } 00:26:32.806 EOF 00:26:32.806 )") 00:26:32.806 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:26:32.806 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:32.806 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:32.806 { 00:26:32.806 "params": { 00:26:32.806 "name": "Nvme$subsystem", 00:26:32.806 "trtype": "$TEST_TRANSPORT", 00:26:32.806 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:32.806 "adrfam": "ipv4", 00:26:32.806 "trsvcid": "$NVMF_PORT", 00:26:32.806 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:32.806 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:32.806 "hdgst": ${hdgst:-false}, 00:26:32.806 "ddgst": ${ddgst:-false} 00:26:32.806 }, 00:26:32.806 "method": "bdev_nvme_attach_controller" 00:26:32.806 } 00:26:32.806 EOF 00:26:32.806 )") 00:26:32.806 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:26:32.806 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:32.806 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:32.806 { 00:26:32.806 "params": { 00:26:32.806 "name": "Nvme$subsystem", 00:26:32.806 "trtype": "$TEST_TRANSPORT", 00:26:32.806 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:32.806 "adrfam": "ipv4", 00:26:32.806 "trsvcid": 
"$NVMF_PORT", 00:26:32.806 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:32.806 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:32.806 "hdgst": ${hdgst:-false}, 00:26:32.806 "ddgst": ${ddgst:-false} 00:26:32.806 }, 00:26:32.806 "method": "bdev_nvme_attach_controller" 00:26:32.806 } 00:26:32.806 EOF 00:26:32.806 )") 00:26:32.806 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:26:32.806 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:32.806 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:32.806 { 00:26:32.806 "params": { 00:26:32.806 "name": "Nvme$subsystem", 00:26:32.806 "trtype": "$TEST_TRANSPORT", 00:26:32.806 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:32.806 "adrfam": "ipv4", 00:26:32.806 "trsvcid": "$NVMF_PORT", 00:26:32.806 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:32.806 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:32.806 "hdgst": ${hdgst:-false}, 00:26:32.806 "ddgst": ${ddgst:-false} 00:26:32.806 }, 00:26:32.806 "method": "bdev_nvme_attach_controller" 00:26:32.806 } 00:26:32.806 EOF 00:26:32.806 )") 00:26:32.806 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:26:32.806 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:32.806 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:32.806 { 00:26:32.806 "params": { 00:26:32.806 "name": "Nvme$subsystem", 00:26:32.806 "trtype": "$TEST_TRANSPORT", 00:26:32.806 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:32.806 "adrfam": "ipv4", 00:26:32.806 "trsvcid": "$NVMF_PORT", 00:26:32.806 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:32.806 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:32.806 "hdgst": ${hdgst:-false}, 00:26:32.806 "ddgst": ${ddgst:-false} 00:26:32.806 }, 00:26:32.806 "method": "bdev_nvme_attach_controller" 00:26:32.806 } 00:26:32.806 EOF 00:26:32.806 )") 00:26:32.806 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:26:32.806 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:32.806 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:32.806 { 00:26:32.806 "params": { 00:26:32.806 "name": "Nvme$subsystem", 00:26:32.806 "trtype": "$TEST_TRANSPORT", 00:26:32.806 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:32.806 "adrfam": "ipv4", 00:26:32.806 "trsvcid": "$NVMF_PORT", 00:26:32.806 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:32.806 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:32.806 "hdgst": ${hdgst:-false}, 00:26:32.806 "ddgst": ${ddgst:-false} 00:26:32.806 }, 00:26:32.806 "method": "bdev_nvme_attach_controller" 00:26:32.806 } 00:26:32.806 EOF 00:26:32.806 )") 00:26:32.806 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:26:32.806 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:32.806 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:32.806 { 00:26:32.806 "params": 
{ 00:26:32.806 "name": "Nvme$subsystem", 00:26:32.806 "trtype": "$TEST_TRANSPORT", 00:26:32.806 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:32.806 "adrfam": "ipv4", 00:26:32.806 "trsvcid": "$NVMF_PORT", 00:26:32.806 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:32.806 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:32.806 "hdgst": ${hdgst:-false}, 00:26:32.806 "ddgst": ${ddgst:-false} 00:26:32.806 }, 00:26:32.806 "method": "bdev_nvme_attach_controller" 00:26:32.806 } 00:26:32.806 EOF 00:26:32.806 )") 00:26:32.806 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:26:32.806 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:32.806 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:32.806 { 00:26:32.806 "params": { 00:26:32.806 "name": "Nvme$subsystem", 00:26:32.806 "trtype": "$TEST_TRANSPORT", 00:26:32.806 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:32.806 "adrfam": "ipv4", 00:26:32.806 "trsvcid": "$NVMF_PORT", 00:26:32.806 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:32.806 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:32.806 "hdgst": ${hdgst:-false}, 00:26:32.806 "ddgst": ${ddgst:-false} 00:26:32.806 }, 00:26:32.806 "method": "bdev_nvme_attach_controller" 00:26:32.806 } 00:26:32.807 EOF 00:26:32.807 )") 00:26:32.807 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:26:32.807 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:32.807 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:32.807 { 00:26:32.807 "params": { 00:26:32.807 "name": "Nvme$subsystem", 00:26:32.807 "trtype": "$TEST_TRANSPORT", 00:26:32.807 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:32.807 "adrfam": "ipv4", 00:26:32.807 "trsvcid": "$NVMF_PORT", 00:26:32.807 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:32.807 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:32.807 "hdgst": ${hdgst:-false}, 00:26:32.807 "ddgst": ${ddgst:-false} 00:26:32.807 }, 00:26:32.807 "method": "bdev_nvme_attach_controller" 00:26:32.807 } 00:26:32.807 EOF 00:26:32.807 )") 00:26:32.807 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:26:32.807 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # jq . 
00:26:32.807 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:26:32.807 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:26:32.807 "params": { 00:26:32.807 "name": "Nvme1", 00:26:32.807 "trtype": "tcp", 00:26:32.807 "traddr": "10.0.0.2", 00:26:32.807 "adrfam": "ipv4", 00:26:32.807 "trsvcid": "4420", 00:26:32.807 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:32.807 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:32.807 "hdgst": false, 00:26:32.807 "ddgst": false 00:26:32.807 }, 00:26:32.807 "method": "bdev_nvme_attach_controller" 00:26:32.807 },{ 00:26:32.807 "params": { 00:26:32.807 "name": "Nvme2", 00:26:32.807 "trtype": "tcp", 00:26:32.807 "traddr": "10.0.0.2", 00:26:32.807 "adrfam": "ipv4", 00:26:32.807 "trsvcid": "4420", 00:26:32.807 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:32.807 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:32.807 "hdgst": false, 00:26:32.807 "ddgst": false 00:26:32.807 }, 00:26:32.807 "method": "bdev_nvme_attach_controller" 00:26:32.807 },{ 00:26:32.807 "params": { 00:26:32.807 "name": "Nvme3", 00:26:32.807 "trtype": "tcp", 00:26:32.807 "traddr": "10.0.0.2", 00:26:32.807 "adrfam": "ipv4", 00:26:32.807 "trsvcid": "4420", 00:26:32.807 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:32.807 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:32.807 "hdgst": false, 00:26:32.807 "ddgst": false 00:26:32.807 }, 00:26:32.807 "method": "bdev_nvme_attach_controller" 00:26:32.807 },{ 00:26:32.807 "params": { 00:26:32.807 "name": "Nvme4", 00:26:32.807 "trtype": "tcp", 00:26:32.807 "traddr": "10.0.0.2", 00:26:32.807 "adrfam": "ipv4", 00:26:32.807 "trsvcid": "4420", 00:26:32.807 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:32.807 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:32.807 "hdgst": false, 00:26:32.807 "ddgst": false 00:26:32.807 }, 00:26:32.807 "method": "bdev_nvme_attach_controller" 00:26:32.807 },{ 00:26:32.807 "params": { 00:26:32.807 "name": "Nvme5", 00:26:32.807 "trtype": "tcp", 00:26:32.807 "traddr": "10.0.0.2", 00:26:32.807 "adrfam": "ipv4", 00:26:32.807 "trsvcid": "4420", 00:26:32.807 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:32.807 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:32.807 "hdgst": false, 00:26:32.807 "ddgst": false 00:26:32.807 }, 00:26:32.807 "method": "bdev_nvme_attach_controller" 00:26:32.807 },{ 00:26:32.807 "params": { 00:26:32.807 "name": "Nvme6", 00:26:32.807 "trtype": "tcp", 00:26:32.807 "traddr": "10.0.0.2", 00:26:32.807 "adrfam": "ipv4", 00:26:32.807 "trsvcid": "4420", 00:26:32.807 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:32.807 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:32.807 "hdgst": false, 00:26:32.807 "ddgst": false 00:26:32.807 }, 00:26:32.807 "method": "bdev_nvme_attach_controller" 00:26:32.807 },{ 00:26:32.807 "params": { 00:26:32.807 "name": "Nvme7", 00:26:32.807 "trtype": "tcp", 00:26:32.807 "traddr": "10.0.0.2", 00:26:32.807 "adrfam": "ipv4", 00:26:32.807 "trsvcid": "4420", 00:26:32.807 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:32.807 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:32.807 "hdgst": false, 00:26:32.807 "ddgst": false 00:26:32.807 }, 00:26:32.807 "method": "bdev_nvme_attach_controller" 00:26:32.807 },{ 00:26:32.807 "params": { 00:26:32.807 "name": "Nvme8", 00:26:32.807 "trtype": "tcp", 00:26:32.807 "traddr": "10.0.0.2", 00:26:32.807 "adrfam": "ipv4", 00:26:32.807 "trsvcid": "4420", 00:26:32.807 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:32.807 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:26:32.807 "hdgst": false, 00:26:32.807 "ddgst": false 00:26:32.807 }, 00:26:32.807 "method": "bdev_nvme_attach_controller" 00:26:32.807 },{ 00:26:32.807 "params": { 00:26:32.807 "name": "Nvme9", 00:26:32.807 "trtype": "tcp", 00:26:32.807 "traddr": "10.0.0.2", 00:26:32.807 "adrfam": "ipv4", 00:26:32.807 "trsvcid": "4420", 00:26:32.807 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:32.807 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:32.807 "hdgst": false, 00:26:32.807 "ddgst": false 00:26:32.807 }, 00:26:32.807 "method": "bdev_nvme_attach_controller" 00:26:32.807 },{ 00:26:32.807 "params": { 00:26:32.807 "name": "Nvme10", 00:26:32.807 "trtype": "tcp", 00:26:32.807 "traddr": "10.0.0.2", 00:26:32.807 "adrfam": "ipv4", 00:26:32.807 "trsvcid": "4420", 00:26:32.807 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:32.807 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:32.807 "hdgst": false, 00:26:32.807 "ddgst": false 00:26:32.807 }, 00:26:32.807 "method": "bdev_nvme_attach_controller" 00:26:32.807 }' 00:26:32.807 [2024-10-08 18:37:01.197838] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:26:32.807 [2024-10-08 18:37:01.197926] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:26:32.807 [2024-10-08 18:37:01.266074] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:33.065 [2024-10-08 18:37:01.379032] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:26:35.587 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:35.587 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:26:35.587 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:35.587 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.587 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:35.587 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.587 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 1267736 00:26:35.587 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:26:35.587 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:26:36.152 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 1267736 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:26:36.152 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 1267543 00:26:36.152 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:26:36.152 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 
1 2 3 4 5 6 7 8 9 10 00:26:36.152 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:26:36.152 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:26:36.152 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:36.152 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:36.152 { 00:26:36.152 "params": { 00:26:36.152 "name": "Nvme$subsystem", 00:26:36.152 "trtype": "$TEST_TRANSPORT", 00:26:36.152 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.152 "adrfam": "ipv4", 00:26:36.152 "trsvcid": "$NVMF_PORT", 00:26:36.152 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.152 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.152 "hdgst": ${hdgst:-false}, 00:26:36.152 "ddgst": ${ddgst:-false} 00:26:36.152 }, 00:26:36.152 "method": "bdev_nvme_attach_controller" 00:26:36.152 } 00:26:36.152 EOF 00:26:36.152 )") 00:26:36.152 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:26:36.152 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:36.152 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:36.152 { 00:26:36.152 "params": { 00:26:36.152 "name": "Nvme$subsystem", 00:26:36.152 "trtype": "$TEST_TRANSPORT", 00:26:36.152 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.152 "adrfam": "ipv4", 00:26:36.152 "trsvcid": "$NVMF_PORT", 00:26:36.152 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.152 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.152 "hdgst": ${hdgst:-false}, 00:26:36.152 "ddgst": ${ddgst:-false} 00:26:36.152 }, 00:26:36.152 "method": "bdev_nvme_attach_controller" 00:26:36.152 } 00:26:36.152 EOF 00:26:36.152 )") 00:26:36.152 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:26:36.152 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:36.152 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:36.152 { 00:26:36.152 "params": { 00:26:36.152 "name": "Nvme$subsystem", 00:26:36.152 "trtype": "$TEST_TRANSPORT", 00:26:36.152 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.152 "adrfam": "ipv4", 00:26:36.152 "trsvcid": "$NVMF_PORT", 00:26:36.152 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.152 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.152 "hdgst": ${hdgst:-false}, 00:26:36.152 "ddgst": ${ddgst:-false} 00:26:36.152 }, 00:26:36.152 "method": "bdev_nvme_attach_controller" 00:26:36.152 } 00:26:36.152 EOF 00:26:36.152 )") 00:26:36.152 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:26:36.152 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:36.152 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:36.152 { 00:26:36.152 "params": { 00:26:36.152 "name": "Nvme$subsystem", 00:26:36.152 "trtype": "$TEST_TRANSPORT", 00:26:36.152 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.152 "adrfam": "ipv4", 00:26:36.152 
"trsvcid": "$NVMF_PORT", 00:26:36.152 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.152 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.152 "hdgst": ${hdgst:-false}, 00:26:36.152 "ddgst": ${ddgst:-false} 00:26:36.152 }, 00:26:36.152 "method": "bdev_nvme_attach_controller" 00:26:36.152 } 00:26:36.152 EOF 00:26:36.152 )") 00:26:36.152 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:26:36.152 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:36.152 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:36.152 { 00:26:36.152 "params": { 00:26:36.152 "name": "Nvme$subsystem", 00:26:36.152 "trtype": "$TEST_TRANSPORT", 00:26:36.152 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.152 "adrfam": "ipv4", 00:26:36.152 "trsvcid": "$NVMF_PORT", 00:26:36.152 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.152 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.152 "hdgst": ${hdgst:-false}, 00:26:36.152 "ddgst": ${ddgst:-false} 00:26:36.152 }, 00:26:36.152 "method": "bdev_nvme_attach_controller" 00:26:36.152 } 00:26:36.152 EOF 00:26:36.152 )") 00:26:36.152 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:26:36.152 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:36.152 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:36.152 { 00:26:36.152 "params": { 00:26:36.152 "name": "Nvme$subsystem", 00:26:36.152 "trtype": "$TEST_TRANSPORT", 00:26:36.152 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.152 "adrfam": "ipv4", 00:26:36.152 "trsvcid": "$NVMF_PORT", 00:26:36.152 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.152 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.152 "hdgst": ${hdgst:-false}, 00:26:36.152 "ddgst": ${ddgst:-false} 00:26:36.152 }, 00:26:36.152 "method": "bdev_nvme_attach_controller" 00:26:36.152 } 00:26:36.152 EOF 00:26:36.152 )") 00:26:36.152 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:26:36.152 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:36.152 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:36.152 { 00:26:36.152 "params": { 00:26:36.152 "name": "Nvme$subsystem", 00:26:36.152 "trtype": "$TEST_TRANSPORT", 00:26:36.152 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.152 "adrfam": "ipv4", 00:26:36.152 "trsvcid": "$NVMF_PORT", 00:26:36.152 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.152 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.152 "hdgst": ${hdgst:-false}, 00:26:36.152 "ddgst": ${ddgst:-false} 00:26:36.152 }, 00:26:36.152 "method": "bdev_nvme_attach_controller" 00:26:36.152 } 00:26:36.152 EOF 00:26:36.152 )") 00:26:36.152 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:26:36.152 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:36.152 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:36.152 { 00:26:36.152 
"params": { 00:26:36.152 "name": "Nvme$subsystem", 00:26:36.152 "trtype": "$TEST_TRANSPORT", 00:26:36.152 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.152 "adrfam": "ipv4", 00:26:36.152 "trsvcid": "$NVMF_PORT", 00:26:36.152 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.152 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.152 "hdgst": ${hdgst:-false}, 00:26:36.152 "ddgst": ${ddgst:-false} 00:26:36.152 }, 00:26:36.152 "method": "bdev_nvme_attach_controller" 00:26:36.152 } 00:26:36.152 EOF 00:26:36.152 )") 00:26:36.152 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:26:36.152 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:36.152 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:36.152 { 00:26:36.152 "params": { 00:26:36.152 "name": "Nvme$subsystem", 00:26:36.152 "trtype": "$TEST_TRANSPORT", 00:26:36.152 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.152 "adrfam": "ipv4", 00:26:36.152 "trsvcid": "$NVMF_PORT", 00:26:36.152 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.152 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.152 "hdgst": ${hdgst:-false}, 00:26:36.152 "ddgst": ${ddgst:-false} 00:26:36.152 }, 00:26:36.152 "method": "bdev_nvme_attach_controller" 00:26:36.152 } 00:26:36.152 EOF 00:26:36.152 )") 00:26:36.152 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:26:36.152 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:36.152 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:36.153 { 00:26:36.153 "params": { 00:26:36.153 "name": "Nvme$subsystem", 00:26:36.153 "trtype": "$TEST_TRANSPORT", 00:26:36.153 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.153 "adrfam": "ipv4", 00:26:36.153 "trsvcid": "$NVMF_PORT", 00:26:36.153 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.153 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.153 "hdgst": ${hdgst:-false}, 00:26:36.153 "ddgst": ${ddgst:-false} 00:26:36.153 }, 00:26:36.153 "method": "bdev_nvme_attach_controller" 00:26:36.153 } 00:26:36.153 EOF 00:26:36.153 )") 00:26:36.153 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:26:36.410 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # jq . 
00:26:36.410 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:26:36.410 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:26:36.410 "params": { 00:26:36.410 "name": "Nvme1", 00:26:36.410 "trtype": "tcp", 00:26:36.410 "traddr": "10.0.0.2", 00:26:36.410 "adrfam": "ipv4", 00:26:36.410 "trsvcid": "4420", 00:26:36.410 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:36.410 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:36.410 "hdgst": false, 00:26:36.410 "ddgst": false 00:26:36.410 }, 00:26:36.410 "method": "bdev_nvme_attach_controller" 00:26:36.410 },{ 00:26:36.410 "params": { 00:26:36.410 "name": "Nvme2", 00:26:36.410 "trtype": "tcp", 00:26:36.410 "traddr": "10.0.0.2", 00:26:36.410 "adrfam": "ipv4", 00:26:36.410 "trsvcid": "4420", 00:26:36.410 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:36.410 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:36.410 "hdgst": false, 00:26:36.410 "ddgst": false 00:26:36.410 }, 00:26:36.410 "method": "bdev_nvme_attach_controller" 00:26:36.410 },{ 00:26:36.410 "params": { 00:26:36.410 "name": "Nvme3", 00:26:36.410 "trtype": "tcp", 00:26:36.410 "traddr": "10.0.0.2", 00:26:36.410 "adrfam": "ipv4", 00:26:36.410 "trsvcid": "4420", 00:26:36.410 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:36.410 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:36.410 "hdgst": false, 00:26:36.410 "ddgst": false 00:26:36.410 }, 00:26:36.410 "method": "bdev_nvme_attach_controller" 00:26:36.410 },{ 00:26:36.410 "params": { 00:26:36.410 "name": "Nvme4", 00:26:36.410 "trtype": "tcp", 00:26:36.410 "traddr": "10.0.0.2", 00:26:36.410 "adrfam": "ipv4", 00:26:36.410 "trsvcid": "4420", 00:26:36.410 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:36.410 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:36.410 "hdgst": false, 00:26:36.410 "ddgst": false 00:26:36.410 }, 00:26:36.410 "method": "bdev_nvme_attach_controller" 00:26:36.410 },{ 00:26:36.410 "params": { 00:26:36.410 "name": "Nvme5", 00:26:36.410 "trtype": "tcp", 00:26:36.410 "traddr": "10.0.0.2", 00:26:36.410 "adrfam": "ipv4", 00:26:36.410 "trsvcid": "4420", 00:26:36.410 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:36.410 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:36.410 "hdgst": false, 00:26:36.410 "ddgst": false 00:26:36.410 }, 00:26:36.410 "method": "bdev_nvme_attach_controller" 00:26:36.410 },{ 00:26:36.410 "params": { 00:26:36.410 "name": "Nvme6", 00:26:36.410 "trtype": "tcp", 00:26:36.410 "traddr": "10.0.0.2", 00:26:36.410 "adrfam": "ipv4", 00:26:36.410 "trsvcid": "4420", 00:26:36.410 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:36.410 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:36.410 "hdgst": false, 00:26:36.410 "ddgst": false 00:26:36.410 }, 00:26:36.410 "method": "bdev_nvme_attach_controller" 00:26:36.410 },{ 00:26:36.410 "params": { 00:26:36.410 "name": "Nvme7", 00:26:36.410 "trtype": "tcp", 00:26:36.410 "traddr": "10.0.0.2", 00:26:36.410 "adrfam": "ipv4", 00:26:36.410 "trsvcid": "4420", 00:26:36.410 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:36.410 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:36.410 "hdgst": false, 00:26:36.410 "ddgst": false 00:26:36.410 }, 00:26:36.410 "method": "bdev_nvme_attach_controller" 00:26:36.410 },{ 00:26:36.410 "params": { 00:26:36.410 "name": "Nvme8", 00:26:36.411 "trtype": "tcp", 00:26:36.411 "traddr": "10.0.0.2", 00:26:36.411 "adrfam": "ipv4", 00:26:36.411 "trsvcid": "4420", 00:26:36.411 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:36.411 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:26:36.411 "hdgst": false, 00:26:36.411 "ddgst": false 00:26:36.411 }, 00:26:36.411 "method": "bdev_nvme_attach_controller" 00:26:36.411 },{ 00:26:36.411 "params": { 00:26:36.411 "name": "Nvme9", 00:26:36.411 "trtype": "tcp", 00:26:36.411 "traddr": "10.0.0.2", 00:26:36.411 "adrfam": "ipv4", 00:26:36.411 "trsvcid": "4420", 00:26:36.411 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:36.411 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:36.411 "hdgst": false, 00:26:36.411 "ddgst": false 00:26:36.411 }, 00:26:36.411 "method": "bdev_nvme_attach_controller" 00:26:36.411 },{ 00:26:36.411 "params": { 00:26:36.411 "name": "Nvme10", 00:26:36.411 "trtype": "tcp", 00:26:36.411 "traddr": "10.0.0.2", 00:26:36.411 "adrfam": "ipv4", 00:26:36.411 "trsvcid": "4420", 00:26:36.411 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:36.411 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:36.411 "hdgst": false, 00:26:36.411 "ddgst": false 00:26:36.411 }, 00:26:36.411 "method": "bdev_nvme_attach_controller" 00:26:36.411 }' 00:26:36.411 [2024-10-08 18:37:04.715782] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:26:36.411 [2024-10-08 18:37:04.715869] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1268160 ] 00:26:36.411 [2024-10-08 18:37:04.827970] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:36.411 [2024-10-08 18:37:04.942210] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:26:38.308 Running I/O for 1 seconds... 00:26:39.132 1733.00 IOPS, 108.31 MiB/s 00:26:39.132 Latency(us) 00:26:39.132 [2024-10-08T16:37:07.669Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:39.132 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:39.132 Verification LBA range: start 0x0 length 0x400 00:26:39.132 Nvme1n1 : 1.16 225.75 14.11 0.00 0.00 277591.23 15049.01 253211.69 00:26:39.132 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:39.132 Verification LBA range: start 0x0 length 0x400 00:26:39.132 Nvme2n1 : 1.16 220.19 13.76 0.00 0.00 280249.84 19029.71 273406.48 00:26:39.132 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:39.132 Verification LBA range: start 0x0 length 0x400 00:26:39.132 Nvme3n1 : 1.15 223.52 13.97 0.00 0.00 273465.65 32234.00 265639.25 00:26:39.132 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:39.132 Verification LBA range: start 0x0 length 0x400 00:26:39.132 Nvme4n1 : 1.15 222.71 13.92 0.00 0.00 269401.88 18641.35 267192.70 00:26:39.132 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:39.132 Verification LBA range: start 0x0 length 0x400 00:26:39.132 Nvme5n1 : 1.19 215.34 13.46 0.00 0.00 274685.72 21651.15 288940.94 00:26:39.132 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:39.132 Verification LBA range: start 0x0 length 0x400 00:26:39.132 Nvme6n1 : 1.18 216.68 13.54 0.00 0.00 268190.53 20971.52 271853.04 00:26:39.132 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:39.132 Verification LBA range: start 0x0 length 0x400 00:26:39.132 Nvme7n1 : 1.17 221.32 13.83 0.00 0.00 257048.81 3737.98 251658.24 00:26:39.132 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:39.132 
Verification LBA range: start 0x0 length 0x400 00:26:39.132 Nvme8n1 : 1.17 221.73 13.86 0.00 0.00 251749.82 4126.34 245444.46 00:26:39.132 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:39.132 Verification LBA range: start 0x0 length 0x400 00:26:39.132 Nvme9n1 : 1.20 213.92 13.37 0.00 0.00 258043.64 21165.70 293601.28 00:26:39.132 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:39.132 Verification LBA range: start 0x0 length 0x400 00:26:39.132 Nvme10n1 : 1.19 214.68 13.42 0.00 0.00 252515.93 20971.52 285834.05 00:26:39.132 [2024-10-08T16:37:07.669Z] =================================================================================================================== 00:26:39.132 [2024-10-08T16:37:07.669Z] Total : 2195.83 137.24 0.00 0.00 266279.27 3737.98 293601.28 00:26:39.390 18:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:26:39.390 18:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:26:39.390 18:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:39.390 18:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:39.390 18:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:26:39.390 18:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:39.390 18:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:26:39.390 18:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:39.390 18:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:26:39.390 18:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:39.390 18:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:39.390 rmmod nvme_tcp 00:26:39.390 rmmod nvme_fabrics 00:26:39.647 rmmod nvme_keyring 00:26:39.647 18:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:39.647 18:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:26:39.647 18:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:26:39.647 18:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@515 -- # '[' -n 1267543 ']' 00:26:39.647 18:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # killprocess 1267543 00:26:39.647 18:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 1267543 ']' 00:26:39.647 18:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 1267543 00:26:39.648 18:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:26:39.648 18:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:39.648 18:37:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1267543 00:26:39.648 18:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:39.648 18:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:39.648 18:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1267543' 00:26:39.648 killing process with pid 1267543 00:26:39.648 18:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 1267543 00:26:39.648 18:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 1267543 00:26:40.217 18:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:40.217 18:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:26:40.217 18:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:26:40.217 18:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:26:40.217 18:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # iptables-save 00:26:40.217 18:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:26:40.217 18:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # iptables-restore 00:26:40.217 18:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:40.217 18:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:40.217 18:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:40.217 18:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:40.217 18:37:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:42.758 00:26:42.758 real 0m14.424s 00:26:42.758 user 0m41.711s 00:26:42.758 sys 0m4.273s 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:42.758 ************************************ 00:26:42.758 END TEST nvmf_shutdown_tc1 00:26:42.758 ************************************ 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:42.758 ************************************ 00:26:42.758 START TEST nvmf_shutdown_tc2 00:26:42.758 ************************************ 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:26:42.758 18:37:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:26:42.758 Found 0000:84:00.0 (0x8086 - 0x159b) 00:26:42.758 18:37:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:26:42.758 Found 0000:84:00.1 (0x8086 - 0x159b) 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:26:42.758 Found net devices under 0000:84:00.0: cvl_0_0 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:42.758 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:42.759 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:42.759 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:42.759 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:42.759 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:26:42.759 Found net devices under 0000:84:00.1: cvl_0_1 00:26:42.759 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:42.759 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:42.759 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # is_hw=yes 00:26:42.759 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:42.759 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:42.759 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:42.759 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:42.759 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:42.759 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:42.759 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:42.759 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:42.759 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:42.759 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:42.759 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:42.759 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:42.759 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:42.759 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:42.759 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:42.759 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:42.759 18:37:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:42.759 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:42.759 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:42.759 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:42.759 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:42.759 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:42.759 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:42.759 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:42.759 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:42.759 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:42.759 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:42.759 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.283 ms 00:26:42.759 00:26:42.759 --- 10.0.0.2 ping statistics --- 00:26:42.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:42.759 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:26:42.759 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:42.759 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:42.759 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:26:42.759 00:26:42.759 --- 10.0.0.1 ping statistics --- 00:26:42.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:42.759 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:26:42.759 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:42.759 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # return 0 00:26:42.759 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:42.759 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:42.759 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:42.759 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:42.759 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:42.759 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:42.759 18:37:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:42.759 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:26:42.759 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:42.759 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:42.759 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:42.759 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # nvmfpid=1269045 00:26:42.759 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:42.759 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # waitforlisten 1269045 00:26:42.759 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1269045 ']' 00:26:42.759 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:42.759 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:42.759 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:42.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
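The nvmf_tcp_init trace above builds the physical-loopback topology used by these TCP tests: the first E810 port (cvl_0_0, under 0000:84:00.0) is moved into a private network namespace and addressed as the target (10.0.0.2), while the second port (cvl_0_1, under 0000:84:00.1) stays in the default namespace as the initiator (10.0.0.1); in this phy configuration the two ports are assumed to be cabled back to back. Condensed to its essential commands, with the interface, namespace and address names taken directly from the log, the setup is:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the default netns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                   # default netns -> target address
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> initiator address

The nvmf_tgt application is then launched inside that namespace (hence the ip netns exec cvl_0_0_ns_spdk prefix on the nvmfpid line above), so only traffic that actually crosses the link reaches the target.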
00:26:42.759 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:42.759 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:42.759 [2024-10-08 18:37:11.096303] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:26:42.759 [2024-10-08 18:37:11.096400] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:42.759 [2024-10-08 18:37:11.215254] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:43.018 [2024-10-08 18:37:11.437732] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:43.018 [2024-10-08 18:37:11.437844] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:43.018 [2024-10-08 18:37:11.437881] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:43.018 [2024-10-08 18:37:11.437911] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:43.018 [2024-10-08 18:37:11.437939] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:43.018 [2024-10-08 18:37:11.441381] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:26:43.018 [2024-10-08 18:37:11.441480] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:26:43.018 [2024-10-08 18:37:11.441537] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:26:43.018 [2024-10-08 18:37:11.441534] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:26:43.276 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:43.276 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:26:43.276 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:43.276 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:43.276 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:43.276 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:43.276 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:43.276 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.276 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:43.276 [2024-10-08 18:37:11.616489] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:43.276 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.276 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:26:43.276 18:37:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:26:43.276 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:43.276 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:43.276 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:43.276 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:43.276 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:43.276 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:43.276 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:43.276 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:43.276 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:43.276 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:43.276 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:43.276 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:43.276 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:43.276 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:43.276 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:43.276 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:43.276 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:43.276 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:43.276 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:43.276 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:43.276 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:43.276 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:43.276 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:43.276 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:26:43.276 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.276 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:43.276 Malloc1 
00:26:43.276 [2024-10-08 18:37:11.714983] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:43.276 Malloc2 00:26:43.276 Malloc3 00:26:43.535 Malloc4 00:26:43.535 Malloc5 00:26:43.535 Malloc6 00:26:43.535 Malloc7 00:26:43.535 Malloc8 00:26:43.795 Malloc9 00:26:43.795 Malloc10 00:26:43.795 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.795 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:26:43.795 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:43.795 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:43.795 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=1269226 00:26:43.795 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 1269226 /var/tmp/bdevperf.sock 00:26:43.795 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1269226 ']' 00:26:43.795 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:43.795 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:26:43.795 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:43.795 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:43.795 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # config=() 00:26:43.795 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:43.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
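The shutdown.sh@28/@29 loop above only spools one RPC block per subsystem into rpcs.txt; the Malloc1 through Malloc10 names and the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice are the output produced once that file is replayed through rpc_cmd (scripts/rpc.py) at shutdown.sh@36. The file's contents are not echoed in the trace, but each per-subsystem block amounts to calls along these lines (an illustrative sketch; the malloc size, block size and serial-number arguments are placeholders, not values taken from the log):

# repeated for i = 1..10
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420

The subsystem NQNs are not guessed: they match the subnqn values in the bdevperf JSON generated just below.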
00:26:43.795 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # local subsystem config 00:26:43.795 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:43.795 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:43.795 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:43.795 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:43.795 { 00:26:43.795 "params": { 00:26:43.795 "name": "Nvme$subsystem", 00:26:43.795 "trtype": "$TEST_TRANSPORT", 00:26:43.795 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:43.795 "adrfam": "ipv4", 00:26:43.795 "trsvcid": "$NVMF_PORT", 00:26:43.795 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:43.795 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:43.795 "hdgst": ${hdgst:-false}, 00:26:43.795 "ddgst": ${ddgst:-false} 00:26:43.795 }, 00:26:43.795 "method": "bdev_nvme_attach_controller" 00:26:43.795 } 00:26:43.795 EOF 00:26:43.795 )") 00:26:43.795 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:26:43.795 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:43.795 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:43.795 { 00:26:43.795 "params": { 00:26:43.795 "name": "Nvme$subsystem", 00:26:43.795 "trtype": "$TEST_TRANSPORT", 00:26:43.795 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:43.795 "adrfam": "ipv4", 00:26:43.795 "trsvcid": "$NVMF_PORT", 00:26:43.795 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:43.795 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:43.795 "hdgst": ${hdgst:-false}, 00:26:43.795 "ddgst": ${ddgst:-false} 00:26:43.795 }, 00:26:43.795 "method": "bdev_nvme_attach_controller" 00:26:43.795 } 00:26:43.795 EOF 00:26:43.795 )") 00:26:43.795 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:26:43.795 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:43.795 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:43.795 { 00:26:43.795 "params": { 00:26:43.795 "name": "Nvme$subsystem", 00:26:43.795 "trtype": "$TEST_TRANSPORT", 00:26:43.795 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:43.795 "adrfam": "ipv4", 00:26:43.795 "trsvcid": "$NVMF_PORT", 00:26:43.795 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:43.795 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:43.795 "hdgst": ${hdgst:-false}, 00:26:43.795 "ddgst": ${ddgst:-false} 00:26:43.795 }, 00:26:43.795 "method": "bdev_nvme_attach_controller" 00:26:43.795 } 00:26:43.795 EOF 00:26:43.795 )") 00:26:43.795 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:26:43.795 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:43.795 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:43.795 { 00:26:43.795 "params": { 00:26:43.795 "name": "Nvme$subsystem", 00:26:43.795 
"trtype": "$TEST_TRANSPORT", 00:26:43.795 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:43.795 "adrfam": "ipv4", 00:26:43.795 "trsvcid": "$NVMF_PORT", 00:26:43.795 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:43.795 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:43.795 "hdgst": ${hdgst:-false}, 00:26:43.795 "ddgst": ${ddgst:-false} 00:26:43.795 }, 00:26:43.795 "method": "bdev_nvme_attach_controller" 00:26:43.795 } 00:26:43.795 EOF 00:26:43.795 )") 00:26:43.795 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:26:43.795 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:43.795 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:43.795 { 00:26:43.795 "params": { 00:26:43.795 "name": "Nvme$subsystem", 00:26:43.795 "trtype": "$TEST_TRANSPORT", 00:26:43.795 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:43.795 "adrfam": "ipv4", 00:26:43.795 "trsvcid": "$NVMF_PORT", 00:26:43.795 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:43.795 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:43.795 "hdgst": ${hdgst:-false}, 00:26:43.795 "ddgst": ${ddgst:-false} 00:26:43.795 }, 00:26:43.795 "method": "bdev_nvme_attach_controller" 00:26:43.796 } 00:26:43.796 EOF 00:26:43.796 )") 00:26:43.796 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:26:43.796 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:43.796 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:43.796 { 00:26:43.796 "params": { 00:26:43.796 "name": "Nvme$subsystem", 00:26:43.796 "trtype": "$TEST_TRANSPORT", 00:26:43.796 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:43.796 "adrfam": "ipv4", 00:26:43.796 "trsvcid": "$NVMF_PORT", 00:26:43.796 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:43.796 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:43.796 "hdgst": ${hdgst:-false}, 00:26:43.796 "ddgst": ${ddgst:-false} 00:26:43.796 }, 00:26:43.796 "method": "bdev_nvme_attach_controller" 00:26:43.796 } 00:26:43.796 EOF 00:26:43.796 )") 00:26:43.796 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:26:43.796 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:43.796 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:43.796 { 00:26:43.796 "params": { 00:26:43.796 "name": "Nvme$subsystem", 00:26:43.796 "trtype": "$TEST_TRANSPORT", 00:26:43.796 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:43.796 "adrfam": "ipv4", 00:26:43.796 "trsvcid": "$NVMF_PORT", 00:26:43.796 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:43.796 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:43.796 "hdgst": ${hdgst:-false}, 00:26:43.796 "ddgst": ${ddgst:-false} 00:26:43.796 }, 00:26:43.796 "method": "bdev_nvme_attach_controller" 00:26:43.796 } 00:26:43.796 EOF 00:26:43.796 )") 00:26:43.796 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:26:43.796 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:43.796 18:37:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:43.796 { 00:26:43.796 "params": { 00:26:43.796 "name": "Nvme$subsystem", 00:26:43.796 "trtype": "$TEST_TRANSPORT", 00:26:43.796 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:43.796 "adrfam": "ipv4", 00:26:43.796 "trsvcid": "$NVMF_PORT", 00:26:43.796 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:43.796 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:43.796 "hdgst": ${hdgst:-false}, 00:26:43.796 "ddgst": ${ddgst:-false} 00:26:43.796 }, 00:26:43.796 "method": "bdev_nvme_attach_controller" 00:26:43.796 } 00:26:43.796 EOF 00:26:43.796 )") 00:26:43.796 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:26:43.796 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:43.796 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:43.796 { 00:26:43.796 "params": { 00:26:43.796 "name": "Nvme$subsystem", 00:26:43.796 "trtype": "$TEST_TRANSPORT", 00:26:43.796 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:43.796 "adrfam": "ipv4", 00:26:43.796 "trsvcid": "$NVMF_PORT", 00:26:43.796 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:43.796 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:43.796 "hdgst": ${hdgst:-false}, 00:26:43.796 "ddgst": ${ddgst:-false} 00:26:43.796 }, 00:26:43.796 "method": "bdev_nvme_attach_controller" 00:26:43.796 } 00:26:43.796 EOF 00:26:43.796 )") 00:26:43.796 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:26:43.796 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:43.796 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:43.796 { 00:26:43.796 "params": { 00:26:43.796 "name": "Nvme$subsystem", 00:26:43.796 "trtype": "$TEST_TRANSPORT", 00:26:43.796 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:43.796 "adrfam": "ipv4", 00:26:43.796 "trsvcid": "$NVMF_PORT", 00:26:43.796 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:43.796 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:43.796 "hdgst": ${hdgst:-false}, 00:26:43.796 "ddgst": ${ddgst:-false} 00:26:43.796 }, 00:26:43.796 "method": "bdev_nvme_attach_controller" 00:26:43.796 } 00:26:43.796 EOF 00:26:43.796 )") 00:26:43.796 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:26:43.796 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # jq . 
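The nvmf/common.sh@558-@584 trace above is gen_nvmf_target_json assembling the --json configuration handed to bdevperf: one heredoc fragment per requested subsystem, with $subsystem, $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and $NVMF_PORT expanded at run time, after which the fragments are comma-joined (IFS=,) and the resolved result, printed immediately below, is pretty-printed by jq. A distilled sketch of that pattern follows; the full --json document that bdevperf consumes wraps these entries in a larger bdev-subsystem config that the trace does not echo, so it is omitted here, and the three environment defaults exist only to make the sketch self-contained:

TEST_TRANSPORT=${TEST_TRANSPORT:-tcp}
NVMF_FIRST_TARGET_IP=${NVMF_FIRST_TARGET_IP:-10.0.0.2}
NVMF_PORT=${NVMF_PORT:-4420}

config=()
for subsystem in "${@:-1}"; do
    # each fragment becomes one bdev_nvme_attach_controller entry for cnode$subsystem
    config+=("$(
cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done

# comma-join the fragments; wrapped in [] here only so the sketch is valid JSON on its own
IFS=,
printf '[%s]\n' "${config[*]}" | jq .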
00:26:43.796 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@583 -- # IFS=, 00:26:43.796 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:26:43.796 "params": { 00:26:43.796 "name": "Nvme1", 00:26:43.796 "trtype": "tcp", 00:26:43.796 "traddr": "10.0.0.2", 00:26:43.796 "adrfam": "ipv4", 00:26:43.796 "trsvcid": "4420", 00:26:43.796 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:43.796 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:43.796 "hdgst": false, 00:26:43.796 "ddgst": false 00:26:43.796 }, 00:26:43.796 "method": "bdev_nvme_attach_controller" 00:26:43.796 },{ 00:26:43.796 "params": { 00:26:43.796 "name": "Nvme2", 00:26:43.796 "trtype": "tcp", 00:26:43.796 "traddr": "10.0.0.2", 00:26:43.796 "adrfam": "ipv4", 00:26:43.796 "trsvcid": "4420", 00:26:43.796 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:43.796 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:43.796 "hdgst": false, 00:26:43.796 "ddgst": false 00:26:43.796 }, 00:26:43.796 "method": "bdev_nvme_attach_controller" 00:26:43.796 },{ 00:26:43.796 "params": { 00:26:43.796 "name": "Nvme3", 00:26:43.796 "trtype": "tcp", 00:26:43.796 "traddr": "10.0.0.2", 00:26:43.796 "adrfam": "ipv4", 00:26:43.796 "trsvcid": "4420", 00:26:43.796 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:43.796 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:43.796 "hdgst": false, 00:26:43.796 "ddgst": false 00:26:43.796 }, 00:26:43.796 "method": "bdev_nvme_attach_controller" 00:26:43.796 },{ 00:26:43.796 "params": { 00:26:43.796 "name": "Nvme4", 00:26:43.796 "trtype": "tcp", 00:26:43.796 "traddr": "10.0.0.2", 00:26:43.796 "adrfam": "ipv4", 00:26:43.796 "trsvcid": "4420", 00:26:43.796 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:43.796 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:43.796 "hdgst": false, 00:26:43.796 "ddgst": false 00:26:43.796 }, 00:26:43.796 "method": "bdev_nvme_attach_controller" 00:26:43.796 },{ 00:26:43.796 "params": { 00:26:43.796 "name": "Nvme5", 00:26:43.796 "trtype": "tcp", 00:26:43.796 "traddr": "10.0.0.2", 00:26:43.796 "adrfam": "ipv4", 00:26:43.796 "trsvcid": "4420", 00:26:43.796 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:43.796 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:43.796 "hdgst": false, 00:26:43.796 "ddgst": false 00:26:43.796 }, 00:26:43.796 "method": "bdev_nvme_attach_controller" 00:26:43.796 },{ 00:26:43.796 "params": { 00:26:43.796 "name": "Nvme6", 00:26:43.796 "trtype": "tcp", 00:26:43.796 "traddr": "10.0.0.2", 00:26:43.796 "adrfam": "ipv4", 00:26:43.796 "trsvcid": "4420", 00:26:43.796 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:43.796 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:43.796 "hdgst": false, 00:26:43.796 "ddgst": false 00:26:43.796 }, 00:26:43.796 "method": "bdev_nvme_attach_controller" 00:26:43.796 },{ 00:26:43.796 "params": { 00:26:43.796 "name": "Nvme7", 00:26:43.796 "trtype": "tcp", 00:26:43.796 "traddr": "10.0.0.2", 00:26:43.796 "adrfam": "ipv4", 00:26:43.796 "trsvcid": "4420", 00:26:43.796 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:43.796 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:43.796 "hdgst": false, 00:26:43.796 "ddgst": false 00:26:43.796 }, 00:26:43.796 "method": "bdev_nvme_attach_controller" 00:26:43.796 },{ 00:26:43.796 "params": { 00:26:43.796 "name": "Nvme8", 00:26:43.796 "trtype": "tcp", 00:26:43.796 "traddr": "10.0.0.2", 00:26:43.796 "adrfam": "ipv4", 00:26:43.796 "trsvcid": "4420", 00:26:43.796 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:43.796 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:26:43.796 "hdgst": false, 00:26:43.796 "ddgst": false 00:26:43.796 }, 00:26:43.796 "method": "bdev_nvme_attach_controller" 00:26:43.796 },{ 00:26:43.796 "params": { 00:26:43.796 "name": "Nvme9", 00:26:43.796 "trtype": "tcp", 00:26:43.796 "traddr": "10.0.0.2", 00:26:43.796 "adrfam": "ipv4", 00:26:43.796 "trsvcid": "4420", 00:26:43.796 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:43.796 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:43.796 "hdgst": false, 00:26:43.796 "ddgst": false 00:26:43.796 }, 00:26:43.796 "method": "bdev_nvme_attach_controller" 00:26:43.796 },{ 00:26:43.796 "params": { 00:26:43.796 "name": "Nvme10", 00:26:43.796 "trtype": "tcp", 00:26:43.796 "traddr": "10.0.0.2", 00:26:43.796 "adrfam": "ipv4", 00:26:43.796 "trsvcid": "4420", 00:26:43.796 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:43.796 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:43.796 "hdgst": false, 00:26:43.796 "ddgst": false 00:26:43.796 }, 00:26:43.796 "method": "bdev_nvme_attach_controller" 00:26:43.796 }' 00:26:43.796 [2024-10-08 18:37:12.258417] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:26:43.796 [2024-10-08 18:37:12.258504] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1269226 ] 00:26:43.796 [2024-10-08 18:37:12.327468] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:44.055 [2024-10-08 18:37:12.440229] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:26:45.960 Running I/O for 10 seconds... 00:26:46.218 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:46.218 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:26:46.218 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:46.218 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.218 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:46.218 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.218 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:26:46.219 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:26:46.219 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:26:46.219 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:26:46.219 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:26:46.219 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:26:46.219 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:26:46.219 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:46.219 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:26:46.219 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.219 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:46.219 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.219 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:26:46.219 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:26:46.219 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:26:46.478 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:26:46.478 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:26:46.478 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:46.478 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:26:46.478 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.478 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:46.478 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.478 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=136 00:26:46.478 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 136 -ge 100 ']' 00:26:46.478 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:26:46.478 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:26:46.478 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:26:46.478 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 1269226 00:26:46.478 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 1269226 ']' 00:26:46.478 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 1269226 00:26:46.478 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:26:46.478 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:46.478 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1269226 00:26:46.478 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:46.478 18:37:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:46.478 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1269226' 00:26:46.478 killing process with pid 1269226 00:26:46.478 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 1269226 00:26:46.479 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 1269226 00:26:46.737 Received shutdown signal, test time was about 0.904122 seconds 00:26:46.737 00:26:46.737 Latency(us) 00:26:46.737 [2024-10-08T16:37:15.274Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:46.737 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:46.737 Verification LBA range: start 0x0 length 0x400 00:26:46.737 Nvme1n1 : 0.89 242.10 15.13 0.00 0.00 255319.79 6699.24 231463.44 00:26:46.737 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:46.737 Verification LBA range: start 0x0 length 0x400 00:26:46.737 Nvme2n1 : 0.87 220.67 13.79 0.00 0.00 279185.89 19418.07 264085.81 00:26:46.737 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:46.737 Verification LBA range: start 0x0 length 0x400 00:26:46.737 Nvme3n1 : 0.86 223.82 13.99 0.00 0.00 269051.83 17087.91 267192.70 00:26:46.737 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:46.737 Verification LBA range: start 0x0 length 0x400 00:26:46.737 Nvme4n1 : 0.85 224.94 14.06 0.00 0.00 260877.27 30874.74 260978.92 00:26:46.737 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:46.737 Verification LBA range: start 0x0 length 0x400 00:26:46.737 Nvme5n1 : 0.88 221.86 13.87 0.00 0.00 258279.08 2500.08 259425.47 00:26:46.737 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:46.737 Verification LBA range: start 0x0 length 0x400 00:26:46.737 Nvme6n1 : 0.89 214.80 13.43 0.00 0.00 261299.01 20194.80 271853.04 00:26:46.737 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:46.737 Verification LBA range: start 0x0 length 0x400 00:26:46.737 Nvme7n1 : 0.87 219.90 13.74 0.00 0.00 247941.50 36311.80 270299.59 00:26:46.737 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:46.737 Verification LBA range: start 0x0 length 0x400 00:26:46.737 Nvme8n1 : 0.89 216.63 13.54 0.00 0.00 245969.60 22136.60 250104.79 00:26:46.737 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:46.737 Verification LBA range: start 0x0 length 0x400 00:26:46.737 Nvme9n1 : 0.90 213.53 13.35 0.00 0.00 244293.78 19223.89 278066.82 00:26:46.737 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:46.737 Verification LBA range: start 0x0 length 0x400 00:26:46.737 Nvme10n1 : 0.90 212.55 13.28 0.00 0.00 239869.35 20680.25 295154.73 00:26:46.737 [2024-10-08T16:37:15.274Z] =================================================================================================================== 00:26:46.737 [2024-10-08T16:37:15.274Z] Total : 2210.81 138.18 0.00 0.00 256202.46 2500.08 295154.73 00:26:46.997 18:37:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:26:47.935 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@115 -- # kill -0 1269045 00:26:47.935 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:26:47.935 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:26:47.935 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:47.935 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:47.935 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:26:47.935 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:47.935 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:26:47.935 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:47.935 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:26:47.935 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:47.935 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:47.935 rmmod nvme_tcp 00:26:47.935 rmmod nvme_fabrics 00:26:47.935 rmmod nvme_keyring 00:26:47.935 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:47.935 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:26:47.935 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:26:47.935 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@515 -- # '[' -n 1269045 ']' 00:26:47.935 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # killprocess 1269045 00:26:47.935 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 1269045 ']' 00:26:47.935 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 1269045 00:26:47.935 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:26:47.936 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:47.936 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1269045 00:26:48.195 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:48.195 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:48.195 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1269045' 00:26:48.195 killing process with pid 1269045 00:26:48.195 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@969 -- # kill 1269045 00:26:48.195 18:37:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 1269045 00:26:48.763 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:48.763 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:26:48.763 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:26:48.763 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:26:48.763 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # iptables-save 00:26:48.763 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:26:48.763 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # iptables-restore 00:26:48.763 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:48.763 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:48.763 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:48.763 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:48.763 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:51.295 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:51.295 00:26:51.295 real 0m8.403s 00:26:51.295 user 0m25.585s 00:26:51.295 sys 0m1.684s 00:26:51.295 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:51.295 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:51.295 ************************************ 00:26:51.295 END TEST nvmf_shutdown_tc2 00:26:51.295 ************************************ 00:26:51.295 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:26:51.295 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:51.295 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:51.295 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:51.295 ************************************ 00:26:51.295 START TEST nvmf_shutdown_tc3 00:26:51.295 ************************************ 00:26:51.295 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:26:51.295 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:26:51.295 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:26:51.295 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:26:51.295 18:37:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:51.295 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:51.295 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:51.295 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:51.295 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:51.295 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:51.295 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:51.295 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:51.295 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:26:51.295 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:26:51.295 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:51.295 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:51.295 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:26:51.295 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:51.295 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:51.295 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:51.295 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:51.295 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:51.295 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:26:51.295 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:51.295 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:26:51.295 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:26:51.295 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:26:51.295 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:26:51.295 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:26:51.295 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:26:51.295 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:51.295 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:51.295 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:51.295 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:51.295 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:51.295 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:51.295 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:51.295 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:51.295 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:51.295 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:51.295 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:51.295 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:51.295 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:51.295 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:51.295 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:51.295 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:51.295 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:51.295 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:51.295 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:26:51.296 Found 0000:84:00.0 (0x8086 - 0x159b) 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:51.296 18:37:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:26:51.296 Found 0000:84:00.1 (0x8086 - 0x159b) 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:26:51.296 Found net devices under 0000:84:00.0: cvl_0_0 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:51.296 18:37:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:26:51.296 Found net devices under 0000:84:00.1: cvl_0_1 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # is_hw=yes 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:51.296 18:37:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:51.296 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:51.296 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.289 ms 00:26:51.296 00:26:51.296 --- 10.0.0.2 ping statistics --- 00:26:51.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:51.296 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:51.296 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:51.296 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:26:51.296 00:26:51.296 --- 10.0.0.1 ping statistics --- 00:26:51.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:51.296 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # return 0 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # nvmfpid=1270132 00:26:51.296 18:37:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # waitforlisten 1270132 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 1270132 ']' 00:26:51.296 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:51.297 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:51.297 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:51.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:51.297 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:51.297 18:37:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:51.297 [2024-10-08 18:37:19.569822] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:26:51.297 [2024-10-08 18:37:19.569916] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:51.297 [2024-10-08 18:37:19.681943] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:51.555 [2024-10-08 18:37:19.909079] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:51.555 [2024-10-08 18:37:19.909196] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:51.555 [2024-10-08 18:37:19.909232] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:51.555 [2024-10-08 18:37:19.909261] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:51.555 [2024-10-08 18:37:19.909302] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
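Editor's note: the nvmf_tcp_init bring-up traced above condenses to the plain commands below. The interface names (cvl_0_0/cvl_0_1), the namespace name and the 10.0.0.x addresses are simply what this run detected, so treat this as a sketch of the pattern (run as root on a host with both ports present), not a fixed recipe; the harness additionally prefixes later target commands with the netns wrapper seen in the launch line above.

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                        # target-side port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator-side port stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP listener port
ping -c 1 10.0.0.2                                     # root namespace -> target namespace
ip netns exec "$NS" ping -c 1 10.0.0.1                 # target namespace -> root namespace
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &   # target runs inside the namespace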
00:26:51.555 [2024-10-08 18:37:19.913039] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:26:51.555 [2024-10-08 18:37:19.913146] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:26:51.555 [2024-10-08 18:37:19.913197] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:26:51.555 [2024-10-08 18:37:19.913201] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:26:51.555 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:51.555 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:26:51.555 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:51.555 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:51.555 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:51.555 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:51.555 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:51.555 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.555 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:51.555 [2024-10-08 18:37:20.087587] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:51.814 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.814 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:26:51.814 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:26:51.814 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:51.814 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:51.814 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:51.814 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:51.814 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:51.814 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:51.814 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:51.814 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:51.814 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:51.814 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:26:51.814 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:51.814 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:51.814 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:51.814 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:51.814 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:51.814 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:51.814 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:51.814 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:51.814 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:51.814 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:51.814 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:51.814 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:51.814 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:51.814 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:26:51.814 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.814 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:51.814 Malloc1 00:26:51.814 [2024-10-08 18:37:20.174071] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:51.814 Malloc2 00:26:51.814 Malloc3 00:26:51.814 Malloc4 00:26:51.814 Malloc5 00:26:52.073 Malloc6 00:26:52.073 Malloc7 00:26:52.073 Malloc8 00:26:52.073 Malloc9 00:26:52.073 Malloc10 00:26:52.332 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.332 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:26:52.332 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:52.332 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:52.332 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=1270316 00:26:52.332 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 1270316 /var/tmp/bdevperf.sock 00:26:52.332 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:26:52.332 18:37:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:52.332 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 1270316 ']' 00:26:52.332 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # config=() 00:26:52.332 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # local subsystem config 00:26:52.332 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:52.332 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:52.332 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:52.332 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:52.332 { 00:26:52.332 "params": { 00:26:52.332 "name": "Nvme$subsystem", 00:26:52.332 "trtype": "$TEST_TRANSPORT", 00:26:52.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:52.332 "adrfam": "ipv4", 00:26:52.332 "trsvcid": "$NVMF_PORT", 00:26:52.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:52.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:52.332 "hdgst": ${hdgst:-false}, 00:26:52.332 "ddgst": ${ddgst:-false} 00:26:52.332 }, 00:26:52.332 "method": "bdev_nvme_attach_controller" 00:26:52.332 } 00:26:52.332 EOF 00:26:52.332 )") 00:26:52.332 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:52.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
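Editor's note: the run of config+=( heredoc ) fragments above, and continuing below, is gen_nvmf_target_json assembling one bdev_nvme_attach_controller entry per subsystem for bdevperf to consume. A minimal sketch of that pattern follows; the outer "subsystems"/"bdev" wrapper is assumed from the usual SPDK --json config layout, and the address, port and digest values are hard-coded to what this run used.

gen_json_sketch() {    # illustrative stand-in for gen_nvmf_target_json
    local subsystem config=()
    for subsystem in "$@"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,    # join the per-subsystem fragments with commas
    jq . <<EOF
{ "subsystems": [ { "subsystem": "bdev", "config": [ ${config[*]} ] } ] }
EOF
}

gen_json_sketch 1 2 3    # emits three attach entries; the test passes 1 through 10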
00:26:52.332 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:52.332 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:52.332 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:26:52.332 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:52.332 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:52.332 { 00:26:52.332 "params": { 00:26:52.332 "name": "Nvme$subsystem", 00:26:52.332 "trtype": "$TEST_TRANSPORT", 00:26:52.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:52.332 "adrfam": "ipv4", 00:26:52.332 "trsvcid": "$NVMF_PORT", 00:26:52.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:52.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:52.332 "hdgst": ${hdgst:-false}, 00:26:52.332 "ddgst": ${ddgst:-false} 00:26:52.332 }, 00:26:52.332 "method": "bdev_nvme_attach_controller" 00:26:52.332 } 00:26:52.332 EOF 00:26:52.332 )") 00:26:52.332 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:26:52.332 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:52.332 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:52.332 { 00:26:52.332 "params": { 00:26:52.332 "name": "Nvme$subsystem", 00:26:52.332 "trtype": "$TEST_TRANSPORT", 00:26:52.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:52.332 "adrfam": "ipv4", 00:26:52.332 "trsvcid": "$NVMF_PORT", 00:26:52.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:52.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:52.332 "hdgst": ${hdgst:-false}, 00:26:52.332 "ddgst": ${ddgst:-false} 00:26:52.332 }, 00:26:52.332 "method": "bdev_nvme_attach_controller" 00:26:52.332 } 00:26:52.332 EOF 00:26:52.332 )") 00:26:52.332 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:26:52.332 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:52.332 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:52.332 { 00:26:52.332 "params": { 00:26:52.332 "name": "Nvme$subsystem", 00:26:52.332 "trtype": "$TEST_TRANSPORT", 00:26:52.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:52.332 "adrfam": "ipv4", 00:26:52.332 "trsvcid": "$NVMF_PORT", 00:26:52.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:52.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:52.332 "hdgst": ${hdgst:-false}, 00:26:52.332 "ddgst": ${ddgst:-false} 00:26:52.332 }, 00:26:52.332 "method": "bdev_nvme_attach_controller" 00:26:52.332 } 00:26:52.332 EOF 00:26:52.332 )") 00:26:52.332 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:26:52.333 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:52.333 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:52.333 { 00:26:52.333 "params": { 00:26:52.333 "name": "Nvme$subsystem", 00:26:52.333 "trtype": 
"$TEST_TRANSPORT", 00:26:52.333 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:52.333 "adrfam": "ipv4", 00:26:52.333 "trsvcid": "$NVMF_PORT", 00:26:52.333 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:52.333 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:52.333 "hdgst": ${hdgst:-false}, 00:26:52.333 "ddgst": ${ddgst:-false} 00:26:52.333 }, 00:26:52.333 "method": "bdev_nvme_attach_controller" 00:26:52.333 } 00:26:52.333 EOF 00:26:52.333 )") 00:26:52.333 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:26:52.333 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:52.333 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:52.333 { 00:26:52.333 "params": { 00:26:52.333 "name": "Nvme$subsystem", 00:26:52.333 "trtype": "$TEST_TRANSPORT", 00:26:52.333 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:52.333 "adrfam": "ipv4", 00:26:52.333 "trsvcid": "$NVMF_PORT", 00:26:52.333 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:52.333 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:52.333 "hdgst": ${hdgst:-false}, 00:26:52.333 "ddgst": ${ddgst:-false} 00:26:52.333 }, 00:26:52.333 "method": "bdev_nvme_attach_controller" 00:26:52.333 } 00:26:52.333 EOF 00:26:52.333 )") 00:26:52.333 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:26:52.333 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:52.333 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:52.333 { 00:26:52.333 "params": { 00:26:52.333 "name": "Nvme$subsystem", 00:26:52.333 "trtype": "$TEST_TRANSPORT", 00:26:52.333 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:52.333 "adrfam": "ipv4", 00:26:52.333 "trsvcid": "$NVMF_PORT", 00:26:52.333 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:52.333 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:52.333 "hdgst": ${hdgst:-false}, 00:26:52.333 "ddgst": ${ddgst:-false} 00:26:52.333 }, 00:26:52.333 "method": "bdev_nvme_attach_controller" 00:26:52.333 } 00:26:52.333 EOF 00:26:52.333 )") 00:26:52.333 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:26:52.333 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:52.333 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:52.333 { 00:26:52.333 "params": { 00:26:52.333 "name": "Nvme$subsystem", 00:26:52.333 "trtype": "$TEST_TRANSPORT", 00:26:52.333 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:52.333 "adrfam": "ipv4", 00:26:52.333 "trsvcid": "$NVMF_PORT", 00:26:52.333 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:52.333 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:52.333 "hdgst": ${hdgst:-false}, 00:26:52.333 "ddgst": ${ddgst:-false} 00:26:52.333 }, 00:26:52.333 "method": "bdev_nvme_attach_controller" 00:26:52.333 } 00:26:52.333 EOF 00:26:52.333 )") 00:26:52.333 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:26:52.333 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:52.333 18:37:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:52.333 { 00:26:52.333 "params": { 00:26:52.333 "name": "Nvme$subsystem", 00:26:52.333 "trtype": "$TEST_TRANSPORT", 00:26:52.333 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:52.333 "adrfam": "ipv4", 00:26:52.333 "trsvcid": "$NVMF_PORT", 00:26:52.333 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:52.333 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:52.333 "hdgst": ${hdgst:-false}, 00:26:52.333 "ddgst": ${ddgst:-false} 00:26:52.333 }, 00:26:52.333 "method": "bdev_nvme_attach_controller" 00:26:52.333 } 00:26:52.333 EOF 00:26:52.333 )") 00:26:52.333 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:26:52.333 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:52.333 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:52.333 { 00:26:52.333 "params": { 00:26:52.333 "name": "Nvme$subsystem", 00:26:52.333 "trtype": "$TEST_TRANSPORT", 00:26:52.333 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:52.333 "adrfam": "ipv4", 00:26:52.333 "trsvcid": "$NVMF_PORT", 00:26:52.333 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:52.333 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:52.333 "hdgst": ${hdgst:-false}, 00:26:52.333 "ddgst": ${ddgst:-false} 00:26:52.333 }, 00:26:52.333 "method": "bdev_nvme_attach_controller" 00:26:52.333 } 00:26:52.333 EOF 00:26:52.333 )") 00:26:52.333 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:26:52.333 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # jq . 
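Editor's note: downstream of the jq call, the rendered JSON (printed next) is handed to bdevperf on an anonymous fd, and the waitforio loop further below polls bdev_get_iostat until Nvme1n1 has recorded at least 100 reads. A condensed sketch of that launch-and-wait sequence, assuming the gen_json_sketch helper above and scripts/rpc.py in place of the harness's rpc_cmd wrapper:

SOCK=/var/tmp/bdevperf.sock
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Process substitution supplies the config as /dev/fd/NN, matching the --json /dev/fd/63 invocation above.
"$SPDK/build/examples/bdevperf" -r "$SOCK" --json <(gen_json_sketch {1..10}) \
    -q 64 -o 65536 -w verify -t 10 &
perfpid=$!

# The real test first runs waitforlisten on $SOCK; here we just wait for framework init.
"$SPDK/scripts/rpc.py" -s "$SOCK" framework_wait_init

# waitforio pattern: up to 10 polls, 0.25 s apart, until >= 100 reads are recorded.
for ((i = 10; i > 0; i--)); do
    reads=$("$SPDK/scripts/rpc.py" -s "$SOCK" bdev_get_iostat -b Nvme1n1 \
        | jq -r '.bdevs[0].num_read_ops')
    [[ $reads -ge 100 ]] && break
    sleep 0.25
done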
00:26:52.333 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@583 -- # IFS=, 00:26:52.333 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:26:52.333 "params": { 00:26:52.333 "name": "Nvme1", 00:26:52.333 "trtype": "tcp", 00:26:52.333 "traddr": "10.0.0.2", 00:26:52.333 "adrfam": "ipv4", 00:26:52.333 "trsvcid": "4420", 00:26:52.333 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:52.333 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:52.333 "hdgst": false, 00:26:52.333 "ddgst": false 00:26:52.333 }, 00:26:52.333 "method": "bdev_nvme_attach_controller" 00:26:52.333 },{ 00:26:52.333 "params": { 00:26:52.333 "name": "Nvme2", 00:26:52.333 "trtype": "tcp", 00:26:52.333 "traddr": "10.0.0.2", 00:26:52.333 "adrfam": "ipv4", 00:26:52.333 "trsvcid": "4420", 00:26:52.333 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:52.333 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:52.333 "hdgst": false, 00:26:52.333 "ddgst": false 00:26:52.333 }, 00:26:52.333 "method": "bdev_nvme_attach_controller" 00:26:52.333 },{ 00:26:52.333 "params": { 00:26:52.333 "name": "Nvme3", 00:26:52.333 "trtype": "tcp", 00:26:52.333 "traddr": "10.0.0.2", 00:26:52.333 "adrfam": "ipv4", 00:26:52.333 "trsvcid": "4420", 00:26:52.333 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:52.333 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:52.333 "hdgst": false, 00:26:52.333 "ddgst": false 00:26:52.333 }, 00:26:52.333 "method": "bdev_nvme_attach_controller" 00:26:52.333 },{ 00:26:52.333 "params": { 00:26:52.333 "name": "Nvme4", 00:26:52.333 "trtype": "tcp", 00:26:52.333 "traddr": "10.0.0.2", 00:26:52.333 "adrfam": "ipv4", 00:26:52.333 "trsvcid": "4420", 00:26:52.333 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:52.333 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:52.333 "hdgst": false, 00:26:52.333 "ddgst": false 00:26:52.333 }, 00:26:52.333 "method": "bdev_nvme_attach_controller" 00:26:52.333 },{ 00:26:52.333 "params": { 00:26:52.333 "name": "Nvme5", 00:26:52.333 "trtype": "tcp", 00:26:52.333 "traddr": "10.0.0.2", 00:26:52.333 "adrfam": "ipv4", 00:26:52.333 "trsvcid": "4420", 00:26:52.333 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:52.333 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:52.333 "hdgst": false, 00:26:52.333 "ddgst": false 00:26:52.333 }, 00:26:52.333 "method": "bdev_nvme_attach_controller" 00:26:52.333 },{ 00:26:52.333 "params": { 00:26:52.333 "name": "Nvme6", 00:26:52.333 "trtype": "tcp", 00:26:52.333 "traddr": "10.0.0.2", 00:26:52.333 "adrfam": "ipv4", 00:26:52.333 "trsvcid": "4420", 00:26:52.333 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:52.333 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:52.333 "hdgst": false, 00:26:52.333 "ddgst": false 00:26:52.333 }, 00:26:52.333 "method": "bdev_nvme_attach_controller" 00:26:52.333 },{ 00:26:52.333 "params": { 00:26:52.333 "name": "Nvme7", 00:26:52.333 "trtype": "tcp", 00:26:52.333 "traddr": "10.0.0.2", 00:26:52.333 "adrfam": "ipv4", 00:26:52.333 "trsvcid": "4420", 00:26:52.333 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:52.333 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:52.333 "hdgst": false, 00:26:52.333 "ddgst": false 00:26:52.333 }, 00:26:52.333 "method": "bdev_nvme_attach_controller" 00:26:52.333 },{ 00:26:52.333 "params": { 00:26:52.333 "name": "Nvme8", 00:26:52.333 "trtype": "tcp", 00:26:52.333 "traddr": "10.0.0.2", 00:26:52.333 "adrfam": "ipv4", 00:26:52.333 "trsvcid": "4420", 00:26:52.333 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:52.333 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:26:52.333 "hdgst": false, 00:26:52.333 "ddgst": false 00:26:52.333 }, 00:26:52.333 "method": "bdev_nvme_attach_controller" 00:26:52.333 },{ 00:26:52.333 "params": { 00:26:52.333 "name": "Nvme9", 00:26:52.333 "trtype": "tcp", 00:26:52.333 "traddr": "10.0.0.2", 00:26:52.333 "adrfam": "ipv4", 00:26:52.333 "trsvcid": "4420", 00:26:52.333 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:52.333 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:52.333 "hdgst": false, 00:26:52.333 "ddgst": false 00:26:52.333 }, 00:26:52.333 "method": "bdev_nvme_attach_controller" 00:26:52.333 },{ 00:26:52.333 "params": { 00:26:52.333 "name": "Nvme10", 00:26:52.333 "trtype": "tcp", 00:26:52.333 "traddr": "10.0.0.2", 00:26:52.333 "adrfam": "ipv4", 00:26:52.333 "trsvcid": "4420", 00:26:52.334 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:52.334 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:52.334 "hdgst": false, 00:26:52.334 "ddgst": false 00:26:52.334 }, 00:26:52.334 "method": "bdev_nvme_attach_controller" 00:26:52.334 }' 00:26:52.334 [2024-10-08 18:37:20.712318] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:26:52.334 [2024-10-08 18:37:20.712417] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1270316 ] 00:26:52.334 [2024-10-08 18:37:20.783784] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:52.592 [2024-10-08 18:37:20.896352] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:26:54.493 Running I/O for 10 seconds... 00:26:54.493 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:54.493 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:26:54.493 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:54.493 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.493 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:54.493 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.493 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:54.493 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:26:54.493 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:26:54.493 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:26:54.493 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:26:54.493 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:26:54.493 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:26:54.493 18:37:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:26:54.493 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:54.493 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:26:54.493 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.493 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:54.493 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.493 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:26:54.493 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:26:54.493 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:26:54.751 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:26:54.751 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:26:54.751 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:54.751 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:26:54.751 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.751 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:54.751 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.009 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=74 00:26:55.009 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 74 -ge 100 ']' 00:26:55.009 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:26:55.282 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:26:55.283 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:26:55.283 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:55.283 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:26:55.283 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.283 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:55.283 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.283 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # read_io_count=138 00:26:55.283 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 138 -ge 100 ']' 00:26:55.283 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:26:55.283 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:26:55.283 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:26:55.283 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 1270132 00:26:55.283 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 1270132 ']' 00:26:55.283 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 1270132 00:26:55.283 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:26:55.283 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:55.283 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1270132 00:26:55.283 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:55.283 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:55.283 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1270132' 00:26:55.283 killing process with pid 1270132 00:26:55.283 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 1270132 00:26:55.283 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 1270132 00:26:55.283 [2024-10-08 18:37:23.661187] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffd9a0 is same with the state(6) to be set 00:26:55.283 [2024-10-08 18:37:23.661282] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffd9a0 is same with the state(6) to be set 00:26:55.283 [2024-10-08 18:37:23.661315] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffd9a0 is same with the state(6) to be set 00:26:55.283 [2024-10-08 18:37:23.661328] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffd9a0 is same with the state(6) to be set 00:26:55.283 [2024-10-08 18:37:23.661340] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffd9a0 is same with the state(6) to be set 00:26:55.283 [2024-10-08 18:37:23.661352] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffd9a0 is same with the state(6) to be set 00:26:55.283 [2024-10-08 18:37:23.661364] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffd9a0 is same with the state(6) to be set 00:26:55.283 [2024-10-08 18:37:23.661376] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffd9a0 is same with the state(6) to be set 00:26:55.283 [2024-10-08 18:37:23.661388] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffd9a0 is same with 
the state(6) to be set 00:26:55.283 [2024-10-08 18:37:23.661400] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffd9a0 is same with the state(6) to be set 00:26:55.283 [2024-10-08 18:37:23.661412] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffd9a0 is same with the state(6) to be set 00:26:55.283 [2024-10-08 18:37:23.661424] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffd9a0 is same with the state(6) to be set 00:26:55.283 [2024-10-08 18:37:23.661435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffd9a0 is same with the state(6) to be set 00:26:55.283 [2024-10-08 18:37:23.661447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffd9a0 is same with the state(6) to be set 00:26:55.283 [2024-10-08 18:37:23.661460] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffd9a0 is same with the state(6) to be set 00:26:55.283 [2024-10-08 18:37:23.661472] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffd9a0 is same with the state(6) to be set 00:26:55.283 [2024-10-08 18:37:23.661484] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffd9a0 is same with the state(6) to be set 00:26:55.283 [2024-10-08 18:37:23.661496] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffd9a0 is same with the state(6) to be set 00:26:55.283 [2024-10-08 18:37:23.661508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffd9a0 is same with the state(6) to be set 00:26:55.283 [2024-10-08 18:37:23.661532] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffd9a0 is same with the state(6) to be set 00:26:55.283 [2024-10-08 18:37:23.661545] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffd9a0 is same with the state(6) to be set 00:26:55.283 [2024-10-08 18:37:23.661557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffd9a0 is same with the state(6) to be set 00:26:55.283 [2024-10-08 18:37:23.661568] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffd9a0 is same with the state(6) to be set 00:26:55.283 [2024-10-08 18:37:23.661580] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffd9a0 is same with the state(6) to be set 00:26:55.283 [2024-10-08 18:37:23.661591] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffd9a0 is same with the state(6) to be set 00:26:55.283 [2024-10-08 18:37:23.661603] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffd9a0 is same with the state(6) to be set 00:26:55.283 [2024-10-08 18:37:23.661615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffd9a0 is same with the state(6) to be set 00:26:55.283 [2024-10-08 18:37:23.661626] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffd9a0 is same with the state(6) to be set 00:26:55.283 [2024-10-08 18:37:23.661638] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffd9a0 is same with the state(6) to be set 00:26:55.283 [2024-10-08 18:37:23.661657] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffd9a0 is same with the state(6) to be set 00:26:55.283 [2024-10-08 18:37:23.661672] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0xffd9a0 is same with the state(6) to be set 00:26:55.283 [2024-10-08 18:37:23.661683] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffd9a0 is same with the state(6) to be set 00:26:55.283 [2024-10-08 18:37:23.661695] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffd9a0 is same with the state(6) to be set 00:26:55.283 [2024-10-08 18:37:23.661707] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffd9a0 is same with the state(6) to be set 00:26:55.283 [2024-10-08 18:37:23.661724] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffd9a0 is same with the state(6) to be set 00:26:55.283 [2024-10-08 18:37:23.661736] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffd9a0 is same with the state(6) to be set 00:26:55.283 [2024-10-08 18:37:23.663148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1000400 is same with the state(6) to be set 00:26:55.283 [2024-10-08 18:37:23.663192] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1000400 is same with the state(6) to be set 00:26:55.283 [2024-10-08 18:37:23.663209] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1000400 is same with the state(6) to be set 00:26:55.283 [2024-10-08 18:37:23.663221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1000400 is same with the state(6) to be set 00:26:55.283 [2024-10-08 18:37:23.663234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1000400 is same with the state(6) to be set 00:26:55.283 [2024-10-08 18:37:23.663246] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1000400 is same with the state(6) to be set 00:26:55.283 [2024-10-08 18:37:23.663259] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1000400 is same with the state(6) to be set 00:26:55.283 [2024-10-08 18:37:23.663271] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1000400 is same with the state(6) to be set 00:26:55.283 [2024-10-08 18:37:23.663283] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1000400 is same with the state(6) to be set 00:26:55.283 [2024-10-08 18:37:23.663295] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1000400 is same with the state(6) to be set 00:26:55.283 [2024-10-08 18:37:23.663313] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1000400 is same with the state(6) to be set 00:26:55.283 [2024-10-08 18:37:23.663325] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1000400 is same with the state(6) to be set 00:26:55.283 [2024-10-08 18:37:23.663338] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1000400 is same with the state(6) to be set 00:26:55.283 [2024-10-08 18:37:23.663349] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1000400 is same with the state(6) to be set 00:26:55.283 [2024-10-08 18:37:23.663361] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1000400 is same with the state(6) to be set 00:26:55.283 [2024-10-08 18:37:23.663373] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1000400 is same with the state(6) to be set 00:26:55.283 [2024-10-08 18:37:23.663386] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1000400 is same with the state(6) to be set 00:26:55.283 [2024-10-08 18:37:23.663398] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1000400 is same with the state(6) to be set 00:26:55.283 [2024-10-08 18:37:23.663410] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1000400 is same with the state(6) to be set 00:26:55.283 [2024-10-08 18:37:23.663422] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1000400 is same with the state(6) to be set 00:26:55.283 [2024-10-08 18:37:23.663434] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1000400 is same with the state(6) to be set 00:26:55.283 [2024-10-08 18:37:23.663446] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1000400 is same with the state(6) to be set 00:26:55.283 [2024-10-08 18:37:23.663459] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1000400 is same with the state(6) to be set 00:26:55.283 [2024-10-08 18:37:23.663471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1000400 is same with the state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.663483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1000400 is same with the state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.663495] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1000400 is same with the state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.663507] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1000400 is same with the state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.663519] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1000400 is same with the state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.663531] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1000400 is same with the state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.663542] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1000400 is same with the state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.663554] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1000400 is same with the state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.663567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1000400 is same with the state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.663578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1000400 is same with the state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.663590] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1000400 is same with the state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.663602] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1000400 is same with the state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.663614] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1000400 is same with the state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.663627] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1000400 is same with the state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.663643] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1000400 is same with the 
state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.663664] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1000400 is same with the state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.663677] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1000400 is same with the state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.663689] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1000400 is same with the state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.663706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1000400 is same with the state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.663718] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1000400 is same with the state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.663731] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1000400 is same with the state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.663742] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1000400 is same with the state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.663755] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1000400 is same with the state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.663767] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1000400 is same with the state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.663780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1000400 is same with the state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.663792] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1000400 is same with the state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.663804] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1000400 is same with the state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.663817] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1000400 is same with the state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.663828] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1000400 is same with the state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.663840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1000400 is same with the state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.663852] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1000400 is same with the state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.663864] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1000400 is same with the state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.663875] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1000400 is same with the state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.663887] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1000400 is same with the state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.663899] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1000400 is same with the state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.663910] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1000400 is same with the state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.663932] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1000400 is same with the state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.663945] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1000400 is same with the state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.663964] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1000400 is same with the state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.663975] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1000400 is same with the state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.665418] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffde70 is same with the state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.665447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffde70 is same with the state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.665461] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffde70 is same with the state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.665473] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffde70 is same with the state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.665484] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffde70 is same with the state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.665496] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffde70 is same with the state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.665509] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffde70 is same with the state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.665521] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffde70 is same with the state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.665533] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffde70 is same with the state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.665545] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffde70 is same with the state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.665556] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffde70 is same with the state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.665569] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffde70 is same with the state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.665580] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffde70 is same with the state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.665592] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffde70 is same with the state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.665604] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffde70 is same with the state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.665616] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffde70 is same with the state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.665628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffde70 is same with the state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.665639] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffde70 is same with the state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.665659] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffde70 is same with the state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.665673] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffde70 is same with the state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.665685] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffde70 is same with the state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.665705] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffde70 is same with the state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.665717] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffde70 is same with the state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.665729] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffde70 is same with the state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.665741] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffde70 is same with the state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.665753] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffde70 is same with the state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.665764] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffde70 is same with the state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.665776] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffde70 is same with the state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.665792] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffde70 is same with the state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.665805] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffde70 is same with the state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.665816] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffde70 is same with the state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.665828] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffde70 is same with the state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.665839] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffde70 is same with the state(6) to be set 00:26:55.284 [2024-10-08 18:37:23.665851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffde70 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.665862] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffde70 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.665874] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffde70 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.665885] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffde70 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.665897] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffde70 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.665915] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffde70 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.665926] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffde70 is same with the state(6) to be set 
00:26:55.285 [2024-10-08 18:37:23.665938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffde70 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.665949] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffde70 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.665961] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffde70 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.665976] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffde70 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.665989] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffde70 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.666001] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffde70 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.666013] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffde70 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.666025] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffde70 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.666036] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffde70 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.666049] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffde70 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.666061] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffde70 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.666073] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffde70 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.666084] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffde70 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.666097] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffde70 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.666109] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffde70 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.666121] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffde70 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.666137] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffde70 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.666149] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffde70 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.666169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffde70 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.666182] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffde70 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.666194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffde70 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.666207] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffde70 is 
same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.666219] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffde70 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.667805] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe340 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.667840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe340 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.667856] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe340 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.667869] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe340 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.667880] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe340 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.667892] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe340 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.667910] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe340 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.667921] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe340 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.667934] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe340 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.667946] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe340 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.667958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe340 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.667970] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe340 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.667982] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe340 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.667994] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe340 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.668006] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe340 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.668017] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe340 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.668029] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe340 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.668041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe340 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.668053] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe340 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.668065] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe340 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.668083] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xffe340 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.668096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe340 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.668107] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe340 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.668119] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe340 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.668131] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe340 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.668143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe340 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.668154] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe340 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.668166] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe340 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.668177] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe340 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.668189] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe340 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.668200] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe340 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.668212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe340 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.668224] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe340 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.668235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe340 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.668247] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe340 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.668258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe340 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.668269] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe340 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.668280] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe340 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.668291] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe340 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.668303] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe340 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.668314] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe340 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.668326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe340 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.668338] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe340 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.668350] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe340 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.668362] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe340 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.668373] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe340 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.668385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe340 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.668406] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe340 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.668419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe340 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.668431] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe340 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.668443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe340 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.668455] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe340 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.668466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe340 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.668478] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe340 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.668489] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe340 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.668501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe340 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.668513] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe340 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.668524] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe340 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.668536] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe340 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.668560] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe340 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.668573] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe340 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.668585] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe340 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.669803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:55.285 [2024-10-08 18:37:23.669844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:26:55.285 [2024-10-08 18:37:23.669862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:55.285 [2024-10-08 18:37:23.669876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.285 [2024-10-08 18:37:23.669890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:55.285 [2024-10-08 18:37:23.669903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.285 [2024-10-08 18:37:23.669917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:55.285 [2024-10-08 18:37:23.669930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.285 [2024-10-08 18:37:23.669943] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c0960 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.669998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:55.285 [2024-10-08 18:37:23.670019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.285 [2024-10-08 18:37:23.670034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:55.285 [2024-10-08 18:37:23.670056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.285 [2024-10-08 18:37:23.670071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:55.285 [2024-10-08 18:37:23.670084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.285 [2024-10-08 18:37:23.670097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:55.285 [2024-10-08 18:37:23.670109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.285 [2024-10-08 18:37:23.670122] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x193f3d0 is same with the state(6) to be set 00:26:55.285 [2024-10-08 18:37:23.670173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:55.285 [2024-10-08 18:37:23.670194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.285 [2024-10-08 18:37:23.670209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:55.286 [2024-10-08 18:37:23.670222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.286 [2024-10-08 18:37:23.670236] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:55.286 [2024-10-08 18:37:23.670249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.286 [2024-10-08 18:37:23.670263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:55.286 [2024-10-08 18:37:23.670276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.286 [2024-10-08 18:37:23.670288] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14bf1e0 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.670361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:55.286 [2024-10-08 18:37:23.670383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.286 [2024-10-08 18:37:23.670398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:55.286 [2024-10-08 18:37:23.670411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.286 [2024-10-08 18:37:23.670425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:55.286 [2024-10-08 18:37:23.670437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.286 [2024-10-08 18:37:23.670451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:55.286 [2024-10-08 18:37:23.670463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.286 [2024-10-08 18:37:23.670475] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c9930 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.670518] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe830 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.670557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe830 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.670572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe830 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.670573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:55.286 [2024-10-08 18:37:23.670585] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe830 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.670594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.286 [2024-10-08 18:37:23.670597] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe830 is same with the state(6) to be set 00:26:55.286 [2024-10-08 
18:37:23.670610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:55.286 [2024-10-08 18:37:23.670611] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe830 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.670625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.286 [2024-10-08 18:37:23.670626] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe830 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.670641] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe830 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.670641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:55.286 [2024-10-08 18:37:23.670666] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe830 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.670667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.286 [2024-10-08 18:37:23.670682] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe830 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.670684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:55.286 [2024-10-08 18:37:23.670695] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe830 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.670697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.286 [2024-10-08 18:37:23.670707] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe830 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.670710] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c9db0 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.670720] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe830 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.670732] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe830 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.670744] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe830 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.670756] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe830 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.670768] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe830 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.670779] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe830 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.670791] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe830 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.670808] tcp.c:1773:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0xffe830 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.670820] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe830 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.670832] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe830 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.670844] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe830 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.670856] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe830 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.670868] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe830 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.670881] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe830 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.670893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe830 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.670905] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe830 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.670916] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe830 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.670928] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe830 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.670940] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe830 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.670952] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe830 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.670970] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe830 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.670983] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe830 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.670994] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe830 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.671006] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe830 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.671018] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe830 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.671030] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe830 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.671042] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe830 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.671053] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe830 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.671065] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe830 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.671076] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe830 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.671088] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe830 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.671099] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe830 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.671111] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe830 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.671123] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe830 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.671138] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe830 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.671151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe830 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.671162] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe830 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.671174] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe830 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.671186] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe830 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.671198] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe830 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.671210] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe830 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.671221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe830 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.671233] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe830 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.671245] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffe830 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.672023] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffed00 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.672059] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffed00 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.672074] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffed00 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.672088] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffed00 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.672100] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffed00 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.672112] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffed00 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.672125] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffed00 is same with the state(6) to be set 
00:26:55.286 [2024-10-08 18:37:23.672137] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffed00 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.672149] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffed00 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.672160] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffed00 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.672172] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffed00 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.672183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffed00 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.672195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffed00 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.672207] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffed00 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.672218] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffed00 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.672230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffed00 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.672241] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffed00 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.672253] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffed00 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.672273] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffed00 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.672286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffed00 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.672298] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffed00 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.672309] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffed00 is same with the state(6) to be set 00:26:55.286 [2024-10-08 18:37:23.672321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffed00 is same with the state(6) to be set 00:26:55.287 [2024-10-08 18:37:23.672332] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffed00 is same with the state(6) to be set 00:26:55.287 [2024-10-08 18:37:23.672344] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffed00 is same with the state(6) to be set 00:26:55.287 [2024-10-08 18:37:23.672356] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffed00 is same with the state(6) to be set 00:26:55.287 [2024-10-08 18:37:23.672368] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffed00 is same with the state(6) to be set 00:26:55.287 [2024-10-08 18:37:23.672380] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffed00 is same with the state(6) to be set 00:26:55.287 [2024-10-08 18:37:23.672391] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffed00 is 
same with the state(6) to be set 00:26:55.287 [2024-10-08 18:37:23.672403] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffed00 is same with the state(6) to be set 00:26:55.287 [2024-10-08 18:37:23.672414] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffed00 is same with the state(6) to be set 00:26:55.287 [2024-10-08 18:37:23.672427] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffed00 is same with the state(6) to be set 00:26:55.287 [2024-10-08 18:37:23.672444] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffed00 is same with the state(6) to be set 00:26:55.287 [2024-10-08 18:37:23.672456] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffed00 is same with the state(6) to be set 00:26:55.287 [2024-10-08 18:37:23.672468] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffed00 is same with the state(6) to be set 00:26:55.287 [2024-10-08 18:37:23.672479] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffed00 is same with the state(6) to be set 00:26:55.287 [2024-10-08 18:37:23.672491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffed00 is same with the state(6) to be set 00:26:55.287 [2024-10-08 18:37:23.672509] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffed00 is same with the state(6) to be set 00:26:55.287 [2024-10-08 18:37:23.672521] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffed00 is same with the state(6) to be set 00:26:55.287 [2024-10-08 18:37:23.672532] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffed00 is same with the state(6) to be set 00:26:55.287 [2024-10-08 18:37:23.672544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffed00 is same with the state(6) to be set 00:26:55.287 [2024-10-08 18:37:23.672555] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffed00 is same with the state(6) to be set 00:26:55.287 [2024-10-08 18:37:23.672575] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffed00 is same with the state(6) to be set 00:26:55.287 [2024-10-08 18:37:23.672587] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffed00 is same with the state(6) to be set 00:26:55.287 [2024-10-08 18:37:23.672599] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffed00 is same with the state(6) to be set 00:26:55.287 [2024-10-08 18:37:23.672615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffed00 is same with the state(6) to be set 00:26:55.287 [2024-10-08 18:37:23.672627] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffed00 is same with the state(6) to be set 00:26:55.287 [2024-10-08 18:37:23.672639] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffed00 is same with the state(6) to be set 00:26:55.287 [2024-10-08 18:37:23.672657] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffed00 is same with the state(6) to be set 00:26:55.287 [2024-10-08 18:37:23.672672] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffed00 is same with the state(6) to be set 00:26:55.287 [2024-10-08 18:37:23.672684] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xffed00 is same with the state(6) to be set 00:26:55.287 [2024-10-08 18:37:23.672696] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffed00 is same with the state(6) to be set 00:26:55.287 [2024-10-08 18:37:23.672708] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffed00 is same with the state(6) to be set 00:26:55.287 [2024-10-08 18:37:23.672719] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffed00 is same with the state(6) to be set 00:26:55.287 [2024-10-08 18:37:23.672731] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffed00 is same with the state(6) to be set 00:26:55.287 [2024-10-08 18:37:23.672742] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffed00 is same with the state(6) to be set 00:26:55.287 [2024-10-08 18:37:23.672754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffed00 is same with the state(6) to be set 00:26:55.287 [2024-10-08 18:37:23.672766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffed00 is same with the state(6) to be set 00:26:55.287 [2024-10-08 18:37:23.672777] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffed00 is same with the state(6) to be set 00:26:55.287 [2024-10-08 18:37:23.672789] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffed00 is same with the state(6) to be set 00:26:55.287 [2024-10-08 18:37:23.672848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.287 [2024-10-08 18:37:23.672875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.287 [2024-10-08 18:37:23.672901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.287 [2024-10-08 18:37:23.672916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.287 [2024-10-08 18:37:23.672933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.287 [2024-10-08 18:37:23.672947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.287 [2024-10-08 18:37:23.672962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.287 [2024-10-08 18:37:23.672975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.287 [2024-10-08 18:37:23.672990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.287 [2024-10-08 18:37:23.673004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.287 [2024-10-08 18:37:23.673019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.287 [2024-10-08 18:37:23.673044] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.287 [2024-10-08 18:37:23.673061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.287 [2024-10-08 18:37:23.673075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.287 [2024-10-08 18:37:23.673090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.287 [2024-10-08 18:37:23.673103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.287 [2024-10-08 18:37:23.673118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.287 [2024-10-08 18:37:23.673132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.287 [2024-10-08 18:37:23.673147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.287 [2024-10-08 18:37:23.673161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.287 [2024-10-08 18:37:23.673176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.287 [2024-10-08 18:37:23.673190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.287 [2024-10-08 18:37:23.673205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.287 [2024-10-08 18:37:23.673218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.287 [2024-10-08 18:37:23.673234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.287 [2024-10-08 18:37:23.673247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.287 [2024-10-08 18:37:23.673263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.287 [2024-10-08 18:37:23.673276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.287 [2024-10-08 18:37:23.673293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.287 [2024-10-08 18:37:23.673307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.287 [2024-10-08 18:37:23.673322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.287 [2024-10-08 18:37:23.673336] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.287 [2024-10-08 18:37:23.673351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.287 [2024-10-08 18:37:23.673365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.287 [2024-10-08 18:37:23.673381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.287 [2024-10-08 18:37:23.673394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.287 [2024-10-08 18:37:23.673415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.288 [2024-10-08 18:37:23.673430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.288 [2024-10-08 18:37:23.673445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.288 [2024-10-08 18:37:23.673460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.288 [2024-10-08 18:37:23.673475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.288 [2024-10-08 18:37:23.673488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.288 [2024-10-08 18:37:23.673503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.288 [2024-10-08 18:37:23.673517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.288 [2024-10-08 18:37:23.673533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.288 [2024-10-08 18:37:23.673546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.288 [2024-10-08 18:37:23.673561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.288 [2024-10-08 18:37:23.673574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.288 [2024-10-08 18:37:23.673589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.288 [2024-10-08 18:37:23.673611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.288 [2024-10-08 18:37:23.673626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.288 [2024-10-08 18:37:23.673640] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.288 [2024-10-08 18:37:23.673664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.288 [2024-10-08 18:37:23.673679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.288 [2024-10-08 18:37:23.673694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.288 [2024-10-08 18:37:23.673708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.288 [2024-10-08 18:37:23.673724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.288 [2024-10-08 18:37:23.673737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.288 [2024-10-08 18:37:23.673753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.288 [2024-10-08 18:37:23.673766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.288 [2024-10-08 18:37:23.673781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.288 [2024-10-08 18:37:23.673798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.288 [2024-10-08 18:37:23.673815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.288 [2024-10-08 18:37:23.673828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.288 [2024-10-08 18:37:23.673843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.288 [2024-10-08 18:37:23.673857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.288 [2024-10-08 18:37:23.673872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.288 [2024-10-08 18:37:23.673886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.288 [2024-10-08 18:37:23.673901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.288 [2024-10-08 18:37:23.673914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.288 [2024-10-08 18:37:23.673929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.288 [2024-10-08 18:37:23.673942] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.288 [2024-10-08 18:37:23.673958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.288 [2024-10-08 18:37:23.673971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.288 [2024-10-08 18:37:23.673986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.288 [2024-10-08 18:37:23.674000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.288 [2024-10-08 18:37:23.674014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.288 [2024-10-08 18:37:23.674018] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff1f0 is same with the state(6) to be set 00:26:55.288 [2024-10-08 18:37:23.674028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.288 [2024-10-08 18:37:23.674049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.288 [2024-10-08 18:37:23.674049] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff1f0 is same with the state(6) to be set 00:26:55.288 [2024-10-08 18:37:23.674065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.288 [2024-10-08 18:37:23.674066] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff1f0 is same with the state(6) to be set 00:26:55.288 [2024-10-08 18:37:23.674080] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff1f0 is same with the state(6) to be set 00:26:55.288 [2024-10-08 18:37:23.674080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.288 [2024-10-08 18:37:23.674096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.288 [2024-10-08 18:37:23.674111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.288 [2024-10-08 18:37:23.674128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.288 [2024-10-08 18:37:23.674144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.288 [2024-10-08 18:37:23.674158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.288 [2024-10-08 18:37:23.674173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.288 [2024-10-08 18:37:23.674186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.288 [2024-10-08 18:37:23.674201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.288 [2024-10-08 18:37:23.674215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.288 [2024-10-08 18:37:23.674240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.288 [2024-10-08 18:37:23.674254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.288 [2024-10-08 18:37:23.674269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.288 [2024-10-08 18:37:23.674283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.288 [2024-10-08 18:37:23.674298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.288 [2024-10-08 18:37:23.674311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.288 [2024-10-08 18:37:23.674326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.288 [2024-10-08 18:37:23.674346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.288 [2024-10-08 18:37:23.674362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.288 [2024-10-08 18:37:23.674377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.288 [2024-10-08 18:37:23.674392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.288 [2024-10-08 18:37:23.674405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.288 [2024-10-08 18:37:23.674420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.288 [2024-10-08 18:37:23.674433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.288 [2024-10-08 18:37:23.674448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.288 [2024-10-08 18:37:23.674462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.288 [2024-10-08 18:37:23.674477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.288 [2024-10-08 18:37:23.674490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.288 [2024-10-08 18:37:23.674509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.288 [2024-10-08 18:37:23.674524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.288 [2024-10-08 18:37:23.674539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.288 [2024-10-08 18:37:23.674552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.288 [2024-10-08 18:37:23.674567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.288 [2024-10-08 18:37:23.674580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.288 [2024-10-08 18:37:23.674597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.289 [2024-10-08 18:37:23.674610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.289 [2024-10-08 18:37:23.674626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.289 [2024-10-08 18:37:23.674639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.289 [2024-10-08 18:37:23.674661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.289 [2024-10-08 18:37:23.674676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.289 [2024-10-08 18:37:23.674700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.289 [2024-10-08 18:37:23.674715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.289 [2024-10-08 18:37:23.674730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.289 [2024-10-08 18:37:23.674743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.289 [2024-10-08 18:37:23.674758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.289 [2024-10-08 18:37:23.674771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.289 [2024-10-08 18:37:23.674786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.289 [2024-10-08 18:37:23.674800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:55.289 [2024-10-08 18:37:23.674926] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x16cdeb0 was disconnected and freed. reset controller. 00:26:55.289 [2024-10-08 18:37:23.675061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.289 [2024-10-08 18:37:23.675085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.289 [2024-10-08 18:37:23.675105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.289 [2024-10-08 18:37:23.675121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.289 [2024-10-08 18:37:23.675141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.289 [2024-10-08 18:37:23.675156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.289 [2024-10-08 18:37:23.675172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.289 [2024-10-08 18:37:23.675186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.289 [2024-10-08 18:37:23.675202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.289 [2024-10-08 18:37:23.675216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.289 [2024-10-08 18:37:23.675231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.289 [2024-10-08 18:37:23.675245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.289 [2024-10-08 18:37:23.675260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.289 [2024-10-08 18:37:23.675274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.289 [2024-10-08 18:37:23.675290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.289 [2024-10-08 18:37:23.675303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.289 [2024-10-08 18:37:23.675318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.289 [2024-10-08 18:37:23.675332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.289 [2024-10-08 18:37:23.675348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.289 
[2024-10-08 18:37:23.675362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.289 [2024-10-08 18:37:23.675384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.289 [2024-10-08 18:37:23.675385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff570 is same with t[2024-10-08 18:37:23.675397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(6) to be set 00:26:55.289 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.289 [2024-10-08 18:37:23.675414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:12[2024-10-08 18:37:23.675414] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff570 is same with t8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.289 he state(6) to be set 00:26:55.289 [2024-10-08 18:37:23.675430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-10-08 18:37:23.675430] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff570 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.289 he state(6) to be set 00:26:55.289 [2024-10-08 18:37:23.675445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff570 is same with the state(6) to be set 00:26:55.289 [2024-10-08 18:37:23.675447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.289 [2024-10-08 18:37:23.675457] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff570 is same with the state(6) to be set 00:26:55.289 [2024-10-08 18:37:23.675464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.289 [2024-10-08 18:37:23.675471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff570 is same with the state(6) to be set 00:26:55.289 [2024-10-08 18:37:23.675482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:12[2024-10-08 18:37:23.675485] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff570 is same with t8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.289 he state(6) to be set 00:26:55.289 [2024-10-08 18:37:23.675498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-10-08 18:37:23.675498] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff570 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.289 he state(6) to be set 00:26:55.289 [2024-10-08 18:37:23.675515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff570 is same with the state(6) to be set 00:26:55.289 [2024-10-08 18:37:23.675516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.289 [2024-10-08 18:37:23.675528] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff570 is same with the state(6) to be set 00:26:55.289 [2024-10-08 18:37:23.675531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
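The status printed in every one of these completions decodes to the same thing: the pair in parentheses is status-code-type/status-code, and (00/08) is the generic-status "command aborted due to SQ deletion" that SPDK renders as "ABORTED - SQ DELETION". A minimal stand-alone decoder covering just the status seen in this log (an illustrative helper, not SPDK source) would look like:

#include <stdint.h>
#include <stdio.h>

static const char *decode_nvme_status(uint8_t sct, uint8_t sc)
{
    /* SCT 0x0 = generic command status; SC 0x08 = command aborted due to SQ deletion,
     * which is what the log above prints as "ABORTED - SQ DELETION (00/08)". */
    if (sct == 0x00 && sc == 0x08) {
        return "ABORTED - SQ DELETION";
    }
    return "other status (see the NVMe base specification status tables)";
}

int main(void)
{
    printf("(00/08) -> %s\n", decode_nvme_status(0x00, 0x08));
    return 0;
}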
00:26:55.289 [2024-10-08 18:37:23.675540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff570 is same with the state(6) to be set 00:26:55.289 [2024-10-08 18:37:23.675547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.289 [2024-10-08 18:37:23.675553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff570 is same with the state(6) to be set 00:26:55.289 [2024-10-08 18:37:23.675561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.289 [2024-10-08 18:37:23.675565] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff570 is same with the state(6) to be set 00:26:55.289 [2024-10-08 18:37:23.675578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff570 is same with the state(6) to be set 00:26:55.289 [2024-10-08 18:37:23.675582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.289 [2024-10-08 18:37:23.675589] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff570 is same with the state(6) to be set 00:26:55.289 [2024-10-08 18:37:23.675597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.289 [2024-10-08 18:37:23.675602] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff570 is same with the state(6) to be set 00:26:55.289 [2024-10-08 18:37:23.675614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:12[2024-10-08 18:37:23.675615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff570 is same with t8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.289 he state(6) to be set 00:26:55.289 [2024-10-08 18:37:23.675630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-10-08 18:37:23.675630] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff570 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.289 he state(6) to be set 00:26:55.289 [2024-10-08 18:37:23.675645] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff570 is same with the state(6) to be set 00:26:55.289 [2024-10-08 18:37:23.675648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.289 [2024-10-08 18:37:23.675673] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff570 is same with t[2024-10-08 18:37:23.675675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(6) to be set 00:26:55.289 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.289 [2024-10-08 18:37:23.675699] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff570 is same with the state(6) to be set 00:26:55.289 [2024-10-08 18:37:23.675703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.289 [2024-10-08 18:37:23.675712] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff570 is same with the state(6) to be set 
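The aborted batch itself is regular: every command is 128 blocks long and the LBAs advance by 128 per command ID (cid:0 at lba:24576, cid:10 at lba:25856, cid:15 at lba:26496, up to cid:63 at lba:32640). A quick stand-alone check of that arithmetic, for illustration only:

#include <stdio.h>

int main(void)
{
    /* Reproduces the cid -> lba mapping of the aborted batch above: 128-block
     * commands starting at lba 24576, one per command ID from 0 through 63. */
    const unsigned int base_lba = 24576;
    const unsigned int len = 128;

    for (unsigned int cid = 0; cid <= 63; cid++) {
        printf("cid:%u lba:%u len:%u\n", cid, base_lba + cid * len, len);
    }
    return 0;
}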
00:26:55.289 [2024-10-08 18:37:23.675717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.289 [2024-10-08 18:37:23.675725] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff570 is same with the state(6) to be set 00:26:55.289 [2024-10-08 18:37:23.675732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.289 [2024-10-08 18:37:23.675739] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff570 is same with the state(6) to be set 00:26:55.289 [2024-10-08 18:37:23.675747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.289 [2024-10-08 18:37:23.675751] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff570 is same with the state(6) to be set 00:26:55.289 [2024-10-08 18:37:23.675762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:12[2024-10-08 18:37:23.675764] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff570 is same with t8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.289 he state(6) to be set 00:26:55.289 [2024-10-08 18:37:23.675777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-10-08 18:37:23.675778] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff570 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.289 he state(6) to be set 00:26:55.289 [2024-10-08 18:37:23.675792] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff570 is same with the state(6) to be set 00:26:55.290 [2024-10-08 18:37:23.675794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.290 [2024-10-08 18:37:23.675804] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff570 is same with the state(6) to be set 00:26:55.290 [2024-10-08 18:37:23.675808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.290 [2024-10-08 18:37:23.675816] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff570 is same with the state(6) to be set 00:26:55.290 [2024-10-08 18:37:23.675824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.290 [2024-10-08 18:37:23.675828] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff570 is same with the state(6) to be set 00:26:55.290 [2024-10-08 18:37:23.675838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.290 [2024-10-08 18:37:23.675841] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff570 is same with the state(6) to be set 00:26:55.290 [2024-10-08 18:37:23.675854] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff570 is same with t[2024-10-08 18:37:23.675853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:12he state(6) to be set 00:26:55.290 8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:55.290 [2024-10-08 18:37:23.675874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.290 [2024-10-08 18:37:23.675878] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff570 is same with the state(6) to be set 00:26:55.290 [2024-10-08 18:37:23.675890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:12[2024-10-08 18:37:23.675891] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff570 is same with t8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.290 he state(6) to be set 00:26:55.290 [2024-10-08 18:37:23.675906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-10-08 18:37:23.675907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff570 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.290 he state(6) to be set 00:26:55.290 [2024-10-08 18:37:23.675927] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff570 is same with the state(6) to be set 00:26:55.290 [2024-10-08 18:37:23.675930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.290 [2024-10-08 18:37:23.675940] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff570 is same with the state(6) to be set 00:26:55.290 [2024-10-08 18:37:23.675943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.290 [2024-10-08 18:37:23.675952] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff570 is same with the state(6) to be set 00:26:55.290 [2024-10-08 18:37:23.675959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.290 [2024-10-08 18:37:23.675965] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff570 is same with the state(6) to be set 00:26:55.290 [2024-10-08 18:37:23.675972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.290 [2024-10-08 18:37:23.675977] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff570 is same with the state(6) to be set 00:26:55.290 [2024-10-08 18:37:23.675992] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff570 is same with the state(6) to be set 00:26:55.290 [2024-10-08 18:37:23.675994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.290 [2024-10-08 18:37:23.676004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff570 is same with the state(6) to be set 00:26:55.290 [2024-10-08 18:37:23.676008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.290 [2024-10-08 18:37:23.676016] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff570 is same with the state(6) to be set 00:26:55.290 [2024-10-08 18:37:23.676024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:55.290 [2024-10-08 18:37:23.676029] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff570 is same with the state(6) to be set 00:26:55.290 [2024-10-08 18:37:23.676038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.290 [2024-10-08 18:37:23.676041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff570 is same with the state(6) to be set 00:26:55.290 [2024-10-08 18:37:23.676054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:12[2024-10-08 18:37:23.676054] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff570 is same with t8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.290 he state(6) to be set 00:26:55.290 [2024-10-08 18:37:23.676072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-10-08 18:37:23.676073] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff570 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.290 he state(6) to be set 00:26:55.290 [2024-10-08 18:37:23.676088] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff570 is same with the state(6) to be set 00:26:55.290 [2024-10-08 18:37:23.676090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.290 [2024-10-08 18:37:23.676100] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff570 is same with the state(6) to be set 00:26:55.290 [2024-10-08 18:37:23.676103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.290 [2024-10-08 18:37:23.676112] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff570 is same with the state(6) to be set 00:26:55.290 [2024-10-08 18:37:23.676124] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff570 is same with t[2024-10-08 18:37:23.676124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:12he state(6) to be set 00:26:55.290 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.290 [2024-10-08 18:37:23.676139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff570 is same with t[2024-10-08 18:37:23.676140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(6) to be set 00:26:55.290 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.290 [2024-10-08 18:37:23.676153] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff570 is same with the state(6) to be set 00:26:55.290 [2024-10-08 18:37:23.676157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.290 [2024-10-08 18:37:23.676166] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff570 is same with the state(6) to be set 00:26:55.290 [2024-10-08 18:37:23.676171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.290 [2024-10-08 18:37:23.676181] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff570 is same with the state(6) to be set 
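The interleaved tcp.c:1773 errors repeat because nvmf_tcp_qpair_set_recv_state is being asked to move the qpair into the receive state it already holds; the wording suggests a guard that logs and returns when the requested state equals the current one. A minimal sketch of that kind of guard, with hypothetical names and state values rather than the real SPDK definitions:

#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-in for the transport qpair; type, field, and state names are made up. */
enum demo_recv_state { DEMO_RECV_STATE_READY = 0, DEMO_RECV_STATE_SIX = 6 /* whatever state(6) is */ };

struct demo_tqpair {
    void *addr;
    enum demo_recv_state recv_state;
};

static void demo_set_recv_state(struct demo_tqpair *tqpair, enum demo_recv_state state)
{
    if (tqpair->recv_state == state) {
        /* Same wording as the log lines above. */
        fprintf(stderr, "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                tqpair->addr, (int)state);
        return;
    }
    tqpair->recv_state = state;
}

int main(void)
{
    struct demo_tqpair tqpair = { .addr = (void *)(uintptr_t)0xfff570,
                                  .recv_state = DEMO_RECV_STATE_SIX };

    /* Asking for the state the qpair is already in takes the error path. */
    demo_set_recv_state(&tqpair, DEMO_RECV_STATE_SIX);
    return 0;
}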
00:26:55.290 [2024-10-08 18:37:23.676186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.290 [2024-10-08 18:37:23.676193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff570 is same with the state(6) to be set 00:26:55.290 [2024-10-08 18:37:23.676200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.290 [2024-10-08 18:37:23.676206] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff570 is same with the state(6) to be set 00:26:55.290 [2024-10-08 18:37:23.676215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.290 [2024-10-08 18:37:23.676219] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff570 is same with the state(6) to be set 00:26:55.290 [2024-10-08 18:37:23.676229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.290 [2024-10-08 18:37:23.676232] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff570 is same with the state(6) to be set 00:26:55.290 [2024-10-08 18:37:23.676244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff570 is same with t[2024-10-08 18:37:23.676244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:12he state(6) to be set 00:26:55.290 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.290 [2024-10-08 18:37:23.676262] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff570 is same with the state(6) to be set 00:26:55.290 [2024-10-08 18:37:23.676265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.290 [2024-10-08 18:37:23.676275] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfff570 is same with the state(6) to be set 00:26:55.290 [2024-10-08 18:37:23.676281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.290 [2024-10-08 18:37:23.676295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.290 [2024-10-08 18:37:23.676310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.290 [2024-10-08 18:37:23.676323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.290 [2024-10-08 18:37:23.676339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.290 [2024-10-08 18:37:23.676352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.290 [2024-10-08 18:37:23.676369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.290 [2024-10-08 18:37:23.676384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.290 [2024-10-08 18:37:23.676399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.290 [2024-10-08 18:37:23.676413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.290 [2024-10-08 18:37:23.676429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.290 [2024-10-08 18:37:23.676442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.290 [2024-10-08 18:37:23.676458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.290 [2024-10-08 18:37:23.676472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.290 [2024-10-08 18:37:23.676488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.290 [2024-10-08 18:37:23.676501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.290 [2024-10-08 18:37:23.676516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.290 [2024-10-08 18:37:23.676530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.290 [2024-10-08 18:37:23.676545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.290 [2024-10-08 18:37:23.676558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.291 [2024-10-08 18:37:23.676574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.291 [2024-10-08 18:37:23.676588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.291 [2024-10-08 18:37:23.676607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.291 [2024-10-08 18:37:23.676621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.291 [2024-10-08 18:37:23.676637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.291 [2024-10-08 18:37:23.676656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.291 [2024-10-08 18:37:23.676673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.291 [2024-10-08 18:37:23.676687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.291 [2024-10-08 18:37:23.676710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.291 [2024-10-08 18:37:23.676723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.291 [2024-10-08 18:37:23.676738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.291 [2024-10-08 18:37:23.676752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.291 [2024-10-08 18:37:23.676767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.291 [2024-10-08 18:37:23.676780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.291 [2024-10-08 18:37:23.676794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.291 [2024-10-08 18:37:23.676808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.291 [2024-10-08 18:37:23.676823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.291 [2024-10-08 18:37:23.676836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.291 [2024-10-08 18:37:23.676851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.291 [2024-10-08 18:37:23.676864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.291 [2024-10-08 18:37:23.676879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.291 [2024-10-08 18:37:23.676892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.291 [2024-10-08 18:37:23.676907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.291 [2024-10-08 18:37:23.676929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.291 [2024-10-08 18:37:23.676944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.291 [2024-10-08 18:37:23.676957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.291 [2024-10-08 18:37:23.676972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.291 [2024-10-08 18:37:23.677001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:26:55.291 [2024-10-08 18:37:23.677018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.291 [2024-10-08 18:37:23.677031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.291 [2024-10-08 18:37:23.677046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.291 [2024-10-08 18:37:23.677060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.291 [2024-10-08 18:37:23.677075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.291 [2024-10-08 18:37:23.677089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.291 [2024-10-08 18:37:23.677205] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18bb7d0 was disconnected and freed. reset controller. 00:26:55.291 [2024-10-08 18:37:23.677351] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffa40 is same with the state(6) to be set 00:26:55.291 [2024-10-08 18:37:23.677376] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffa40 is same with the state(6) to be set 00:26:55.291 [2024-10-08 18:37:23.677390] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffa40 is same with the state(6) to be set 00:26:55.291 [2024-10-08 18:37:23.677402] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffa40 is same with the state(6) to be set 00:26:55.291 [2024-10-08 18:37:23.677414] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffa40 is same with the state(6) to be set 00:26:55.291 [2024-10-08 18:37:23.677426] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffa40 is same with the state(6) to be set 00:26:55.291 [2024-10-08 18:37:23.677438] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffa40 is same with the state(6) to be set 00:26:55.291 [2024-10-08 18:37:23.677450] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffa40 is same with the state(6) to be set 00:26:55.291 [2024-10-08 18:37:23.677461] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffa40 is same with the state(6) to be set 00:26:55.291 [2024-10-08 18:37:23.677473] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffa40 is same with the state(6) to be set 00:26:55.291 [2024-10-08 18:37:23.677484] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffa40 is same with the state(6) to be set 00:26:55.291 [2024-10-08 18:37:23.677496] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffa40 is same with the state(6) to be set 00:26:55.291 [2024-10-08 18:37:23.677507] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffa40 is same with the state(6) to be set 00:26:55.291 [2024-10-08 18:37:23.677519] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffa40 is same with the state(6) to be set 00:26:55.291 [2024-10-08 
18:37:23.677531] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffa40 is same with the state(6) to be set 00:26:55.291 [2024-10-08 18:37:23.677543] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffa40 is same with the state(6) to be set 00:26:55.291 [2024-10-08 18:37:23.677555] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffa40 is same with the state(6) to be set 00:26:55.291 [2024-10-08 18:37:23.677567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffa40 is same with the state(6) to be set 00:26:55.291 [2024-10-08 18:37:23.677568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:12[2024-10-08 18:37:23.677579] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffa40 is same with t8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.291 he state(6) to be set 00:26:55.291 [2024-10-08 18:37:23.677597] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffa40 is same with the state(6) to be set 00:26:55.291 [2024-10-08 18:37:23.677601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.291 [2024-10-08 18:37:23.677609] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffa40 is same with the state(6) to be set 00:26:55.291 [2024-10-08 18:37:23.677622] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffa40 is same with t[2024-10-08 18:37:23.677622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:12he state(6) to be set 00:26:55.291 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.291 [2024-10-08 18:37:23.677636] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffa40 is same with the state(6) to be set 00:26:55.291 [2024-10-08 18:37:23.677639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.291 [2024-10-08 18:37:23.677660] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffa40 is same with the state(6) to be set 00:26:55.291 [2024-10-08 18:37:23.677664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.291 [2024-10-08 18:37:23.677676] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffa40 is same with the state(6) to be set 00:26:55.291 [2024-10-08 18:37:23.677681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.291 [2024-10-08 18:37:23.677689] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffa40 is same with the state(6) to be set 00:26:55.291 [2024-10-08 18:37:23.677697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.291 [2024-10-08 18:37:23.677701] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffa40 is same with the state(6) to be set 00:26:55.291 [2024-10-08 18:37:23.677711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.291 [2024-10-08 18:37:23.677713] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffa40 is same with the state(6) to be set 00:26:55.291 [2024-10-08 18:37:23.677727] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffa40 is same with t[2024-10-08 18:37:23.677727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:1he state(6) to be set 00:26:55.291 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.291 [2024-10-08 18:37:23.677741] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffa40 is same with the state(6) to be set 00:26:55.291 [2024-10-08 18:37:23.677743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.291 [2024-10-08 18:37:23.677754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffa40 is same with the state(6) to be set 00:26:55.292 [2024-10-08 18:37:23.677760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.292 [2024-10-08 18:37:23.677766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffa40 is same with the state(6) to be set 00:26:55.292 [2024-10-08 18:37:23.677774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.292 [2024-10-08 18:37:23.677778] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffa40 is same with the state(6) to be set 00:26:55.292 [2024-10-08 18:37:23.677790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.292 [2024-10-08 18:37:23.677800] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffa40 is same with the state(6) to be set 00:26:55.292 [2024-10-08 18:37:23.677804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.292 [2024-10-08 18:37:23.677814] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffa40 is same with the state(6) to be set 00:26:55.292 [2024-10-08 18:37:23.677820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.292 [2024-10-08 18:37:23.677826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffa40 is same with the state(6) to be set 00:26:55.292 [2024-10-08 18:37:23.677834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.292 [2024-10-08 18:37:23.677838] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffa40 is same with the state(6) to be set 00:26:55.292 [2024-10-08 18:37:23.677849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:1[2024-10-08 18:37:23.677850] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffa40 is same with t28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.292 he state(6) to be set 00:26:55.292 [2024-10-08 18:37:23.677864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-10-08 18:37:23.677864] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffa40 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.292 he state(6) to be set 00:26:55.292 [2024-10-08 18:37:23.677880] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffa40 is same with the state(6) to be set 00:26:55.292 [2024-10-08 18:37:23.677882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.292 [2024-10-08 18:37:23.677893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffa40 is same with the state(6) to be set 00:26:55.292 [2024-10-08 18:37:23.677896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.292 [2024-10-08 18:37:23.677905] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffa40 is same with the state(6) to be set 00:26:55.292 [2024-10-08 18:37:23.677912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.292 [2024-10-08 18:37:23.677917] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffa40 is same with the state(6) to be set 00:26:55.292 [2024-10-08 18:37:23.677926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.292 [2024-10-08 18:37:23.677929] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffa40 is same with the state(6) to be set 00:26:55.292 [2024-10-08 18:37:23.677941] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffa40 is same with t[2024-10-08 18:37:23.677941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:1he state(6) to be set 00:26:55.292 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.292 [2024-10-08 18:37:23.677954] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffa40 is same with the state(6) to be set 00:26:55.292 [2024-10-08 18:37:23.677956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.292 [2024-10-08 18:37:23.677966] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffa40 is same with the state(6) to be set 00:26:55.292 [2024-10-08 18:37:23.677972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.292 [2024-10-08 18:37:23.677982] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffa40 is same with the state(6) to be set 00:26:55.292 [2024-10-08 18:37:23.677985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.292 [2024-10-08 18:37:23.677995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffa40 is same with the state(6) to be set 00:26:55.292 [2024-10-08 18:37:23.678001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.292 [2024-10-08 18:37:23.678007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffa40 is same with the state(6) to be 
set 00:26:55.292 [2024-10-08 18:37:23.678015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.292 [2024-10-08 18:37:23.678020] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffa40 is same with the state(6) to be set 00:26:55.292 [2024-10-08 18:37:23.678030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:1[2024-10-08 18:37:23.678032] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffa40 is same with t28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.292 he state(6) to be set 00:26:55.292 [2024-10-08 18:37:23.678046] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffa40 is same with t[2024-10-08 18:37:23.678046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(6) to be set 00:26:55.292 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.292 [2024-10-08 18:37:23.678060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffa40 is same with the state(6) to be set 00:26:55.292 [2024-10-08 18:37:23.678064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.292 [2024-10-08 18:37:23.678071] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffa40 is same with the state(6) to be set 00:26:55.292 [2024-10-08 18:37:23.678078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.292 [2024-10-08 18:37:23.678083] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffa40 is same with the state(6) to be set 00:26:55.292 [2024-10-08 18:37:23.678093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.292 [2024-10-08 18:37:23.678096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffa40 is same with the state(6) to be set 00:26:55.292 [2024-10-08 18:37:23.678107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.292 [2024-10-08 18:37:23.678109] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffa40 is same with the state(6) to be set 00:26:55.292 [2024-10-08 18:37:23.678122] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffa40 is same with t[2024-10-08 18:37:23.678122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:1he state(6) to be set 00:26:55.292 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.292 [2024-10-08 18:37:23.678136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffa40 is same with the state(6) to be set 00:26:55.292 [2024-10-08 18:37:23.678138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.292 [2024-10-08 18:37:23.678150] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffa40 is same with the state(6) to be set 00:26:55.292 [2024-10-08 18:37:23.678154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:55.292 [2024-10-08 18:37:23.678163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffa40 is same with the state(6) to be set 00:26:55.292 [2024-10-08 18:37:23.678171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.292 [2024-10-08 18:37:23.678175] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfffa40 is same with the state(6) to be set 00:26:55.292 [2024-10-08 18:37:23.678187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.292 [2024-10-08 18:37:23.678201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.292 [2024-10-08 18:37:23.678216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.292 [2024-10-08 18:37:23.678229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.292 [2024-10-08 18:37:23.678244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.292 [2024-10-08 18:37:23.678257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.292 [2024-10-08 18:37:23.678271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.292 [2024-10-08 18:37:23.678284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.292 [2024-10-08 18:37:23.678299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.292 [2024-10-08 18:37:23.678312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.292 [2024-10-08 18:37:23.678326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.292 [2024-10-08 18:37:23.678339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.292 [2024-10-08 18:37:23.678354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.292 [2024-10-08 18:37:23.678367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.292 [2024-10-08 18:37:23.678381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.292 [2024-10-08 18:37:23.678395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.292 [2024-10-08 18:37:23.678409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.292 [2024-10-08 
18:37:23.678422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.292 [2024-10-08 18:37:23.678437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.292 [2024-10-08 18:37:23.678451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.292 [2024-10-08 18:37:23.678465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.292 [2024-10-08 18:37:23.678478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.292 [2024-10-08 18:37:23.678497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.292 [2024-10-08 18:37:23.678511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.292 [2024-10-08 18:37:23.678526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.293 [2024-10-08 18:37:23.678540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.293 [2024-10-08 18:37:23.678554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.293 [2024-10-08 18:37:23.678568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.293 [2024-10-08 18:37:23.678583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.293 [2024-10-08 18:37:23.678595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.293 [2024-10-08 18:37:23.678610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.293 [2024-10-08 18:37:23.678624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.293 [2024-10-08 18:37:23.678638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.293 [2024-10-08 18:37:23.678658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.293 [2024-10-08 18:37:23.678675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.293 [2024-10-08 18:37:23.678688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.293 [2024-10-08 18:37:23.678703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.293 [2024-10-08 
18:37:23.678716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.293 [2024-10-08 18:37:23.678731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.293 [2024-10-08 18:37:23.678744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.293 [2024-10-08 18:37:23.678759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.293 [2024-10-08 18:37:23.678772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.293 [2024-10-08 18:37:23.678787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.293 [2024-10-08 18:37:23.678800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.293 [2024-10-08 18:37:23.678815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.293 [2024-10-08 18:37:23.678828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.293 [2024-10-08 18:37:23.678842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.293 [2024-10-08 18:37:23.678862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.293 [2024-10-08 18:37:23.678878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.293 [2024-10-08 18:37:23.678891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.293 [2024-10-08 18:37:23.678906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.293 [2024-10-08 18:37:23.678904] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffff10 is same with the state(6) to be set 00:26:55.293 [2024-10-08 18:37:23.678919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.293 [2024-10-08 18:37:23.678929] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffff10 is same with the state(6) to be set 00:26:55.293 [2024-10-08 18:37:23.678934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.293 [2024-10-08 18:37:23.678943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffff10 is same with the state(6) to be set 00:26:55.293 [2024-10-08 18:37:23.678948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.293 [2024-10-08 18:37:23.678954] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xffff10 is same with the state(6) to be set 00:26:55.293 [2024-10-08 18:37:23.678963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.293 [2024-10-08 18:37:23.678967] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffff10 is same with the state(6) to be set 00:26:55.293 [2024-10-08 18:37:23.678977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.293 [2024-10-08 18:37:23.678979] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffff10 is same with the state(6) to be set 00:26:55.293 [2024-10-08 18:37:23.678992] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffff10 is same with the state(6) to be set 00:26:55.293 
[2024-10-08 18:37:23.678998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.293 [2024-10-08 18:37:23.679004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffff10 is same with the state(6) to be set 00:26:55.293 [2024-10-08 18:37:23.679012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.293 [2024-10-08 18:37:23.679016] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffff10 is same with the state(6) to be set 00:26:55.293 [2024-10-08 18:37:23.679027] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffff10 is same with the state(6) to be set 00:26:55.293 [2024-10-08 18:37:23.679028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.293 
[2024-10-08 18:37:23.679041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffff10 is same with the state(6) to be set 00:26:55.293 [2024-10-08 18:37:23.679043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.293 [2024-10-08 18:37:23.679053] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffff10 is same with the state(6) to be set 00:26:55.293 [2024-10-08 18:37:23.679059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.293 [2024-10-08 18:37:23.679064] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffff10 is same with the state(6) to be set 00:26:55.293 [2024-10-08 18:37:23.679078] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffff10 is same with the state(6) to be set 00:26:55.293 [2024-10-08 18:37:23.679079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.293 
[2024-10-08 18:37:23.679092] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffff10 is same with the state(6) to be set 00:26:55.293 [2024-10-08 18:37:23.679096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.293 [2024-10-08 18:37:23.679105] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffff10 is same with the state(6) to be set 00:26:55.293 [2024-10-08 18:37:23.679110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.293 [2024-10-08 18:37:23.679117] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffff10 is same with the state(6) to be set 00:26:55.293 [2024-10-08 18:37:23.679125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.293 [2024-10-08 18:37:23.679129] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffff10 is same with the state(6) to be set 00:26:55.293 
[2024-10-08 18:37:23.679139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.293 [2024-10-08 18:37:23.679142] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffff10 is same with the state(6) to be set 00:26:55.293 [2024-10-08 18:37:23.679154] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffff10 is same with the state(6) to be set 00:26:55.293 [2024-10-08 18:37:23.679154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.293 [2024-10-08 18:37:23.679168] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffff10 is same with the state(6) to be set 00:26:55.293 [2024-10-08 18:37:23.679169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.293 
[2024-10-08 18:37:23.679180] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffff10 is same with the state(6) to be set 00:26:55.293 [2024-10-08 18:37:23.679186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.293 [2024-10-08 18:37:23.679192] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffff10 is same with the state(6) to be set 00:26:55.293 [2024-10-08 18:37:23.679199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.293 [2024-10-08 18:37:23.679204] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffff10 is same with the state(6) to be set 00:26:55.293 [2024-10-08 18:37:23.679215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.293 [2024-10-08 18:37:23.679216] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffff10 is same with the state(6) to be set 00:26:55.293 
[2024-10-08 18:37:23.679230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffff10 is same with the state(6) to be set 00:26:55.293 [2024-10-08 18:37:23.679230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.293 [2024-10-08 18:37:23.679245] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffff10 is same with the state(6) to be set 00:26:55.293 [2024-10-08 18:37:23.679248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.293 [2024-10-08 18:37:23.679263] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffff10 is same with the state(6) to be set 00:26:55.293 [2024-10-08 18:37:23.679264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.293 [2024-10-08 18:37:23.679278] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffff10 is same with the state(6) to be set 00:26:55.293 
[2024-10-08 18:37:23.679291] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffff10 is same with the state(6) to be set 00:26:55.293 [2024-10-08 18:37:23.679290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.293 [2024-10-08 18:37:23.679306] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffff10 is same with the state(6) to be set 00:26:55.293 [2024-10-08 18:37:23.679307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.293 [2024-10-08 18:37:23.679320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffff10 is same with the state(6) to be set 00:26:55.294 [2024-10-08 18:37:23.679324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.294 
[2024-10-08 18:37:23.679332] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffff10 is same with the state(6) to be set 00:26:55.294 [2024-10-08 18:37:23.679338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.294 [2024-10-08 18:37:23.679345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffff10 is same with the state(6) to be set 00:26:55.294 [2024-10-08 18:37:23.679354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.294 [2024-10-08 18:37:23.679358] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffff10 is same with the state(6) to be set 00:26:55.294 [2024-10-08 18:37:23.679367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.294 
[2024-10-08 18:37:23.679370] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffff10 is same with the state(6) to be set 00:26:55.294 [2024-10-08 18:37:23.679382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffff10 is same with the state(6) to be set 00:26:55.294 [2024-10-08 18:37:23.679383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.294 [2024-10-08 18:37:23.679394] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffff10 is same with the state(6) to be set 00:26:55.294 [2024-10-08 18:37:23.679398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.294 [2024-10-08 18:37:23.679406] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffff10 is same with the state(6) to be set 00:26:55.294 
[2024-10-08 18:37:23.679413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.294 [2024-10-08 18:37:23.679418] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffff10 is same with the state(6) to be set 00:26:55.294 [2024-10-08 18:37:23.679427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.294 [2024-10-08 18:37:23.679431] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffff10 is same with the state(6) to be set 00:26:55.294 [2024-10-08 18:37:23.679442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.294 [2024-10-08 18:37:23.679446] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffff10 is same with the state(6) to be set 00:26:55.294 
[2024-10-08 18:37:23.679455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.294 [2024-10-08 18:37:23.679459] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffff10 is same with the state(6) to be set 00:26:55.294 [2024-10-08 18:37:23.679471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffff10 is same with the state(6) to be set 00:26:55.294 [2024-10-08 18:37:23.679471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.294 [2024-10-08 18:37:23.679485] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffff10 is same with the state(6) to be set 00:26:55.294 [2024-10-08 18:37:23.679487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.294 
[2024-10-08 18:37:23.679497] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffff10 is same with the state(6) to be set 00:26:55.294 [2024-10-08 18:37:23.679503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.294 [2024-10-08 18:37:23.679509] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffff10 is same with the state(6) to be set 00:26:55.294 [2024-10-08 18:37:23.679517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.294 [2024-10-08 18:37:23.679521] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffff10 is same with the state(6) to be set 00:26:55.294 [2024-10-08 18:37:23.679534] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffff10 is same with the state(6) to be set 00:26:55.294 [2024-10-08 18:37:23.679545] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffff10 is same with the state(6) to be set 00:26:55.294 
[2024-10-08 18:37:23.679557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffff10 is same with the state(6) to be set 00:26:55.294 [2024-10-08 18:37:23.679554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:55.294 [2024-10-08 18:37:23.679571] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffff10 is same with the state(6) to be set 00:26:55.294 [2024-10-08 18:37:23.679583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffff10 is same with the state(6) to be set 00:26:55.294 [2024-10-08 18:37:23.679595] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffff10 is same with the state(6) to be set 00:26:55.294 [2024-10-08 18:37:23.679606] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffff10 is same with the state(6) to be set 00:26:55.294 [2024-10-08 18:37:23.679618] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffff10 is same with the state(6) to be set 00:26:55.294 
[2024-10-08 18:37:23.679622] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18cb5d0 was disconnected and freed. reset controller. 00:26:55.294 [2024-10-08 18:37:23.679630] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffff10 is same with the state(6) to be set 00:26:55.294 [2024-10-08 18:37:23.679643] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffff10 is same with the state(6) to be set 00:26:55.294 [2024-10-08 18:37:23.679665] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffff10 is same with the state(6) to be set 00:26:55.294 [2024-10-08 18:37:23.679678] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffff10 is same with the state(6) to be set 00:26:55.294 [2024-10-08 18:37:23.679694] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffff10 is same with the state(6) to be set 00:26:55.294 [2024-10-08 18:37:23.679706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffff10 is same with the state(6) to be set 00:26:55.294 [2024-10-08 18:37:23.679718] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffff10 is same with the state(6) to be set 00:26:55.294 
[2024-10-08 18:37:23.679958] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c0960 (9): Bad file descriptor 00:26:55.294 [2024-10-08 18:37:23.679995] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x193f3d0 (9): Bad file descriptor 00:26:55.294 [2024-10-08 18:37:23.680027] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14bf1e0 (9): Bad file descriptor 00:26:55.294 [2024-10-08 18:37:23.680054] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c9930 (9): Bad file descriptor 00:26:55.294 [2024-10-08 18:37:23.680104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:55.294 [2024-10-08 18:37:23.680125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.294 [2024-10-08 18:37:23.680140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:55.294 [2024-10-08 18:37:23.680152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.294 [2024-10-08 
18:37:23.680167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:55.294 [2024-10-08 18:37:23.680180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.294 [2024-10-08 18:37:23.680193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:55.294 [2024-10-08 18:37:23.680206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.294 [2024-10-08 18:37:23.680218] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19322c0 is same with the state(6) to be set 00:26:55.294 [2024-10-08 18:37:23.680266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:55.294 [2024-10-08 18:37:23.680287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.294 [2024-10-08 18:37:23.680301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:55.294 [2024-10-08 18:37:23.680314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.294 [2024-10-08 18:37:23.680327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:55.294 [2024-10-08 18:37:23.680340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.294 [2024-10-08 18:37:23.680353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:55.294 [2024-10-08 18:37:23.680366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.294 [2024-10-08 18:37:23.680378] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ed4a0 is same with the state(6) to be set 00:26:55.294 [2024-10-08 18:37:23.680424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:55.294 [2024-10-08 18:37:23.680449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.294 [2024-10-08 18:37:23.680464] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:55.294 [2024-10-08 18:37:23.680478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.294 [2024-10-08 18:37:23.680491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:55.294 [2024-10-08 18:37:23.680504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.294 [2024-10-08 18:37:23.680517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:55.294 [2024-10-08 18:37:23.680530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.294 [2024-10-08 18:37:23.680542] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14321e0 is same with the state(6) to be set 00:26:55.294 [2024-10-08 18:37:23.680580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:55.294 [2024-10-08 18:37:23.680599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.294 [2024-10-08 18:37:23.680613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:55.294 [2024-10-08 18:37:23.680626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.295 [2024-10-08 18:37:23.680639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:55.295 [2024-10-08 18:37:23.680660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.295 [2024-10-08 18:37:23.680675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:55.295 [2024-10-08 18:37:23.680688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.295 [2024-10-08 18:37:23.680701] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4b50 is same with the state(6) to be set 00:26:55.295 [2024-10-08 18:37:23.680747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:55.295 [2024-10-08 18:37:23.680767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.295 [2024-10-08 18:37:23.680782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:55.295 [2024-10-08 18:37:23.680795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.295 [2024-10-08 18:37:23.680808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:55.295 [2024-10-08 18:37:23.680820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.295 [2024-10-08 18:37:23.680833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:55.295 [2024-10-08 18:37:23.680845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.295 [2024-10-08 18:37:23.680862] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fa660 is same with the state(6) to be set 00:26:55.295 [2024-10-08 
18:37:23.680892] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c9db0 (9): Bad file descriptor 00:26:55.295 [2024-10-08 18:37:23.684873] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.295 [2024-10-08 18:37:23.684910] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:26:55.295 [2024-10-08 18:37:23.684929] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:26:55.295 [2024-10-08 18:37:23.684952] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14321e0 (9): Bad file descriptor 00:26:55.295 [2024-10-08 18:37:23.685508] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:55.295 [2024-10-08 18:37:23.685674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.295 [2024-10-08 18:37:23.685703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14c9db0 with addr=10.0.0.2, port=4420 00:26:55.295 [2024-10-08 18:37:23.685720] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c9db0 is same with the state(6) to be set 00:26:55.295 [2024-10-08 18:37:23.685818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.295 [2024-10-08 18:37:23.685844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14bf1e0 with addr=10.0.0.2, port=4420 00:26:55.295 [2024-10-08 18:37:23.685860] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14bf1e0 is same with the state(6) to be set 00:26:55.295 [2024-10-08 18:37:23.686211] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:55.295 [2024-10-08 18:37:23.686581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.295 [2024-10-08 18:37:23.686606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.295 [2024-10-08 18:37:23.686630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.295 [2024-10-08 18:37:23.686646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.295 [2024-10-08 18:37:23.686672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.295 [2024-10-08 18:37:23.686687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.295 [2024-10-08 18:37:23.686704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.295 [2024-10-08 18:37:23.686718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.295 [2024-10-08 18:37:23.686734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.295 [2024-10-08 18:37:23.686747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.295 [2024-10-08 18:37:23.686763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.295 [2024-10-08 18:37:23.686777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.295 [2024-10-08 18:37:23.686793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.295 [2024-10-08 18:37:23.686806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.295 [2024-10-08 18:37:23.686830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.295 [2024-10-08 18:37:23.686844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.295 [2024-10-08 18:37:23.686860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.295 [2024-10-08 18:37:23.686873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.295 [2024-10-08 18:37:23.686889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.295 [2024-10-08 18:37:23.686902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.295 [2024-10-08 18:37:23.686917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.295 [2024-10-08 18:37:23.686930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.295 [2024-10-08 18:37:23.686946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.295 [2024-10-08 18:37:23.686959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.295 [2024-10-08 18:37:23.686975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.295 [2024-10-08 18:37:23.686988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.295 [2024-10-08 18:37:23.687003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.295 [2024-10-08 18:37:23.687017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.295 [2024-10-08 18:37:23.687032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.295 [2024-10-08 18:37:23.687046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.295 [2024-10-08 18:37:23.687061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.295 [2024-10-08 18:37:23.687075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.295 [2024-10-08 18:37:23.687090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.295 [2024-10-08 18:37:23.687104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.295 [2024-10-08 18:37:23.687120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.295 [2024-10-08 18:37:23.687134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.295 [2024-10-08 18:37:23.687149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.295 [2024-10-08 18:37:23.687163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.295 [2024-10-08 18:37:23.687178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.295 [2024-10-08 18:37:23.687196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.295 [2024-10-08 18:37:23.687211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.295 [2024-10-08 18:37:23.687225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.296 [2024-10-08 18:37:23.687240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.296 [2024-10-08 18:37:23.687253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.296 [2024-10-08 18:37:23.687269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.296 [2024-10-08 18:37:23.687282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.296 [2024-10-08 18:37:23.687298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.296 [2024-10-08 18:37:23.687311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.296 [2024-10-08 18:37:23.687326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.296 [2024-10-08 18:37:23.687339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:55.296 [2024-10-08 18:37:23.687355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.296 [2024-10-08 18:37:23.687368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.296 [2024-10-08 18:37:23.687383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.296 [2024-10-08 18:37:23.687397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.296 [2024-10-08 18:37:23.687412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.296 [2024-10-08 18:37:23.687425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.296 [2024-10-08 18:37:23.687441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.296 [2024-10-08 18:37:23.687454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.296 [2024-10-08 18:37:23.687470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.296 [2024-10-08 18:37:23.687483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.296 [2024-10-08 18:37:23.687498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.296 [2024-10-08 18:37:23.687512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.296 [2024-10-08 18:37:23.687527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.296 [2024-10-08 18:37:23.687541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.296 [2024-10-08 18:37:23.687560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.296 [2024-10-08 18:37:23.687575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.296 [2024-10-08 18:37:23.687590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.296 [2024-10-08 18:37:23.687603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.296 [2024-10-08 18:37:23.687619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.296 [2024-10-08 18:37:23.687632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:55.296 [2024-10-08 18:37:23.687647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.296 [2024-10-08 18:37:23.687669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.296 [2024-10-08 18:37:23.687684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.296 [2024-10-08 18:37:23.687698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.296 [2024-10-08 18:37:23.687713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.296 [2024-10-08 18:37:23.687727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.296 [2024-10-08 18:37:23.687742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.296 [2024-10-08 18:37:23.687755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.296 [2024-10-08 18:37:23.687770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.296 [2024-10-08 18:37:23.687784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.296 [2024-10-08 18:37:23.687799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.296 [2024-10-08 18:37:23.687813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.296 [2024-10-08 18:37:23.687828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.296 [2024-10-08 18:37:23.687841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.296 [2024-10-08 18:37:23.687857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.296 [2024-10-08 18:37:23.687870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.296 [2024-10-08 18:37:23.687885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.296 [2024-10-08 18:37:23.687899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.296 [2024-10-08 18:37:23.687914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.296 [2024-10-08 18:37:23.687932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.296 [2024-10-08 
18:37:23.687948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.296 [2024-10-08 18:37:23.687961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.296 [2024-10-08 18:37:23.687976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.296 [2024-10-08 18:37:23.687990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.296 [2024-10-08 18:37:23.688005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.296 [2024-10-08 18:37:23.688019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.296 [2024-10-08 18:37:23.688035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.296 [2024-10-08 18:37:23.688048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.296 [2024-10-08 18:37:23.688063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.296 [2024-10-08 18:37:23.688077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.296 [2024-10-08 18:37:23.688092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.296 [2024-10-08 18:37:23.688106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.296 [2024-10-08 18:37:23.688121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.296 [2024-10-08 18:37:23.688134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.296 [2024-10-08 18:37:23.688149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.296 [2024-10-08 18:37:23.688162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.296 [2024-10-08 18:37:23.688177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.296 [2024-10-08 18:37:23.688190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.296 [2024-10-08 18:37:23.688206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.296 [2024-10-08 18:37:23.688219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.296 [2024-10-08 18:37:23.688235] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.296 [2024-10-08 18:37:23.688249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.296 [2024-10-08 18:37:23.688264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.296 [2024-10-08 18:37:23.688277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.296 [2024-10-08 18:37:23.688297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.296 [2024-10-08 18:37:23.688311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.296 [2024-10-08 18:37:23.688327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.296 [2024-10-08 18:37:23.688340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.296 [2024-10-08 18:37:23.688355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.296 [2024-10-08 18:37:23.688368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.296 [2024-10-08 18:37:23.688384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.296 [2024-10-08 18:37:23.688397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.296 [2024-10-08 18:37:23.688412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.297 [2024-10-08 18:37:23.688425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.297 [2024-10-08 18:37:23.688440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.297 [2024-10-08 18:37:23.688454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.297 [2024-10-08 18:37:23.688469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.297 [2024-10-08 18:37:23.688483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.297 [2024-10-08 18:37:23.688584] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18ca090 was disconnected and freed. reset controller. 
00:26:55.297 [2024-10-08 18:37:23.689229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.297 [2024-10-08 18:37:23.689258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14321e0 with addr=10.0.0.2, port=4420 00:26:55.297 [2024-10-08 18:37:23.689275] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14321e0 is same with the state(6) to be set 00:26:55.297 [2024-10-08 18:37:23.689296] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c9db0 (9): Bad file descriptor 00:26:55.297 [2024-10-08 18:37:23.689316] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14bf1e0 (9): Bad file descriptor 00:26:55.297 [2024-10-08 18:37:23.689437] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:55.297 [2024-10-08 18:37:23.690899] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:55.297 [2024-10-08 18:37:23.690978] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:55.297 [2024-10-08 18:37:23.691077] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:26:55.297 [2024-10-08 18:37:23.691112] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18ed4a0 (9): Bad file descriptor 00:26:55.297 [2024-10-08 18:37:23.691135] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14321e0 (9): Bad file descriptor 00:26:55.297 [2024-10-08 18:37:23.691153] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.297 [2024-10-08 18:37:23.691167] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.297 [2024-10-08 18:37:23.691189] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.297 [2024-10-08 18:37:23.691210] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:26:55.297 [2024-10-08 18:37:23.691225] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:26:55.297 [2024-10-08 18:37:23.691238] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
00:26:55.297 [2024-10-08 18:37:23.691290] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19322c0 (9): Bad file descriptor
00:26:55.297 [2024-10-08 18:37:23.691327] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4b50 (9): Bad file descriptor
00:26:55.297 [2024-10-08 18:37:23.691359] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18fa660 (9): Bad file descriptor
00:26:55.297 [2024-10-08 18:37:23.691438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.297 [2024-10-08 18:37:23.691461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 58 more identical READ / ABORTED - SQ DELETION (00/08) pairs: cid:5 through cid:62, lba 25216 through 32512 in steps of 128 ...]
00:26:55.298 [2024-10-08 18:37:23.693212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.298 [2024-10-08 18:37:23.693236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.298 [2024-10-08 18:37:23.693250] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c8b90 is same with the state(6) to be set
00:26:55.298 [2024-10-08 18:37:23.693351] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18c8b90 was disconnected and freed. reset controller.
00:26:55.298 [2024-10-08 18:37:23.693370] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:26:55.298 [2024-10-08 18:37:23.693504] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:26:55.298 [2024-10-08 18:37:23.693541] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:55.298 [2024-10-08 18:37:23.693559] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:55.298 [2024-10-08 18:37:23.693600] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:26:55.298 [2024-10-08 18:37:23.693618] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:26:55.298 [2024-10-08 18:37:23.693631] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
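On the completion status seen throughout the aborted run above: reading the "(00/08)" pair as NVMe status code type / status code (an assumption about SPDK's print format), ABORTED - SQ DELETION corresponds to status code type 0x0 (generic command status) and status code 0x08, Command Aborted due to SQ Deletion, i.e. every READ still outstanding on the queue pair was aborted when its submission queue was torn down during the controller reset. A small sketch decoding just the values that appear in this log:

/* status_decode.c - sketch: map the (sct/sc) pair printed above to text.
 * 0x0/0x08 = "Command Aborted due to SQ Deletion" per the NVMe base specification;
 * treating "(00/08)" as (status code type / status code) is an assumption about the log format. */
#include <stdio.h>

static const char *nvme_status_str(unsigned int sct, unsigned int sc)
{
    if (sct == 0x0) {            /* generic command status */
        switch (sc) {
        case 0x00: return "SUCCESS";
        case 0x08: return "ABORTED - SQ DELETION";
        default:   break;
        }
    }
    return "not decoded in this sketch";
}

int main(void)
{
    /* The pair printed on every aborted completion above. */
    printf("(%02x/%02x) -> %s\n", 0x0u, 0x8u, nvme_status_str(0x0, 0x8));
    return 0;
}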
00:26:55.298 [2024-10-08 18:37:23.693734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.298 [2024-10-08 18:37:23.693757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 62 more identical READ / ABORTED - SQ DELETION (00/08) pairs: cid:1 through cid:62, lba 16512 through 24320 in steps of 128 ...]
00:26:55.300 [2024-10-08 18:37:23.701434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.300 [2024-10-08 18:37:23.701448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.300 [2024-10-08 18:37:23.701464] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cf090 is same with the state(6) to be set
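A side observation on the aborted READ listings: within each run the commands form one contiguous sequential stream; every command reads len:128 blocks and each next cid's lba is exactly 128 higher (16384, 16512, 16640, ...). A short sketch of that arithmetic, assuming 512-byte logical blocks only for the byte-offset column (the block size is not stated in this part of the log):

/* lba_walk.c - sketch: reproduce the LBA progression of the aborted READs above. */
#include <stdio.h>

int main(void)
{
    const unsigned long long start_lba = 16384; /* first aborted READ of the run */
    const unsigned long long len_blocks = 128;  /* len:128 in every command */
    const unsigned long long block_size = 512;  /* assumption; not stated in the log */

    for (unsigned int cid = 0; cid < 4; cid++) {
        unsigned long long lba = start_lba + cid * len_blocks;
        printf("cid:%u lba:%llu offset:%llu KiB len:%llu KiB\n",
               cid, lba, lba * block_size / 1024, len_blocks * block_size / 1024);
    }
    /* Prints lba 16384, 16512, 16640, 16768 ..., matching the per-cid entries in the log. */
    return 0;
}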
00:26:55.300 [2024-10-08 18:37:23.704301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.300 [2024-10-08 18:37:23.704332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 59 more identical READ / ABORTED - SQ DELETION (00/08) pairs: cid:1 through cid:59, lba 16512 through 23936 in steps of 128 ...]
00:26:55.302 [2024-10-08 18:37:23.706168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.302 [2024-10-08 18:37:23.706182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.302 [2024-10-08 
18:37:23.706199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.302 [2024-10-08 18:37:23.706212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.302 [2024-10-08 18:37:23.706228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.302 [2024-10-08 18:37:23.706241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.302 [2024-10-08 18:37:23.706257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.302 [2024-10-08 18:37:23.706270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.302 [2024-10-08 18:37:23.706285] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d0a70 is same with the state(6) to be set 00:26:55.302 [2024-10-08 18:37:23.707491] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:55.302 [2024-10-08 18:37:23.707520] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:26:55.302 [2024-10-08 18:37:23.707542] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:26:55.302 [2024-10-08 18:37:23.707559] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:26:55.302 [2024-10-08 18:37:23.707746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.302 [2024-10-08 18:37:23.707776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ed4a0 with addr=10.0.0.2, port=4420 00:26:55.302 [2024-10-08 18:37:23.707793] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ed4a0 is same with the state(6) to be set 00:26:55.302 [2024-10-08 18:37:23.707891] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18ed4a0 (9): Bad file descriptor 00:26:55.302 [2024-10-08 18:37:23.708160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.302 [2024-10-08 18:37:23.708191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14c0960 with addr=10.0.0.2, port=4420 00:26:55.302 [2024-10-08 18:37:23.708208] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c0960 is same with the state(6) to be set 00:26:55.302 [2024-10-08 18:37:23.708325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.302 [2024-10-08 18:37:23.708356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14c9930 with addr=10.0.0.2, port=4420 00:26:55.302 [2024-10-08 18:37:23.708373] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c9930 is same with the state(6) to be set 00:26:55.302 [2024-10-08 18:37:23.708516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.302 [2024-10-08 18:37:23.708541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x193f3d0 with addr=10.0.0.2, port=4420 
00:26:55.302 [2024-10-08 18:37:23.708556] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x193f3d0 is same with the state(6) to be set 00:26:55.302 [2024-10-08 18:37:23.709171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.302 [2024-10-08 18:37:23.709214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.302 [2024-10-08 18:37:23.709237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.302 [2024-10-08 18:37:23.709252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.302 [2024-10-08 18:37:23.709269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.302 [2024-10-08 18:37:23.709283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.302 [2024-10-08 18:37:23.709298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.302 [2024-10-08 18:37:23.709312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.302 [2024-10-08 18:37:23.709326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.302 [2024-10-08 18:37:23.709340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.302 [2024-10-08 18:37:23.709355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.302 [2024-10-08 18:37:23.709369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.302 [2024-10-08 18:37:23.709384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.302 [2024-10-08 18:37:23.709397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.302 [2024-10-08 18:37:23.709412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.302 [2024-10-08 18:37:23.709425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.302 [2024-10-08 18:37:23.709441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.302 [2024-10-08 18:37:23.709454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.302 [2024-10-08 18:37:23.709470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.302 [2024-10-08 
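The four connect() failures above all report errno = 111, which on Linux is ECONNREFUSED: while the controllers on 10.0.0.2 are being reset, nothing is listening on TCP port 4420, so each qpair reconnect attempt is actively refused. A minimal standalone sketch (not SPDK code; the address and port are simply copied from the log) that reproduces and prints this kind of failure:

#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* Address and port taken from the failing qpairs in the log above. */
    struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
        /* With a reachable host but no listener on the port, errno is
         * ECONNREFUSED (111 on Linux), matching "connect() failed, errno = 111". */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}

ECONNREFUSED (a TCP RST from the peer) indicates the host was reachable but had no listener at that moment; an unreachable host would instead surface as a timeout or EHOSTUNREACH.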
[2024-10-08 18:37:23.709171 - 18:37:23.711081] nvme_qpair.c 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:16384-24448 (len:128, lba stepping by 128 per cid) each completed ABORTED - SQ DELETION (00/08) qid:1 sqhd:0000 p:0 m:0 dnr:0 (repetitive per-command print/completion records condensed)
00:26:55.303 [2024-10-08 18:37:23.711095] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ccb10 is same with the state(6) to be set
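Every aborted command above is printed with the status pair (00/08): Status Code Type 0x0 (generic command status) and Status Code 0x08, which the NVMe specification defines as Command Aborted due to SQ Deletion, the expected completion for I/O still outstanding when a submission queue is deleted during the controller resets. A small standalone sketch (independent of the SPDK print helpers named in the log) of how the 16-bit completion status breaks down into the sct/sc/p/m/dnr fields shown here:

#include <stdint.h>
#include <stdio.h>

/* Decode the NVMe completion status word (upper 16 bits of CQE dword 3):
 * bit 0 phase tag, bits 8:1 status code (SC), bits 11:9 status code type (SCT),
 * bits 13:12 command retry delay, bit 14 more, bit 15 do not retry. */
static void print_status(uint16_t status)
{
    unsigned p   = status & 0x1;
    unsigned sc  = (status >> 1) & 0xff;
    unsigned sct = (status >> 9) & 0x7;
    unsigned m   = (status >> 14) & 0x1;
    unsigned dnr = (status >> 15) & 0x1;

    printf("(%02x/%02x) p:%u m:%u dnr:%u%s\n", sct, sc, p, m, dnr,
           (sct == 0x0 && sc == 0x08) ? " -> ABORTED - SQ DELETION" : "");
}

int main(void)
{
    /* SCT=0x0, SC=0x08, all flag bits clear: the "(00/08) ... p:0 m:0 dnr:0"
     * repeated throughout this log. */
    print_status(0x08 << 1);
    return 0;
}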
[2024-10-08 18:37:23.712353 - 18:37:23.714161] nvme_qpair.c 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:3-63 nsid:1 lba:16768-24448 (len:128, lba stepping by 128 per cid) each completed ABORTED - SQ DELETION (00/08) qid:1 sqhd:0000 p:0 m:0 dnr:0 (repetitive per-command print/completion records condensed)
[2024-10-08 18:37:23.714176 - 18:37:23.714247] nvme_qpair.c 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: WRITE sqid:1 cid:0-2 nsid:1 lba:24576-24832 (len:128, lba stepping by 128 per cid) each completed ABORTED - SQ DELETION (00/08) qid:1 sqhd:0000 p:0 m:0 dnr:0 (repetitive per-command print/completion records condensed)
00:26:55.305 [2024-10-08 18:37:23.714260] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ce050 is same with the state(6) to be set
[2024-10-08 18:37:23.715507 - 18:37:23.716422] nvme_qpair.c 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0-30 nsid:1 lba:16384-20224 (len:128, lba stepping by 128 per cid) each completed ABORTED - SQ DELETION (00/08) qid:1 sqhd:0000 p:0 m:0 dnr:0 (repetitive per-command print/completion records condensed)
00:26:55.306 [2024-10-08 18:37:23.716437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.306 [2024-10-08 18:37:23.716454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.306 [2024-10-08 18:37:23.716470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.306 [2024-10-08 18:37:23.716484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.306 [2024-10-08 18:37:23.716499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.306 [2024-10-08 18:37:23.716513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.306 [2024-10-08 18:37:23.716528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.306 [2024-10-08 18:37:23.716541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.306 [2024-10-08 18:37:23.716557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.306 [2024-10-08 18:37:23.716570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.306 [2024-10-08 18:37:23.716585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.306 [2024-10-08 18:37:23.716599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.306 [2024-10-08 18:37:23.716615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.306 [2024-10-08 18:37:23.716628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.306 [2024-10-08 18:37:23.716643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.306 [2024-10-08 18:37:23.716663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.306 [2024-10-08 18:37:23.716679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.306 [2024-10-08 18:37:23.716703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.306 [2024-10-08 18:37:23.716719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.306 [2024-10-08 18:37:23.716732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.306 [2024-10-08 18:37:23.716748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:55.306 [2024-10-08 18:37:23.716761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.306 [2024-10-08 18:37:23.716777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.306 [2024-10-08 18:37:23.716790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.306 [2024-10-08 18:37:23.716805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.306 [2024-10-08 18:37:23.716819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.306 [2024-10-08 18:37:23.716834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.306 [2024-10-08 18:37:23.716852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.306 [2024-10-08 18:37:23.716868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.306 [2024-10-08 18:37:23.716881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.306 [2024-10-08 18:37:23.716896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.306 [2024-10-08 18:37:23.716910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.306 [2024-10-08 18:37:23.716925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.306 [2024-10-08 18:37:23.716939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.306 [2024-10-08 18:37:23.716958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.306 [2024-10-08 18:37:23.716972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.306 [2024-10-08 18:37:23.716987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.306 [2024-10-08 18:37:23.717000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.306 [2024-10-08 18:37:23.717016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.306 [2024-10-08 18:37:23.717030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.306 [2024-10-08 18:37:23.717045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:55.306 [2024-10-08 18:37:23.717058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.306 [2024-10-08 18:37:23.717074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.306 [2024-10-08 18:37:23.717087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.306 [2024-10-08 18:37:23.717102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.307 [2024-10-08 18:37:23.717116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.307 [2024-10-08 18:37:23.717131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.307 [2024-10-08 18:37:23.717144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.307 [2024-10-08 18:37:23.717159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.307 [2024-10-08 18:37:23.717172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.307 [2024-10-08 18:37:23.717188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.307 [2024-10-08 18:37:23.717201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.307 [2024-10-08 18:37:23.717220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.307 [2024-10-08 18:37:23.717234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.307 [2024-10-08 18:37:23.717249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.307 [2024-10-08 18:37:23.717263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.307 [2024-10-08 18:37:23.717278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.307 [2024-10-08 18:37:23.717291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.307 [2024-10-08 18:37:23.717306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.307 [2024-10-08 18:37:23.717319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.307 [2024-10-08 18:37:23.717334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.307 [2024-10-08 
18:37:23.717347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.307 [2024-10-08 18:37:23.717362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.307 [2024-10-08 18:37:23.717375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.307 [2024-10-08 18:37:23.717390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:55.307 [2024-10-08 18:37:23.717403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:55.307 [2024-10-08 18:37:23.717417] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cf590 is same with the state(6) to be set
00:26:55.307 [2024-10-08 18:37:23.718932] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:26:55.307 [2024-10-08 18:37:23.718963] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:55.307 [2024-10-08 18:37:23.718983] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:26:55.307 [2024-10-08 18:37:23.719001] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:26:55.307 [2024-10-08 18:37:23.719018] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:26:55.307 task offset: 25472 on job bdev=Nvme1n1 fails
00:26:55.307
00:26:55.307 Latency(us)
00:26:55.307 [2024-10-08T16:37:23.844Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:55.307 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:55.307 Job: Nvme1n1 ended in about 0.95 seconds with error
00:26:55.307 Verification LBA range: start 0x0 length 0x400
00:26:55.307 Nvme1n1 : 0.95 202.10 12.63 67.37 0.00 234854.78 30680.56 242337.56
00:26:55.307 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:55.307 Job: Nvme2n1 ended in about 0.97 seconds with error
00:26:55.307 Verification LBA range: start 0x0 length 0x400
00:26:55.307 Nvme2n1 : 0.97 131.87 8.24 65.94 0.00 313619.34 21554.06 267192.70
00:26:55.307 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:55.307 Job: Nvme3n1 ended in about 0.95 seconds with error
00:26:55.307 Verification LBA range: start 0x0 length 0x400
00:26:55.307 Nvme3n1 : 0.95 201.85 12.62 67.28 0.00 225531.92 11165.39 264085.81
00:26:55.307 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:55.307 Job: Nvme4n1 ended in about 0.97 seconds with error
00:26:55.307 Verification LBA range: start 0x0 length 0x400
00:26:55.307 Nvme4n1 : 0.97 201.68 12.60 61.74 0.00 225559.13 16699.54 270299.59
00:26:55.307 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:55.307 Job: Nvme5n1 ended in about 0.96 seconds with error
00:26:55.307 Verification LBA range: start 0x0 length 0x400
00:26:55.307 Nvme5n1 : 0.96 138.74 8.67 66.76 0.00 283036.91 6262.33 270299.59
00:26:55.307 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:55.307 Job: Nvme6n1 ended in about 0.95 seconds with error
00:26:55.307 Verification LBA range: start 0x0 length 0x400
00:26:55.307 Nvme6n1 : 0.95 201.55 12.60 67.18 0.00 211417.41 6213.78 268746.15
00:26:55.307 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:55.307 Job: Nvme7n1 ended in about 0.98 seconds with error
00:26:55.307 Verification LBA range: start 0x0 length 0x400
00:26:55.307 Nvme7n1 : 0.98 130.59 8.16 65.29 0.00 284940.01 20680.25 268746.15
00:26:55.307 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:55.307 Job: Nvme8n1 ended in about 0.98 seconds with error
00:26:55.307 Verification LBA range: start 0x0 length 0x400
00:26:55.307 Nvme8n1 : 0.98 133.22 8.33 65.08 0.00 275602.22 20000.62 248551.35
00:26:55.307 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:55.307 Job: Nvme9n1 ended in about 0.99 seconds with error
00:26:55.307 Verification LBA range: start 0x0 length 0x400
00:26:55.307 Nvme9n1 : 0.99 129.75 8.11 64.87 0.00 274914.67 21748.24 274959.93
00:26:55.307 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:55.307 Job: Nvme10n1 ended in about 0.98 seconds with error
00:26:55.307 Verification LBA range: start 0x0 length 0x400
00:26:55.307 Nvme10n1 : 0.98 131.23 8.20 65.61 0.00 265039.77 19418.07 293601.28
00:26:55.307 [2024-10-08T16:37:23.844Z] ===================================================================================================================
00:26:55.307 [2024-10-08T16:37:23.844Z] Total : 1602.56 100.16 657.13 0.00 255412.25 6213.78 293601.28
00:26:55.307 [2024-10-08 18:37:23.754100] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:26:55.307 [2024-10-08 18:37:23.754283] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c0960 (9): Bad file descriptor
00:26:55.307 [2024-10-08 18:37:23.754316] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c9930 (9): Bad file descriptor
00:26:55.307 [2024-10-08 18:37:23.754337] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x193f3d0 (9): Bad file descriptor
00:26:55.307 [2024-10-08 18:37:23.754354] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:26:55.307 [2024-10-08 18:37:23.754368] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:26:55.307 [2024-10-08 18:37:23.754384] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:26:55.307 [2024-10-08 18:37:23.754456] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:26:55.307 [2024-10-08 18:37:23.754487] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:26:55.307 [2024-10-08 18:37:23.754506] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:26:55.307 [2024-10-08 18:37:23.754525] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:26:55.307 [2024-10-08 18:37:23.754543] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
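The wall of READ completions above is the expected signature of this shutdown test: when the target tears down the I/O submission queue during shutdown, every command still outstanding on qid:1 is returned with status (00/08), i.e. status code type 0x0 (generic command status) and status code 0x08 (command aborted due to SQ deletion), and bdevperf counts each of them as a failed I/O, which is what the non-zero Fail/s column in the summary above reflects. A purely illustrative way to summarize how many of these aborts hit each queue from a saved copy of this console output (the file name console.log is an assumption, not something this job produces):

  # Count aborted completions per queue id in a saved copy of the console log (file name hypothetical).
  grep -o 'ABORTED - SQ DELETION (00/08) qid:[0-9]*' console.log | sort | uniq -c
  # In this run only qid:1 should appear, since only the I/O queue pair is being drained here.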
00:26:55.307 [2024-10-08 18:37:23.754703] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:26:55.307 [2024-10-08 18:37:23.754746] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:55.307 [2024-10-08 18:37:23.754994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.307 [2024-10-08 18:37:23.755028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14bf1e0 with addr=10.0.0.2, port=4420 00:26:55.307 [2024-10-08 18:37:23.755047] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14bf1e0 is same with the state(6) to be set 00:26:55.307 [2024-10-08 18:37:23.755161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.307 [2024-10-08 18:37:23.755187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14c9db0 with addr=10.0.0.2, port=4420 00:26:55.307 [2024-10-08 18:37:23.755203] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c9db0 is same with the state(6) to be set 00:26:55.307 [2024-10-08 18:37:23.755320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.307 [2024-10-08 18:37:23.755345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14321e0 with addr=10.0.0.2, port=4420 00:26:55.307 [2024-10-08 18:37:23.755361] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14321e0 is same with the state(6) to be set 00:26:55.307 [2024-10-08 18:37:23.755468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.307 [2024-10-08 18:37:23.755493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f4b50 with addr=10.0.0.2, port=4420 00:26:55.307 [2024-10-08 18:37:23.755509] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4b50 is same with the state(6) to be set 00:26:55.307 [2024-10-08 18:37:23.755704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.307 [2024-10-08 18:37:23.755730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fa660 with addr=10.0.0.2, port=4420 00:26:55.308 [2024-10-08 18:37:23.755746] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fa660 is same with the state(6) to be set 00:26:55.308 [2024-10-08 18:37:23.755760] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:26:55.308 [2024-10-08 18:37:23.755773] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:26:55.308 [2024-10-08 18:37:23.755786] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:26:55.308 [2024-10-08 18:37:23.755804] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:26:55.308 [2024-10-08 18:37:23.755819] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:26:55.308 [2024-10-08 18:37:23.755839] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 
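Each "resetting controller" notice above kicks off a reconnect attempt to the listener at 10.0.0.2 port 4420, and errno = 111 is ECONNREFUSED on Linux: the target side of this test has already been shut down, so every reconnect is refused and spdk_nvme_ctrlr_reconnect_poll_async eventually gives up and marks the controller failed. Outside of this scripted run, how long bdev_nvme keeps retrying before declaring a controller lost is tunable; the sketch below is illustrative only, the numbers are arbitrary, and the exact option spellings should be verified against scripts/rpc.py bdev_nvme_set_options -h for the SPDK revision under test:

  # Illustrative only (not part of this test): shorten or lengthen the reconnect window
  # before bdev_nvme gives up on a disconnected controller.
  scripts/rpc.py bdev_nvme_set_options --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 2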
00:26:55.308 [2024-10-08 18:37:23.755855] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:26:55.308 [2024-10-08 18:37:23.755868] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:26:55.308 [2024-10-08 18:37:23.755881] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:26:55.308 [2024-10-08 18:37:23.755936] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:55.308 [2024-10-08 18:37:23.755960] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:55.308 [2024-10-08 18:37:23.755979] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:55.308 [2024-10-08 18:37:23.756894] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:55.308 [2024-10-08 18:37:23.756923] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:55.308 [2024-10-08 18:37:23.756937] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:55.308 [2024-10-08 18:37:23.757108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.308 [2024-10-08 18:37:23.757135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19322c0 with addr=10.0.0.2, port=4420 00:26:55.308 [2024-10-08 18:37:23.757152] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19322c0 is same with the state(6) to be set 00:26:55.308 [2024-10-08 18:37:23.757170] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14bf1e0 (9): Bad file descriptor 00:26:55.308 [2024-10-08 18:37:23.757188] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c9db0 (9): Bad file descriptor 00:26:55.308 [2024-10-08 18:37:23.757206] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14321e0 (9): Bad file descriptor 00:26:55.308 [2024-10-08 18:37:23.757223] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4b50 (9): Bad file descriptor 00:26:55.308 [2024-10-08 18:37:23.757241] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18fa660 (9): Bad file descriptor 00:26:55.308 [2024-10-08 18:37:23.757310] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:26:55.308 [2024-10-08 18:37:23.757349] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19322c0 (9): Bad file descriptor 00:26:55.308 [2024-10-08 18:37:23.757368] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:26:55.308 [2024-10-08 18:37:23.757381] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:26:55.308 [2024-10-08 18:37:23.757394] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
00:26:55.308 [2024-10-08 18:37:23.757411] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.308 [2024-10-08 18:37:23.757424] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.308 [2024-10-08 18:37:23.757436] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.308 [2024-10-08 18:37:23.757453] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:26:55.308 [2024-10-08 18:37:23.757466] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:26:55.308 [2024-10-08 18:37:23.757478] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:26:55.308 [2024-10-08 18:37:23.757494] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:26:55.308 [2024-10-08 18:37:23.757506] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:26:55.308 [2024-10-08 18:37:23.757518] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:26:55.308 [2024-10-08 18:37:23.757534] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:26:55.308 [2024-10-08 18:37:23.757547] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:26:55.308 [2024-10-08 18:37:23.757559] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:26:55.308 [2024-10-08 18:37:23.757617] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:55.308 [2024-10-08 18:37:23.757636] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:55.308 [2024-10-08 18:37:23.757648] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:55.308 [2024-10-08 18:37:23.757671] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:55.308 [2024-10-08 18:37:23.757700] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:55.308 [2024-10-08 18:37:23.757843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.308 [2024-10-08 18:37:23.757869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ed4a0 with addr=10.0.0.2, port=4420 00:26:55.308 [2024-10-08 18:37:23.757885] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ed4a0 is same with the state(6) to be set 00:26:55.308 [2024-10-08 18:37:23.757899] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:26:55.308 [2024-10-08 18:37:23.757911] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:26:55.308 [2024-10-08 18:37:23.757923] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:26:55.308 [2024-10-08 18:37:23.757963] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
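The repeated "connect() failed, errno = 111" lines and the "(9): Bad file descriptor" flush errors all point the same way: nothing is accepting TCP connections on 10.0.0.2:4420 anymore, so the remaining controllers are failed one by one and each pending reset completes with an error. Two quick, illustrative checks on a host like this one (both assume kernel headers and the iproute2 ss tool are installed, and reuse the cvl_0_0_ns_spdk namespace that nvmftestinit creates):

  # errno 111 is ECONNREFUSED in the kernel's generic errno table:
  grep -w ECONNREFUSED /usr/include/asm-generic/errno.h
  # confirm whether any listener is still bound to the NVMe/TCP port inside the target namespace:
  ip netns exec cvl_0_0_ns_spdk ss -ltn | grep 4420 || echo 'no listener on 4420'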
00:26:55.308 [2024-10-08 18:37:23.757985] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18ed4a0 (9): Bad file descriptor 00:26:55.308 [2024-10-08 18:37:23.758026] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:26:55.308 [2024-10-08 18:37:23.758043] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:26:55.308 [2024-10-08 18:37:23.758056] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:26:55.308 [2024-10-08 18:37:23.758092] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:56.243 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:26:57.178 18:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 1270316 00:26:57.178 18:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:26:57.178 18:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1270316 00:26:57.178 18:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:26:57.178 18:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:57.178 18:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:26:57.178 18:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:57.178 18:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 1270316 00:26:57.178 18:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 00:26:57.178 18:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:57.178 18:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:26:57.178 18:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:26:57.178 18:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1 00:26:57.178 18:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:57.178 18:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:26:57.178 18:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:26:57.178 18:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:57.178 18:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:57.178 18:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:26:57.178 18:37:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:57.178 18:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:26:57.178 18:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:57.178 18:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:26:57.178 18:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:57.178 18:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:57.178 rmmod nvme_tcp 00:26:57.178 rmmod nvme_fabrics 00:26:57.178 rmmod nvme_keyring 00:26:57.178 18:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:57.178 18:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:26:57.178 18:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:26:57.178 18:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@515 -- # '[' -n 1270132 ']' 00:26:57.178 18:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # killprocess 1270132 00:26:57.178 18:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 1270132 ']' 00:26:57.179 18:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 1270132 00:26:57.179 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1270132) - No such process 00:26:57.179 18:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@977 -- # echo 'Process with pid 1270132 is not found' 00:26:57.179 Process with pid 1270132 is not found 00:26:57.179 18:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:57.179 18:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:26:57.179 18:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:26:57.179 18:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:26:57.179 18:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # iptables-save 00:26:57.179 18:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:26:57.179 18:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # iptables-restore 00:26:57.179 18:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:57.179 18:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:57.179 18:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:57.179 18:37:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:57.179 18:37:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:59.081 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:59.081 00:26:59.081 real 0m8.259s 00:26:59.081 user 0m20.737s 00:26:59.081 sys 0m1.733s 00:26:59.081 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:59.081 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:59.081 ************************************ 00:26:59.081 END TEST nvmf_shutdown_tc3 00:26:59.081 ************************************ 00:26:59.082 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:26:59.082 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:26:59.082 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:26:59.082 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:59.082 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:59.082 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:59.340 ************************************ 00:26:59.340 START TEST nvmf_shutdown_tc4 00:26:59.340 ************************************ 00:26:59.340 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc4 00:26:59.340 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:26:59.340 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:26:59.340 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:26:59.340 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:59.340 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:59.340 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:59.340 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:59.340 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:59.340 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:59.340 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:59.340 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:59.340 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:26:59.340 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:26:59.340 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@10 -- # set +x 00:26:59.340 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:59.340 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:26:59.340 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:59.340 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:59.340 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:59.340 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:59.340 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:59.340 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:26:59.340 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:59.340 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:26:59.340 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:26:59.340 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:26:59.340 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:26:59.340 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:26:59.340 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:26:59.341 Found 0000:84:00.0 (0x8086 - 0x159b) 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:26:59.341 Found 0000:84:00.1 (0x8086 - 0x159b) 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:26:59.341 Found net devices under 0000:84:00.0: cvl_0_0 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:26:59.341 Found net devices under 0000:84:00.1: cvl_0_1 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # is_hw=yes 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:59.341 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:59.341 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.318 ms 00:26:59.341 00:26:59.341 --- 10.0.0.2 ping statistics --- 00:26:59.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:59.341 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:26:59.341 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:59.342 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:59.342 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:26:59.342 00:26:59.342 --- 10.0.0.1 ping statistics --- 00:26:59.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:59.342 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:26:59.342 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:59.342 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@448 -- # return 0 00:26:59.342 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:59.342 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:59.342 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:59.342 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:59.342 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:59.342 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:59.342 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:59.342 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:26:59.342 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:59.342 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:59.342 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:59.342 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # nvmfpid=1271225 00:26:59.342 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:59.342 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # waitforlisten 1271225 00:26:59.342 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@831 -- # '[' -z 1271225 ']' 00:26:59.342 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:59.342 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:59.342 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:59.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
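Stripped of the xtrace noise, the nvmftestinit/nvmf_tcp_init sequence traced above for tc4 reduces to the following: the first e810 port (cvl_0_0) becomes the target side inside a private network namespace, the second port (cvl_0_1) stays in the root namespace as the initiator side, the firewall is opened for the NVMe/TCP port, connectivity is verified in both directions, and nvmf_tgt is then launched inside that namespace (the repeated "ip netns exec" prefix in the trace is collapsed to one here). This is only a readable restatement of commands already shown above; the interface names are the cvl_0_* ports this particular host enumerated:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                     # initiator side -> target address
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target namespace -> initiator address
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &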
00:26:59.342 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:59.342 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:59.636 [2024-10-08 18:37:27.928492] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:26:59.636 [2024-10-08 18:37:27.928596] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:59.636 [2024-10-08 18:37:28.036609] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:59.895 [2024-10-08 18:37:28.251629] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:59.895 [2024-10-08 18:37:28.251764] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:59.895 [2024-10-08 18:37:28.251801] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:59.895 [2024-10-08 18:37:28.251831] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:59.895 [2024-10-08 18:37:28.251858] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:59.895 [2024-10-08 18:37:28.255493] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:26:59.895 [2024-10-08 18:37:28.255596] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:26:59.895 [2024-10-08 18:37:28.255646] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:26:59.895 [2024-10-08 18:37:28.255656] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:27:01.272 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:01.272 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # return 0 00:27:01.272 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:27:01.272 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:01.272 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:27:01.272 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:01.272 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:01.272 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.272 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:27:01.272 [2024-10-08 18:37:29.432223] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:01.272 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.272 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:27:01.272 18:37:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:27:01.272 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:01.272 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:27:01.272 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:01.272 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:01.272 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:27:01.272 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:01.272 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:27:01.272 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:01.272 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:27:01.272 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:01.272 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:27:01.272 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:01.272 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:27:01.272 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:01.272 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:27:01.272 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:01.272 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:27:01.272 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:01.272 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:27:01.272 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:01.272 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:27:01.272 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:01.272 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:27:01.272 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:27:01.272 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.272 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:27:01.272 Malloc1 
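[editor's note] The run of shutdown.sh@28/29 "cat" calls above appends one group of RPCs per subsystem to rpcs.txt, and the single rpc_cmd at shutdown.sh@36 then replays the whole file against /var/tmp/spdk.sock; the Malloc1 line here (and Malloc2..Malloc10 below) is the bdev name returned as each group executes. A rough sketch of what one such group plausibly looks like, using the standard SPDK RPC names; the sizes, serial numbers, and NQN pattern are illustrative assumptions, and only the tcp/10.0.0.2/4420 listener values are taken from the log:

    # appended to rpcs.txt once per subsystem $i (1..10), then replayed with: rpc_cmd < rpcs.txt
    bdev_malloc_create -b Malloc$i 64 512                                        # illustrative size/block
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i               # illustrative NQN/serial
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420

The transport itself was created once, earlier in the trace, with rpc_cmd nvmf_create_transport -t tcp -o -u 8192 (shutdown.sh@21).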
00:27:01.272 [2024-10-08 18:37:29.525716] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:01.272 Malloc2 00:27:01.272 Malloc3 00:27:01.272 Malloc4 00:27:01.272 Malloc5 00:27:01.272 Malloc6 00:27:01.272 Malloc7 00:27:01.544 Malloc8 00:27:01.544 Malloc9 00:27:01.544 Malloc10 00:27:01.544 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.544 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:27:01.544 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:01.544 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:27:01.544 18:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=1271536 00:27:01.544 18:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:27:01.544 18:37:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:27:01.839 [2024-10-08 18:37:30.082108] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:27:07.114 18:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:07.114 18:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 1271225 00:27:07.114 18:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 1271225 ']' 00:27:07.114 18:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 1271225 00:27:07.114 18:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # uname 00:27:07.114 18:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:07.114 18:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1271225 00:27:07.114 18:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:07.114 18:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:07.114 18:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1271225' 00:27:07.114 killing process with pid 1271225 00:27:07.114 18:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@969 -- # kill 1271225 00:27:07.114 18:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@974 -- # wait 1271225 00:27:07.114 [2024-10-08 18:37:35.065823] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x158e570 is same with the state(6) to be set 00:27:07.114 [2024-10-08 18:37:35.065926] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x158e570 is same with the state(6) to be set 00:27:07.114 [2024-10-08 18:37:35.066497] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x158d700 is same with the state(6) to be set
[... identical recv-state errors repeated here for tqpairs 0x158e570, 0x158d700, 0x14fa5b0, 0x145f0e0, 0x145f5d0, 0x14fa0e0, 0x148f460, 0x1478dc0 and 0x145dd60 have been condensed; only the tqpair address differs between them ...]
[... initiator-side completions condensed: spdk_nvme_perf logs 'Write completed with error (sct=0, sc=8)' and 'starting I/O failed: -6' for every outstanding write as each qpair is torn down ...]
00:27:07.114 [2024-10-08 18:37:35.074771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:07.115 [2024-10-08 18:37:35.075986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:07.115 [2024-10-08 18:37:35.077243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.116 [2024-10-08 18:37:35.079142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:07.116 NVMe io qpair process completion error
00:27:07.116 [2024-10-08 18:37:35.080394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:07.116 [2024-10-08 18:37:35.081474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:07.117 [2024-10-08 18:37:35.082821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.117 [2024-10-08 18:37:35.085010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:07.117 NVMe io qpair process completion error
00:27:07.118 [2024-10-08 18:37:35.086484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:07.118 [2024-10-08 18:37:35.087752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:07.118 [2024-10-08 18:37:35.089102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:07.119 [2024-10-08 18:37:35.091904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:07.119 NVMe io qpair process completion error
[... the same pattern of aborted write completions continues for the remaining connections ...]
completed with error (sct=0, sc=8) 00:27:07.119 starting I/O failed: -6 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 starting I/O failed: -6 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 starting I/O failed: -6 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 starting I/O failed: -6 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 starting I/O failed: -6 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 starting I/O failed: -6 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 starting I/O failed: -6 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 starting I/O failed: -6 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 starting I/O failed: -6 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 starting I/O failed: -6 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 [2024-10-08 18:37:35.093263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 starting I/O failed: -6 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 starting I/O failed: -6 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 starting I/O failed: -6 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 starting I/O failed: -6 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 starting I/O failed: -6 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 starting I/O failed: -6 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 Write completed with error (sct=0, sc=8) 
00:27:07.119 starting I/O failed: -6 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 starting I/O failed: -6 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 starting I/O failed: -6 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 starting I/O failed: -6 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 starting I/O failed: -6 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 starting I/O failed: -6 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 starting I/O failed: -6 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 starting I/O failed: -6 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 starting I/O failed: -6 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 starting I/O failed: -6 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 starting I/O failed: -6 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 starting I/O failed: -6 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 starting I/O failed: -6 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 starting I/O failed: -6 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 [2024-10-08 18:37:35.094375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 starting I/O failed: -6 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 starting I/O failed: -6 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 starting I/O failed: -6 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 starting I/O failed: -6 00:27:07.119 Write completed with error (sct=0, sc=8) 00:27:07.119 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 
00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 [2024-10-08 18:37:35.095787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 
00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 
00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 [2024-10-08 18:37:35.098599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:07.120 NVMe io qpair process completion error 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 starting I/O failed: -6 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.120 Write completed with error (sct=0, sc=8) 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 
starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 [2024-10-08 18:37:35.099854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 [2024-10-08 18:37:35.101031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or 
address) on qpair id 2 00:27:07.121 starting I/O failed: -6 00:27:07.121 starting I/O failed: -6 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 
00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 [2024-10-08 18:37:35.102638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.121 Write completed with error (sct=0, sc=8) 00:27:07.121 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 
Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 [2024-10-08 18:37:35.106682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:07.122 NVMe io qpair process completion error 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting 
I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 [2024-10-08 18:37:35.108273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 
00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 [2024-10-08 18:37:35.109494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.122 Write completed with error (sct=0, sc=8) 00:27:07.122 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 
00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 [2024-10-08 18:37:35.110850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 
00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 
00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 [2024-10-08 18:37:35.114860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.123 NVMe io qpair process completion error 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 starting I/O failed: -6 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.123 Write completed with error (sct=0, sc=8) 00:27:07.124 Write completed with error (sct=0, sc=8) 00:27:07.124 starting I/O failed: -6 00:27:07.124 Write completed with error (sct=0, sc=8) 00:27:07.124 Write completed with error (sct=0, sc=8) 00:27:07.124 Write completed with error (sct=0, sc=8) 00:27:07.124 Write completed with error (sct=0, sc=8) 00:27:07.124 starting I/O failed: -6 00:27:07.124 Write completed with error (sct=0, sc=8) 00:27:07.124 Write completed with error (sct=0, sc=8) 00:27:07.124 Write completed with error (sct=0, sc=8) 00:27:07.124 Write completed with error (sct=0, sc=8) 00:27:07.124 starting I/O failed: -6 00:27:07.124 Write completed with error (sct=0, sc=8) 00:27:07.124 Write completed with error (sct=0, sc=8) 00:27:07.124 Write completed with error (sct=0, sc=8) 00:27:07.124 Write completed with error (sct=0, sc=8) 00:27:07.124 starting I/O failed: -6 00:27:07.124 Write completed with error (sct=0, sc=8) 00:27:07.124 starting I/O failed: -6 00:27:07.124 Write completed with error (sct=0, sc=8) 00:27:07.124 Write completed with error (sct=0, sc=8) 00:27:07.124 Write completed with error (sct=0, sc=8) 00:27:07.124 starting I/O failed: -6 00:27:07.124 Write completed with error (sct=0, sc=8) 00:27:07.124 starting I/O failed: -6 00:27:07.124 Write completed with error (sct=0, sc=8) 00:27:07.124 Write completed with error (sct=0, sc=8) 00:27:07.124 Write completed with error (sct=0, sc=8) 00:27:07.124 starting I/O failed: -6 00:27:07.124 Write completed with error (sct=0, sc=8) 00:27:07.124 starting I/O failed: -6 00:27:07.124 Write completed with error (sct=0, sc=8) 
[... repeated "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" messages at 00:27:07.124-00:27:07.125 omitted ...]
00:27:07.125 [2024-10-08 18:37:35.120621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:07.125 NVMe io qpair process completion error
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" messages omitted ...]
00:27:07.125 [2024-10-08 18:37:35.121984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" messages omitted ...]
00:27:07.125 [2024-10-08 18:37:35.123072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" messages omitted ...]
00:27:07.126 [2024-10-08 18:37:35.124464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" messages omitted ...]
00:27:07.126 [2024-10-08 18:37:35.126631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:07.126 NVMe io qpair process completion error
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" messages omitted ...]
00:27:07.127 [2024-10-08 18:37:35.128193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" messages omitted ...]
00:27:07.127 [2024-10-08 18:37:35.129265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" messages omitted ...]
00:27:07.127 [2024-10-08 18:37:35.130627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" messages omitted ...]
00:27:07.128 [2024-10-08 18:37:35.134863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:07.128 NVMe io qpair process completion error
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" messages omitted ...]
00:27:07.128 [2024-10-08 18:37:35.137278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" messages omitted ...]
00:27:07.129 [2024-10-08 18:37:35.138618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" messages omitted ...]
Write completed with error (sct=0, sc=8) 00:27:07.129 starting I/O failed: -6 00:27:07.129 Write completed with error (sct=0, sc=8) 00:27:07.129 starting I/O failed: -6 00:27:07.129 Write completed with error (sct=0, sc=8) 00:27:07.129 starting I/O failed: -6 00:27:07.129 Write completed with error (sct=0, sc=8) 00:27:07.129 starting I/O failed: -6 00:27:07.129 Write completed with error (sct=0, sc=8) 00:27:07.129 starting I/O failed: -6 00:27:07.129 Write completed with error (sct=0, sc=8) 00:27:07.129 starting I/O failed: -6 00:27:07.129 Write completed with error (sct=0, sc=8) 00:27:07.129 starting I/O failed: -6 00:27:07.129 Write completed with error (sct=0, sc=8) 00:27:07.129 starting I/O failed: -6 00:27:07.129 Write completed with error (sct=0, sc=8) 00:27:07.129 starting I/O failed: -6 00:27:07.129 Write completed with error (sct=0, sc=8) 00:27:07.129 starting I/O failed: -6 00:27:07.129 Write completed with error (sct=0, sc=8) 00:27:07.129 starting I/O failed: -6 00:27:07.129 Write completed with error (sct=0, sc=8) 00:27:07.129 starting I/O failed: -6 00:27:07.129 Write completed with error (sct=0, sc=8) 00:27:07.129 starting I/O failed: -6 00:27:07.129 Write completed with error (sct=0, sc=8) 00:27:07.129 starting I/O failed: -6 00:27:07.129 Write completed with error (sct=0, sc=8) 00:27:07.129 starting I/O failed: -6 00:27:07.129 Write completed with error (sct=0, sc=8) 00:27:07.129 starting I/O failed: -6 00:27:07.129 Write completed with error (sct=0, sc=8) 00:27:07.129 starting I/O failed: -6 00:27:07.129 Write completed with error (sct=0, sc=8) 00:27:07.129 starting I/O failed: -6 00:27:07.129 Write completed with error (sct=0, sc=8) 00:27:07.129 starting I/O failed: -6 00:27:07.129 Write completed with error (sct=0, sc=8) 00:27:07.129 starting I/O failed: -6 00:27:07.129 Write completed with error (sct=0, sc=8) 00:27:07.129 starting I/O failed: -6 00:27:07.129 Write completed with error (sct=0, sc=8) 00:27:07.129 starting I/O failed: -6 00:27:07.129 Write completed with error (sct=0, sc=8) 00:27:07.129 starting I/O failed: -6 00:27:07.129 Write completed with error (sct=0, sc=8) 00:27:07.129 starting I/O failed: -6 00:27:07.129 Write completed with error (sct=0, sc=8) 00:27:07.129 starting I/O failed: -6 00:27:07.129 Write completed with error (sct=0, sc=8) 00:27:07.129 starting I/O failed: -6 00:27:07.129 [2024-10-08 18:37:35.143785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:07.129 NVMe io qpair process completion error 00:27:07.129 Initializing NVMe Controllers 00:27:07.129 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:27:07.129 Controller IO queue size 128, less than required. 00:27:07.129 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:07.129 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:27:07.129 Controller IO queue size 128, less than required. 00:27:07.129 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:07.129 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:27:07.129 Controller IO queue size 128, less than required. 00:27:07.129 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:27:07.129 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:07.129 Controller IO queue size 128, less than required. 00:27:07.129 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:07.129 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:27:07.129 Controller IO queue size 128, less than required. 00:27:07.129 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:07.129 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:27:07.129 Controller IO queue size 128, less than required. 00:27:07.129 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:07.129 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:27:07.129 Controller IO queue size 128, less than required. 00:27:07.129 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:07.129 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:27:07.130 Controller IO queue size 128, less than required. 00:27:07.130 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:07.130 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:27:07.130 Controller IO queue size 128, less than required. 00:27:07.130 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:07.130 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:27:07.130 Controller IO queue size 128, less than required. 00:27:07.130 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:07.130 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0 00:27:07.130 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0 00:27:07.130 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0 00:27:07.130 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:07.130 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0 00:27:07.130 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0 00:27:07.130 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0 00:27:07.130 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0 00:27:07.130 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0 00:27:07.130 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0 00:27:07.130 Initialization complete. Launching workers. 
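[Editorial aside: the "Controller IO queue size 128, less than required" warnings above are emitted when the queue depth requested from the perf tool exceeds the controller's advertised IO queue size, so the overflow requests sit queued inside the NVMe driver. The exact invocation used by shutdown.sh is not visible in this excerpt; the bash sketch below only illustrates how the suggested remedy, a lower queue depth and/or smaller IO size, maps onto spdk_nvme_perf options. The -q/-o values, run time, and subsystem NQN are assumed for illustration.]

# Minimal sketch (assumed parameters, not the command this test actually ran):
# drive one of the attached TCP subsystems with a queue depth that fits the
# controller's 128-entry IO queue so requests are not queued in the driver.
PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
# -q  queue depth per qpair (kept below the 128-entry IO queue)
# -o  IO size in bytes
# -w  IO pattern (write matches the completions logged above)
# -t  run time in seconds
# -r  transport ID of the target's TCP listener
"$PERF" -q 64 -o 4096 -w write -t 10 \
    -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'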
00:27:07.130 ======================================================== 00:27:07.130 Latency(us) 00:27:07.130 Device Information : IOPS MiB/s Average min max 00:27:07.130 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1714.80 73.68 74652.55 1150.62 137967.98 00:27:07.130 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1731.81 74.41 73941.92 886.00 166894.41 00:27:07.130 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1709.70 73.46 74948.71 1356.09 133586.83 00:27:07.130 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1712.25 73.57 73787.61 1252.94 132750.12 00:27:07.130 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1725.01 74.12 73256.44 849.38 128593.00 00:27:07.130 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1634.67 70.24 77330.31 1054.48 131398.14 00:27:07.130 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1679.73 72.18 75284.42 1175.93 135011.04 00:27:07.130 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1698.65 72.99 74480.33 1078.12 138534.97 00:27:07.130 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1637.22 70.35 77329.86 1237.66 144097.35 00:27:07.130 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1648.27 70.82 76851.96 1225.58 127644.24 00:27:07.130 ======================================================== 00:27:07.130 Total : 16892.11 725.83 75158.17 849.38 166894.41 00:27:07.130 00:27:07.130 [2024-10-08 18:37:35.148011] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10c4de0 is same with the state(6) to be set 00:27:07.130 [2024-10-08 18:37:35.148111] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10cb040 is same with the state(6) to be set 00:27:07.130 [2024-10-08 18:37:35.148169] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10c4ab0 is same with the state(6) to be set 00:27:07.130 [2024-10-08 18:37:35.148224] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10c67f0 is same with the state(6) to be set 00:27:07.130 [2024-10-08 18:37:35.148281] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10cad10 is same with the state(6) to be set 00:27:07.130 [2024-10-08 18:37:35.148337] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10c4780 is same with the state(6) to be set 00:27:07.130 [2024-10-08 18:37:35.148394] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10cb370 is same with the state(6) to be set 00:27:07.130 [2024-10-08 18:37:35.148451] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10c6bb0 is same with the state(6) to be set 00:27:07.130 [2024-10-08 18:37:35.148508] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10cb6a0 is same with the state(6) to be set 00:27:07.130 [2024-10-08 18:37:35.148564] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10c69d0 is same with the state(6) to be set 00:27:07.130 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:27:07.388 18:37:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:27:08.325 18:37:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 1271536 00:27:08.325 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0 00:27:08.325 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1271536 00:27:08.325 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 -- # local arg=wait 00:27:08.325 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:08.325 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait 00:27:08.325 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:08.325 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 1271536 00:27:08.325 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1 00:27:08.325 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:08.325 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:08.325 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:08.325 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:27:08.325 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:27:08.325 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:08.325 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:08.325 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:27:08.325 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@514 -- # nvmfcleanup 00:27:08.325 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:27:08.325 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:08.325 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:27:08.325 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:08.325 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:08.325 rmmod nvme_tcp 00:27:08.325 rmmod nvme_fabrics 00:27:08.325 rmmod nvme_keyring 00:27:08.325 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:08.325 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:27:08.325 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@129 -- # return 0 00:27:08.325 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@515 -- # '[' -n 1271225 ']' 00:27:08.325 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # killprocess 1271225 00:27:08.325 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 1271225 ']' 00:27:08.325 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 1271225 00:27:08.325 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1271225) - No such process 00:27:08.325 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@977 -- # echo 'Process with pid 1271225 is not found' 00:27:08.325 Process with pid 1271225 is not found 00:27:08.325 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:27:08.325 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:27:08.325 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:27:08.325 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:27:08.325 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # iptables-save 00:27:08.325 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:27:08.325 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # iptables-restore 00:27:08.325 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:08.325 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:08.325 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:08.325 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:08.325 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:10.856 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:10.856 00:27:10.856 real 0m11.279s 00:27:10.856 user 0m29.602s 00:27:10.856 sys 0m6.194s 00:27:10.856 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:10.856 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:27:10.856 ************************************ 00:27:10.856 END TEST nvmf_shutdown_tc4 00:27:10.856 ************************************ 00:27:10.856 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:27:10.856 00:27:10.856 real 0m42.862s 00:27:10.856 user 1m57.871s 00:27:10.856 sys 0m14.174s 00:27:10.856 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:10.856 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
common/autotest_common.sh@10 -- # set +x 00:27:10.856 ************************************ 00:27:10.856 END TEST nvmf_shutdown 00:27:10.856 ************************************ 00:27:10.856 18:37:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:27:10.856 00:27:10.856 real 15m54.975s 00:27:10.856 user 37m16.668s 00:27:10.856 sys 3m27.260s 00:27:10.856 18:37:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:10.856 18:37:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:10.856 ************************************ 00:27:10.856 END TEST nvmf_target_extra 00:27:10.856 ************************************ 00:27:10.856 18:37:39 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:27:10.856 18:37:39 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:10.856 18:37:39 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:10.856 18:37:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:10.856 ************************************ 00:27:10.856 START TEST nvmf_host 00:27:10.856 ************************************ 00:27:10.856 18:37:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:27:10.856 * Looking for test storage... 00:27:10.856 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:27:10.856 18:37:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:10.856 18:37:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lcov --version 00:27:10.856 18:37:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:10.856 18:37:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:10.856 18:37:39 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:10.856 18:37:39 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:10.856 18:37:39 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:10.856 18:37:39 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:27:10.856 18:37:39 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:27:10.856 18:37:39 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:27:10.856 18:37:39 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:27:10.856 18:37:39 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:27:10.856 18:37:39 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:27:10.856 18:37:39 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:27:10.856 18:37:39 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:10.856 18:37:39 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:27:10.856 18:37:39 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:27:10.856 18:37:39 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:10.856 18:37:39 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:10.856 18:37:39 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:27:10.856 18:37:39 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:27:10.856 18:37:39 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:10.856 18:37:39 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:27:10.856 18:37:39 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:27:10.856 18:37:39 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:27:10.856 18:37:39 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:27:10.856 18:37:39 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:10.856 18:37:39 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:27:10.856 18:37:39 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:27:10.856 18:37:39 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:10.856 18:37:39 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:10.856 18:37:39 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:27:10.856 18:37:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:10.856 18:37:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:10.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:10.856 --rc genhtml_branch_coverage=1 00:27:10.856 --rc genhtml_function_coverage=1 00:27:10.856 --rc genhtml_legend=1 00:27:10.856 --rc geninfo_all_blocks=1 00:27:10.856 --rc geninfo_unexecuted_blocks=1 00:27:10.856 00:27:10.856 ' 00:27:10.856 18:37:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:10.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:10.856 --rc genhtml_branch_coverage=1 00:27:10.856 --rc genhtml_function_coverage=1 00:27:10.856 --rc genhtml_legend=1 00:27:10.856 --rc geninfo_all_blocks=1 00:27:10.856 --rc geninfo_unexecuted_blocks=1 00:27:10.856 00:27:10.856 ' 00:27:10.856 18:37:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:10.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:10.856 --rc genhtml_branch_coverage=1 00:27:10.856 --rc genhtml_function_coverage=1 00:27:10.856 --rc genhtml_legend=1 00:27:10.856 --rc geninfo_all_blocks=1 00:27:10.856 --rc geninfo_unexecuted_blocks=1 00:27:10.856 00:27:10.856 ' 00:27:10.856 18:37:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:10.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:10.856 --rc genhtml_branch_coverage=1 00:27:10.856 --rc genhtml_function_coverage=1 00:27:10.856 --rc genhtml_legend=1 00:27:10.856 --rc geninfo_all_blocks=1 00:27:10.856 --rc geninfo_unexecuted_blocks=1 00:27:10.856 00:27:10.856 ' 00:27:10.856 18:37:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:10.856 18:37:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:27:10.856 18:37:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:10.856 18:37:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:10.856 18:37:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:10.856 18:37:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:10.856 18:37:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
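Editor's note: the scripts/common.sh trace that opens each stage (it appears again below for the multicontroller stage) is a version gate: the last field of `lcov --version` is compared component-by-component against 2, and the extra branch/function coverage flags are only enabled for the 1.x series. A condensed stand-in for that check, with the full cmp_versions comparison simplified to a prefix match for illustration:

    # enable the extra coverage flags only for lcov 1.x (simplified stand-in for cmp_versions)
    if lcov --version | awk '{print $NF}' | grep -q '^1\.'; then
        export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi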
00:27:10.856 18:37:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:10.856 18:37:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:10.856 18:37:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:10.856 18:37:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:10.856 18:37:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:10.856 18:37:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:10.856 18:37:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:27:10.856 18:37:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:10.856 18:37:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:10.856 18:37:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:10.856 18:37:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:10.856 18:37:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:10.856 18:37:39 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:27:10.856 18:37:39 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:10.856 18:37:39 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:10.856 18:37:39 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:10.856 18:37:39 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.856 18:37:39 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.857 18:37:39 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.857 18:37:39 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:27:10.857 18:37:39 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.857 18:37:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:27:10.857 18:37:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:10.857 18:37:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:10.857 18:37:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:10.857 18:37:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:10.857 18:37:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:10.857 18:37:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:10.857 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:10.857 18:37:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:10.857 18:37:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:10.857 18:37:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:10.857 18:37:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:27:10.857 18:37:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:27:10.857 18:37:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:27:10.857 18:37:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:10.857 18:37:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:10.857 18:37:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:10.857 18:37:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.116 ************************************ 00:27:11.116 START TEST nvmf_multicontroller 00:27:11.116 ************************************ 00:27:11.116 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:11.116 * Looking for test storage... 
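Editor's note: run_test wraps each suite and prints the START/END banners seen throughout this log, so the stage that begins here can be replayed outside CI by calling the same scripts directly. A rough sketch, assuming the same checkout path as the CI workspace and root privileges (the sudo is an assumption, not shown in the trace):

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # whole host-side suite, as nvmf.sh invokes it above
    sudo test/nvmf/nvmf_host.sh --transport=tcp
    # or only the test that starts here
    sudo test/nvmf/host/multicontroller.sh --transport=tcp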
00:27:11.116 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:11.116 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:11.116 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lcov --version 00:27:11.116 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:11.116 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:11.116 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:11.116 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:11.116 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:11.116 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:27:11.116 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:27:11.116 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:27:11.116 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:27:11.116 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:27:11.116 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:27:11.116 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:27:11.116 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:11.116 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:27:11.116 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:27:11.116 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:11.116 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:11.116 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:27:11.116 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:27:11.116 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:11.116 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:27:11.116 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:27:11.116 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:27:11.116 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:27:11.116 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:11.116 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:27:11.116 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:27:11.116 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:11.116 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:11.116 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:27:11.116 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:11.116 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:11.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:11.116 --rc genhtml_branch_coverage=1 00:27:11.116 --rc genhtml_function_coverage=1 00:27:11.116 --rc genhtml_legend=1 00:27:11.116 --rc geninfo_all_blocks=1 00:27:11.116 --rc geninfo_unexecuted_blocks=1 00:27:11.116 00:27:11.116 ' 00:27:11.116 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:11.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:11.116 --rc genhtml_branch_coverage=1 00:27:11.116 --rc genhtml_function_coverage=1 00:27:11.116 --rc genhtml_legend=1 00:27:11.116 --rc geninfo_all_blocks=1 00:27:11.116 --rc geninfo_unexecuted_blocks=1 00:27:11.116 00:27:11.116 ' 00:27:11.116 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:11.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:11.116 --rc genhtml_branch_coverage=1 00:27:11.116 --rc genhtml_function_coverage=1 00:27:11.116 --rc genhtml_legend=1 00:27:11.116 --rc geninfo_all_blocks=1 00:27:11.116 --rc geninfo_unexecuted_blocks=1 00:27:11.116 00:27:11.116 ' 00:27:11.116 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:11.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:11.116 --rc genhtml_branch_coverage=1 00:27:11.116 --rc genhtml_function_coverage=1 00:27:11.116 --rc genhtml_legend=1 00:27:11.116 --rc geninfo_all_blocks=1 00:27:11.116 --rc geninfo_unexecuted_blocks=1 00:27:11.116 00:27:11.116 ' 00:27:11.117 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:11.117 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:27:11.117 18:37:39 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:11.117 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:11.117 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:11.117 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:11.117 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:11.117 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:11.117 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:11.117 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:11.117 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:11.117 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:11.117 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:11.117 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:27:11.117 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:11.117 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:11.117 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:11.117 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:11.117 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:11.117 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:27:11.117 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:11.117 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:11.117 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:11.117 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.117 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.117 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.117 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:27:11.117 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.117 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:27:11.117 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:11.117 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:11.117 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:11.117 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:11.117 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:11.117 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:11.117 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:11.117 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:11.117 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:11.117 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:11.117 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:11.117 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:11.117 18:37:39 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:27:11.117 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:27:11.117 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:11.117 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:27:11.117 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:27:11.117 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:27:11.117 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:11.117 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # prepare_net_devs 00:27:11.117 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@436 -- # local -g is_hw=no 00:27:11.117 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # remove_spdk_ns 00:27:11.117 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:11.117 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:11.117 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:11.117 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:27:11.117 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:27:11.117 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:27:11.117 18:37:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:14.401 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:14.401 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:27:14.401 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:14.401 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:14.401 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:14.401 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:14.401 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:14.401 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:27:14.401 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:14.401 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:27:14.401 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:27:14.401 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:27:14.401 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:27:14.401 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:27:14.401 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:27:14.401 
18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:14.401 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:14.401 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:14.401 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:14.401 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:14.401 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:14.401 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:14.401 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:14.401 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:14.401 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:14.401 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:14.401 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:14.401 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:14.401 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:14.401 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:14.401 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:14.401 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:14.401 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:14.401 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:14.401 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:27:14.401 Found 0000:84:00.0 (0x8086 - 0x159b) 00:27:14.401 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:14.401 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:14.401 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:14.401 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:14.401 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:14.401 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:14.401 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:27:14.401 Found 0000:84:00.1 (0x8086 - 0x159b) 00:27:14.401 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:14.401 18:37:42 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:14.401 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:14.401 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:14.401 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:14.401 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:14.401 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:14.401 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:14.401 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:14.402 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:14.402 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:14.402 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:14.402 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:14.402 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:14.402 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:14.402 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:27:14.402 Found net devices under 0000:84:00.0: cvl_0_0 00:27:14.402 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:14.402 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:14.402 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:14.402 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:14.402 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:14.402 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:14.402 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:14.402 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:14.402 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:27:14.402 Found net devices under 0000:84:00.1: cvl_0_1 00:27:14.402 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:14.402 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:27:14.402 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # is_hw=yes 00:27:14.402 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:27:14.402 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:27:14.402 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # nvmf_tcp_init 
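Editor's note: gather_supported_nvmf_pci_devs walks the cached PCI device list and, for the two E810 ports matched here, resolves the kernel interface names by globbing the device's net/ directory in sysfs. The same lookup by hand, using the addresses reported above:

    # each matched NIC exposes its netdev name under its PCI device node
    ls /sys/bus/pci/devices/0000:84:00.0/net/    # -> cvl_0_0
    ls /sys/bus/pci/devices/0000:84:00.1/net/    # -> cvl_0_1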
00:27:14.402 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:14.402 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:14.402 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:14.402 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:14.402 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:14.402 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:14.402 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:14.402 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:14.402 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:14.402 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:14.402 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:14.402 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:14.402 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:14.402 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:14.402 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:14.402 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:14.402 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:14.402 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:14.402 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:14.402 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:14.402 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:14.402 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:14.402 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:14.402 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:14.402 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:27:14.402 00:27:14.402 --- 10.0.0.2 ping statistics --- 00:27:14.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:14.402 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:27:14.402 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:14.402 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:14.402 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:27:14.402 00:27:14.402 --- 10.0.0.1 ping statistics --- 00:27:14.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:14.402 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:27:14.402 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:14.402 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # return 0 00:27:14.402 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:27:14.402 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:14.402 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:27:14.402 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:27:14.402 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:14.402 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:27:14.402 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:27:14.402 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:27:14.402 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:27:14.402 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:14.402 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:14.402 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # nvmfpid=1274474 00:27:14.402 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:14.402 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # waitforlisten 1274474 00:27:14.402 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 1274474 ']' 00:27:14.402 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:14.402 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:14.402 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:14.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:14.402 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:14.402 18:37:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:14.402 [2024-10-08 18:37:42.790134] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
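Editor's note: nvmf_tcp_init moves one port into a private network namespace so the target (10.0.0.2) and the initiator (10.0.0.1) talk over real hardware rather than the host loopback path, and nvmf_tgt is then launched inside that namespace. The wiring above, condensed into plain commands with the interface names, addresses and flags copied from the trace (the trailing '&' is an assumption for a by-hand run; the test uses its own process management):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # host -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> host
    modprobe nvme-tcp
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &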
00:27:14.402 [2024-10-08 18:37:42.790222] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:14.402 [2024-10-08 18:37:42.899149] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:14.660 [2024-10-08 18:37:43.118948] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:14.660 [2024-10-08 18:37:43.119058] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:14.660 [2024-10-08 18:37:43.119094] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:14.660 [2024-10-08 18:37:43.119125] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:14.660 [2024-10-08 18:37:43.119151] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:14.660 [2024-10-08 18:37:43.121291] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:27:14.660 [2024-10-08 18:37:43.121397] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:27:14.660 [2024-10-08 18:37:43.121401] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:27:15.595 18:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:15.596 18:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:27:15.596 18:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:27:15.596 18:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:15.596 18:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:15.596 18:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:15.596 18:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:15.596 18:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.596 18:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:15.596 [2024-10-08 18:37:43.900853] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:15.596 18:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.596 18:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:15.596 18:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.596 18:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:15.596 Malloc0 00:27:15.596 18:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.596 18:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:15.596 18:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.596 18:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:27:15.596 18:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.596 18:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:15.596 18:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.596 18:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:15.596 18:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.596 18:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:15.596 18:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.596 18:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:15.596 [2024-10-08 18:37:43.964133] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:15.596 18:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.596 18:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:15.596 18:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.596 18:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:15.596 [2024-10-08 18:37:43.972004] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:15.596 18:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.596 18:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:15.596 18:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.596 18:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:15.596 Malloc1 00:27:15.596 18:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.596 18:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:27:15.596 18:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.596 18:37:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:15.596 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.596 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:27:15.596 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.596 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:15.596 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.596 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:15.596 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.596 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:15.596 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.596 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:27:15.596 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.596 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:15.596 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.596 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1274625 00:27:15.596 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:27:15.596 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:15.596 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1274625 /var/tmp/bdevperf.sock 00:27:15.596 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 1274625 ']' 00:27:15.596 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:15.596 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:15.596 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:15.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
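The target-side setup recorded above reduces to a short JSON-RPC sequence. The sketch below is a condensed, hand-written rendering of it (not the test script itself): the rpc.py path is assumed relative to an SPDK checkout, the RPC variable is illustrative, and the 10.0.0.2 listener address and serial numbers are simply the values used in this run.

RPC=./scripts/rpc.py                                   # SPDK JSON-RPC client (assumed checkout-relative path)

$RPC nvmf_create_transport -t tcp -o -u 8192           # TCP transport, 8192-byte in-capsule data
$RPC bdev_malloc_create 64 512 -b Malloc0              # 64 MiB RAM bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
# cnode2/Malloc1 are created the same way, also listening on 4420 and 4421,
# before bdevperf is started idle (-z) against /var/tmp/bdevperf.sock.
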
00:27:15.596 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:15.596 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:16.162 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:16.162 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:27:16.162 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:27:16.162 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.162 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:16.162 NVMe0n1 00:27:16.162 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.162 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:16.162 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:27:16.162 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.162 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:16.162 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.162 1 00:27:16.162 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:27:16.162 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:27:16.162 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:27:16.162 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:16.162 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:16.162 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:16.162 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:16.162 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:27:16.162 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.162 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:16.162 request: 00:27:16.162 { 00:27:16.162 "name": "NVMe0", 00:27:16.162 "trtype": "tcp", 00:27:16.162 "traddr": "10.0.0.2", 00:27:16.162 "adrfam": "ipv4", 00:27:16.162 "trsvcid": "4420", 00:27:16.162 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:27:16.162 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:27:16.162 "hostaddr": "10.0.0.1", 00:27:16.162 "prchk_reftag": false, 00:27:16.162 "prchk_guard": false, 00:27:16.162 "hdgst": false, 00:27:16.162 "ddgst": false, 00:27:16.162 "allow_unrecognized_csi": false, 00:27:16.162 "method": "bdev_nvme_attach_controller", 00:27:16.162 "req_id": 1 00:27:16.162 } 00:27:16.162 Got JSON-RPC error response 00:27:16.162 response: 00:27:16.162 { 00:27:16.162 "code": -114, 00:27:16.162 "message": "A controller named NVMe0 already exists with the specified network path" 00:27:16.162 } 00:27:16.162 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:16.162 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:27:16.162 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:16.162 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:16.162 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:16.162 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:27:16.162 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:27:16.162 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:27:16.162 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:16.162 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:16.162 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:16.162 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:16.162 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:27:16.162 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.162 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:16.162 request: 00:27:16.162 { 00:27:16.162 "name": "NVMe0", 00:27:16.162 "trtype": "tcp", 00:27:16.162 "traddr": "10.0.0.2", 00:27:16.162 "adrfam": "ipv4", 00:27:16.162 "trsvcid": "4420", 00:27:16.162 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:16.162 "hostaddr": "10.0.0.1", 00:27:16.162 "prchk_reftag": false, 00:27:16.162 "prchk_guard": false, 00:27:16.162 "hdgst": false, 00:27:16.162 "ddgst": false, 00:27:16.162 "allow_unrecognized_csi": false, 00:27:16.162 "method": "bdev_nvme_attach_controller", 00:27:16.162 "req_id": 1 00:27:16.162 } 00:27:16.162 Got JSON-RPC error response 00:27:16.162 response: 00:27:16.162 { 00:27:16.162 "code": -114, 00:27:16.162 "message": "A controller named NVMe0 already exists with the specified network path" 00:27:16.162 } 00:27:16.162 18:37:44 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:16.162 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:27:16.162 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:16.162 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:16.162 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:16.162 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:27:16.162 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:27:16.162 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:27:16.162 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:16.162 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:16.162 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:16.162 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:16.163 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:27:16.163 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.163 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:16.163 request: 00:27:16.163 { 00:27:16.163 "name": "NVMe0", 00:27:16.163 "trtype": "tcp", 00:27:16.163 "traddr": "10.0.0.2", 00:27:16.163 "adrfam": "ipv4", 00:27:16.163 "trsvcid": "4420", 00:27:16.163 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:16.163 "hostaddr": "10.0.0.1", 00:27:16.163 "prchk_reftag": false, 00:27:16.163 "prchk_guard": false, 00:27:16.163 "hdgst": false, 00:27:16.163 "ddgst": false, 00:27:16.163 "multipath": "disable", 00:27:16.163 "allow_unrecognized_csi": false, 00:27:16.163 "method": "bdev_nvme_attach_controller", 00:27:16.163 "req_id": 1 00:27:16.163 } 00:27:16.163 Got JSON-RPC error response 00:27:16.163 response: 00:27:16.163 { 00:27:16.163 "code": -114, 00:27:16.163 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:27:16.163 } 00:27:16.163 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:16.163 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:27:16.163 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:16.163 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:16.163 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:16.163 18:37:44 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:27:16.163 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:27:16.163 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:27:16.163 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:16.163 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:16.163 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:16.163 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:16.163 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:27:16.163 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.163 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:16.163 request: 00:27:16.163 { 00:27:16.163 "name": "NVMe0", 00:27:16.163 "trtype": "tcp", 00:27:16.163 "traddr": "10.0.0.2", 00:27:16.163 "adrfam": "ipv4", 00:27:16.163 "trsvcid": "4420", 00:27:16.163 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:16.163 "hostaddr": "10.0.0.1", 00:27:16.163 "prchk_reftag": false, 00:27:16.163 "prchk_guard": false, 00:27:16.163 "hdgst": false, 00:27:16.163 "ddgst": false, 00:27:16.163 "multipath": "failover", 00:27:16.163 "allow_unrecognized_csi": false, 00:27:16.163 "method": "bdev_nvme_attach_controller", 00:27:16.163 "req_id": 1 00:27:16.163 } 00:27:16.163 Got JSON-RPC error response 00:27:16.163 response: 00:27:16.163 { 00:27:16.163 "code": -114, 00:27:16.163 "message": "A controller named NVMe0 already exists with the specified network path" 00:27:16.163 } 00:27:16.163 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:16.163 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:27:16.163 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:16.163 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:16.163 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:16.163 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:16.163 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.163 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:16.421 NVMe0n1 00:27:16.421 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
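The four JSON-RPC error -114 responses above show what bdev_nvme_attach_controller refuses when the controller name NVMe0 is reused: a different hostnqn, a different target subsystem, or a conflicting -x multipath mode on the path that already exists. Re-attaching the same subsystem through the second listener port with consistent parameters is accepted as an additional path, which is what the final attach does. A hedged re-statement of that sequence against the bdevperf RPC socket (the rpc.py path and the SOCK/RPC variables are illustrative):

SOCK=/var/tmp/bdevperf.sock
RPC="./scripts/rpc.py -s $SOCK"

# First path: cnode1 via port 4420; creates bdev NVMe0n1.
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1

# Reusing the name NVMe0 with another hostnqn (-q), another subsystem (cnode2),
# or -x disable/failover on the already-attached 4420 path fails with error -114, as logged above.

# Same subsystem through the 4421 listener with matching parameters: accepted as a second path.
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

$RPC bdev_nvme_get_controllers    # inspect the resulting controller state
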
00:27:16.421 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:16.421 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.421 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:16.421 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.421 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:27:16.421 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.421 18:37:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:16.679 00:27:16.679 18:37:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.679 18:37:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:16.679 18:37:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:27:16.679 18:37:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.679 18:37:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:16.679 18:37:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.679 18:37:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:27:16.679 18:37:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:18.053 { 00:27:18.053 "results": [ 00:27:18.053 { 00:27:18.053 "job": "NVMe0n1", 00:27:18.053 "core_mask": "0x1", 00:27:18.053 "workload": "write", 00:27:18.053 "status": "finished", 00:27:18.053 "queue_depth": 128, 00:27:18.053 "io_size": 4096, 00:27:18.053 "runtime": 1.004102, 00:27:18.053 "iops": 18471.231010395357, 00:27:18.053 "mibps": 72.15324613435686, 00:27:18.053 "io_failed": 0, 00:27:18.053 "io_timeout": 0, 00:27:18.053 "avg_latency_us": 6919.800487969503, 00:27:18.053 "min_latency_us": 4247.7037037037035, 00:27:18.054 "max_latency_us": 15437.368888888888 00:27:18.054 } 00:27:18.054 ], 00:27:18.054 "core_count": 1 00:27:18.054 } 00:27:18.054 18:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:27:18.054 18:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.054 18:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:18.054 18:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.054 18:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:27:18.054 18:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 1274625 00:27:18.054 18:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@950 -- # '[' -z 1274625 ']' 00:27:18.054 18:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 1274625 00:27:18.054 18:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:27:18.054 18:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:18.054 18:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1274625 00:27:18.054 18:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:18.054 18:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:18.054 18:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1274625' 00:27:18.054 killing process with pid 1274625 00:27:18.054 18:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 1274625 00:27:18.054 18:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 1274625 00:27:18.054 18:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:18.054 18:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.054 18:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:18.054 18:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.054 18:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:18.054 18:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.054 18:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:18.054 18:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.054 18:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:27:18.054 18:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:18.054 18:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:27:18.054 18:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:27:18.054 18:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:27:18.054 18:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:27:18.054 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:27:18.054 [2024-10-08 18:37:44.081379] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
00:27:18.054 [2024-10-08 18:37:44.081476] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1274625 ] 00:27:18.054 [2024-10-08 18:37:44.146824] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:18.054 [2024-10-08 18:37:44.257373] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:27:18.054 [2024-10-08 18:37:45.015696] bdev.c:4701:bdev_name_add: *ERROR*: Bdev name b34edd88-476b-4a5c-a087-4f4622a5452b already exists 00:27:18.054 [2024-10-08 18:37:45.015737] bdev.c:7846:bdev_register: *ERROR*: Unable to add uuid:b34edd88-476b-4a5c-a087-4f4622a5452b alias for bdev NVMe1n1 00:27:18.054 [2024-10-08 18:37:45.015753] bdev_nvme.c:4483:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:27:18.054 Running I/O for 1 seconds... 00:27:18.054 18419.00 IOPS, 71.95 MiB/s 00:27:18.054 Latency(us) 00:27:18.054 [2024-10-08T16:37:46.591Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:18.054 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:27:18.054 NVMe0n1 : 1.00 18471.23 72.15 0.00 0.00 6919.80 4247.70 15437.37 00:27:18.054 [2024-10-08T16:37:46.591Z] =================================================================================================================== 00:27:18.054 [2024-10-08T16:37:46.591Z] Total : 18471.23 72.15 0.00 0.00 6919.80 4247.70 15437.37 00:27:18.054 Received shutdown signal, test time was about 1.000000 seconds 00:27:18.054 00:27:18.054 Latency(us) 00:27:18.054 [2024-10-08T16:37:46.591Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:18.054 [2024-10-08T16:37:46.591Z] =================================================================================================================== 00:27:18.054 [2024-10-08T16:37:46.591Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:18.054 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:27:18.054 18:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:18.054 18:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:27:18.054 18:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:27:18.054 18:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@514 -- # nvmfcleanup 00:27:18.054 18:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:27:18.054 18:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:18.054 18:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:27:18.054 18:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:18.054 18:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:18.054 rmmod nvme_tcp 00:27:18.313 rmmod nvme_fabrics 00:27:18.313 rmmod nvme_keyring 00:27:18.313 18:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:18.313 18:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:27:18.313 18:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:27:18.313 
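The try.txt excerpt above is the bdevperf side of the same run. Stripped of the harness plumbing, the pattern is: start bdevperf idle, attach the NVMe-oF controllers over its private RPC socket, then trigger the pre-armed workload with bdevperf.py. A minimal sketch, with paths assumed relative to an SPDK checkout and the workload flags copied from this run:

SOCK=/var/tmp/bdevperf.sock

# -z: start idle and wait for RPCs; the job (128-deep 4 KiB writes for 1 s) is armed but not started.
./build/examples/bdevperf -z -r $SOCK -q 128 -o 4096 -w write -t 1 -f &
perf_pid=$!

# ... attach NVMe0/NVMe1 via ./scripts/rpc.py -s $SOCK bdev_nvme_attach_controller, as shown earlier ...

# Run the armed job set and print the per-bdev results (the JSON seen above).
./examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests

kill $perf_pid
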
18:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@515 -- # '[' -n 1274474 ']' 00:27:18.313 18:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # killprocess 1274474 00:27:18.313 18:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 1274474 ']' 00:27:18.313 18:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 1274474 00:27:18.313 18:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:27:18.313 18:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:18.313 18:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1274474 00:27:18.313 18:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:18.313 18:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:18.313 18:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1274474' 00:27:18.313 killing process with pid 1274474 00:27:18.313 18:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 1274474 00:27:18.313 18:37:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 1274474 00:27:18.879 18:37:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:27:18.879 18:37:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:27:18.879 18:37:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:27:18.879 18:37:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:27:18.879 18:37:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:27:18.879 18:37:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # iptables-save 00:27:18.879 18:37:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # iptables-restore 00:27:18.880 18:37:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:18.880 18:37:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:18.880 18:37:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:18.880 18:37:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:18.880 18:37:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:20.781 18:37:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:20.781 00:27:20.781 real 0m9.800s 00:27:20.781 user 0m15.798s 00:27:20.781 sys 0m3.343s 00:27:20.781 18:37:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:20.781 18:37:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:20.781 ************************************ 00:27:20.781 END TEST nvmf_multicontroller 00:27:20.781 ************************************ 00:27:20.781 18:37:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:27:20.781 18:37:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:20.781 18:37:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:20.781 18:37:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.781 ************************************ 00:27:20.781 START TEST nvmf_aer 00:27:20.781 ************************************ 00:27:20.781 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:27:21.040 * Looking for test storage... 00:27:21.040 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:21.040 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:21.040 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lcov --version 00:27:21.040 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:21.040 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:21.040 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:21.040 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:21.040 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:21.040 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:27:21.040 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:27:21.040 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:27:21.040 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:27:21.040 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:27:21.040 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:27:21.040 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:27:21.040 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:21.040 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:27:21.040 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:27:21.040 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:21.040 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:21.040 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:27:21.040 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:27:21.040 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:21.040 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:27:21.040 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:27:21.040 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:27:21.040 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:27:21.040 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:21.040 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:27:21.040 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:27:21.040 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:21.040 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:21.040 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:27:21.040 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:21.040 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:21.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:21.040 --rc genhtml_branch_coverage=1 00:27:21.040 --rc genhtml_function_coverage=1 00:27:21.040 --rc genhtml_legend=1 00:27:21.040 --rc geninfo_all_blocks=1 00:27:21.040 --rc geninfo_unexecuted_blocks=1 00:27:21.040 00:27:21.040 ' 00:27:21.040 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:21.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:21.040 --rc genhtml_branch_coverage=1 00:27:21.040 --rc genhtml_function_coverage=1 00:27:21.040 --rc genhtml_legend=1 00:27:21.040 --rc geninfo_all_blocks=1 00:27:21.040 --rc geninfo_unexecuted_blocks=1 00:27:21.040 00:27:21.040 ' 00:27:21.040 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:21.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:21.040 --rc genhtml_branch_coverage=1 00:27:21.040 --rc genhtml_function_coverage=1 00:27:21.040 --rc genhtml_legend=1 00:27:21.040 --rc geninfo_all_blocks=1 00:27:21.040 --rc geninfo_unexecuted_blocks=1 00:27:21.040 00:27:21.040 ' 00:27:21.040 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:21.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:21.040 --rc genhtml_branch_coverage=1 00:27:21.040 --rc genhtml_function_coverage=1 00:27:21.040 --rc genhtml_legend=1 00:27:21.040 --rc geninfo_all_blocks=1 00:27:21.040 --rc geninfo_unexecuted_blocks=1 00:27:21.040 00:27:21.040 ' 00:27:21.040 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:21.040 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:27:21.040 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:21.040 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:21.040 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:27:21.040 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:21.040 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:21.040 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:21.040 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:21.040 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:21.040 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:21.040 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:21.040 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:21.040 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:27:21.040 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:21.040 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:21.040 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:21.040 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:21.040 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:21.040 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:27:21.040 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:21.040 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:21.040 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:21.041 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.041 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.041 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.041 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:27:21.041 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.041 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:27:21.041 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:21.041 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:21.041 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:21.041 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:21.041 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:21.041 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:21.041 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:21.041 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:21.041 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:21.041 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:21.041 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:27:21.041 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:27:21.041 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:21.041 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # prepare_net_devs 00:27:21.041 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@436 -- # local -g is_hw=no 00:27:21.041 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # remove_spdk_ns 00:27:21.041 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:21.041 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:21.041 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:21.041 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:27:21.041 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:27:21.041 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:27:21.041 18:37:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:24.333 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:24.333 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:27:24.333 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:24.333 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:24.333 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:24.333 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:24.333 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:24.333 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:27:24.333 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:24.333 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:27:24.333 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:27:24.333 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:27:24.333 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:27:24.333 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:27:24.333 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:27:24.333 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:24.333 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:24.333 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:24.333 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:24.333 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:24.333 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:24.333 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:24.333 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:24.333 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:24.333 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:24.333 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:24.333 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:24.333 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:24.333 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:24.333 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:24.333 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:24.333 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:27:24.333 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:24.333 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:24.333 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:27:24.333 Found 0000:84:00.0 (0x8086 - 0x159b) 00:27:24.333 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:24.333 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:24.333 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:24.333 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:24.333 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:24.333 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:24.333 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:27:24.333 Found 0000:84:00.1 (0x8086 - 0x159b) 00:27:24.333 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:24.333 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:24.333 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:24.333 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:24.333 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:24.333 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:24.333 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:24.333 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:24.334 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:24.334 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:24.334 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:24.334 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:24.334 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:24.334 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:24.334 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:24.334 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:27:24.334 Found net devices under 0000:84:00.0: cvl_0_0 00:27:24.334 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:24.334 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:24.334 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:24.334 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:24.334 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:24.334 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:24.334 18:37:52 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:24.334 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:24.334 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:27:24.334 Found net devices under 0000:84:00.1: cvl_0_1 00:27:24.334 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:24.334 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:27:24.334 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # is_hw=yes 00:27:24.334 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:27:24.334 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:27:24.334 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:27:24.334 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:24.334 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:24.334 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:24.334 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:24.334 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:24.334 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:24.334 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:24.334 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:24.334 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:24.334 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:24.334 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:24.334 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:24.334 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:24.334 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:24.334 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:24.334 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:24.334 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:24.334 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:24.334 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:24.334 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:24.334 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:24.334 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:24.334 
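The nvmf_tcp_init steps above build the usual back-to-back topology for the phy runs: one e810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2 for the target, the other (cvl_0_1) stays in the root namespace as the 10.0.0.1 initiator, and an iptables rule admits the NVMe/TCP port. Condensed into plain commands (interface names are specific to this machine's NICs):

NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add $NS
ip link set cvl_0_0 netns $NS                        # target port lives inside the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec $NS ip link set cvl_0_0 up
ip netns exec $NS ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
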
18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:24.334 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:24.334 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:27:24.334 00:27:24.334 --- 10.0.0.2 ping statistics --- 00:27:24.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:24.334 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:27:24.334 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:24.334 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:24.334 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:27:24.334 00:27:24.334 --- 10.0.0.1 ping statistics --- 00:27:24.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:24.334 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:27:24.334 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:24.334 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # return 0 00:27:24.334 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:27:24.334 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:24.334 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:27:24.334 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:27:24.334 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:24.334 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:27:24.334 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:27:24.334 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:27:24.334 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:27:24.334 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:24.334 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:24.334 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # nvmfpid=1276991 00:27:24.334 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:24.334 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # waitforlisten 1276991 00:27:24.334 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 1276991 ']' 00:27:24.334 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:24.334 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:24.334 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:24.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:24.334 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:24.334 18:37:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:24.334 [2024-10-08 18:37:52.669820] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
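With both directions of the ping verified, the harness loads the nvme-tcp host driver and starts the target inside the namespace, then waits for its RPC socket before the aer test issues its own RPCs (transport, Malloc0, cnode1 with -m 2). A rough equivalent is sketched below; the polling loop merely stands in for the harness's waitforlisten helper, and rpc_get_methods is just used as a cheap probe:

NS=cvl_0_0_ns_spdk
modprobe nvme-tcp

# -m 0xF: four cores; -e 0xFFFF: enable all tracepoint groups; -i 0: shared-memory id.
ip netns exec $NS ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Stand-in for waitforlisten: poll the default RPC socket until the target answers.
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done
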
00:27:24.334 [2024-10-08 18:37:52.669916] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:24.334 [2024-10-08 18:37:52.834984] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:24.594 [2024-10-08 18:37:53.125536] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:24.594 [2024-10-08 18:37:53.125709] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:24.594 [2024-10-08 18:37:53.125775] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:24.594 [2024-10-08 18:37:53.125825] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:24.594 [2024-10-08 18:37:53.125866] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:24.594 [2024-10-08 18:37:53.130786] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:27:24.594 [2024-10-08 18:37:53.130895] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:27:24.594 [2024-10-08 18:37:53.131005] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:27:24.594 [2024-10-08 18:37:53.131015] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:27:25.160 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:25.160 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:27:25.160 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:27:25.160 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:25.160 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:25.160 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:25.160 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:25.160 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.160 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:25.160 [2024-10-08 18:37:53.438525] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:25.160 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.160 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:27:25.160 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.160 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:25.160 Malloc0 00:27:25.160 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.160 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:27:25.160 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.160 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:25.161 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:27:25.161 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:25.161 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.161 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:25.161 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.161 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:25.161 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.161 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:25.161 [2024-10-08 18:37:53.492845] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:25.161 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.161 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:27:25.161 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.161 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:25.161 [ 00:27:25.161 { 00:27:25.161 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:25.161 "subtype": "Discovery", 00:27:25.161 "listen_addresses": [], 00:27:25.161 "allow_any_host": true, 00:27:25.161 "hosts": [] 00:27:25.161 }, 00:27:25.161 { 00:27:25.161 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:25.161 "subtype": "NVMe", 00:27:25.161 "listen_addresses": [ 00:27:25.161 { 00:27:25.161 "trtype": "TCP", 00:27:25.161 "adrfam": "IPv4", 00:27:25.161 "traddr": "10.0.0.2", 00:27:25.161 "trsvcid": "4420" 00:27:25.161 } 00:27:25.161 ], 00:27:25.161 "allow_any_host": true, 00:27:25.161 "hosts": [], 00:27:25.161 "serial_number": "SPDK00000000000001", 00:27:25.161 "model_number": "SPDK bdev Controller", 00:27:25.161 "max_namespaces": 2, 00:27:25.161 "min_cntlid": 1, 00:27:25.161 "max_cntlid": 65519, 00:27:25.161 "namespaces": [ 00:27:25.161 { 00:27:25.161 "nsid": 1, 00:27:25.161 "bdev_name": "Malloc0", 00:27:25.161 "name": "Malloc0", 00:27:25.161 "nguid": "CC0F5478E9434149B5BCB98365BD3B13", 00:27:25.161 "uuid": "cc0f5478-e943-4149-b5bc-b98365bd3b13" 00:27:25.161 } 00:27:25.161 ] 00:27:25.161 } 00:27:25.161 ] 00:27:25.161 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.161 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:27:25.161 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:27:25.161 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1277142 00:27:25.161 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:27:25.161 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:27:25.161 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:27:25.161 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:27:25.161 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:27:25.161 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:27:25.161 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:27:25.161 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:25.161 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:27:25.161 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:27:25.161 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:27:25.419 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:25.419 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:25.419 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:27:25.419 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:27:25.419 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.419 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:25.419 Malloc1 00:27:25.419 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.419 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:27:25.419 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.419 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:25.419 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.419 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:27:25.419 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.419 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:25.419 [ 00:27:25.419 { 00:27:25.419 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:25.419 "subtype": "Discovery", 00:27:25.419 "listen_addresses": [], 00:27:25.419 "allow_any_host": true, 00:27:25.419 "hosts": [] 00:27:25.419 }, 00:27:25.419 { 00:27:25.419 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:25.419 "subtype": "NVMe", 00:27:25.419 "listen_addresses": [ 00:27:25.419 { 00:27:25.419 "trtype": "TCP", 00:27:25.419 "adrfam": "IPv4", 00:27:25.419 "traddr": "10.0.0.2", 00:27:25.419 "trsvcid": "4420" 00:27:25.419 } 00:27:25.419 ], 00:27:25.419 "allow_any_host": true, 00:27:25.419 "hosts": [], 00:27:25.419 "serial_number": "SPDK00000000000001", 00:27:25.419 "model_number": "SPDK bdev Controller", 00:27:25.419 "max_namespaces": 2, 00:27:25.419 "min_cntlid": 1, 00:27:25.419 "max_cntlid": 65519, 00:27:25.419 "namespaces": [ 00:27:25.419 { 00:27:25.419 "nsid": 1, 00:27:25.419 "bdev_name": "Malloc0", 00:27:25.419 "name": "Malloc0", 00:27:25.419 "nguid": "CC0F5478E9434149B5BCB98365BD3B13", 
00:27:25.419 "uuid": "cc0f5478-e943-4149-b5bc-b98365bd3b13" 00:27:25.419 }, 00:27:25.419 { 00:27:25.419 "nsid": 2, 00:27:25.419 "bdev_name": "Malloc1", 00:27:25.419 "name": "Malloc1", 00:27:25.419 "nguid": "CDF8E6C2CABC4DA5AF264849D053F522", 00:27:25.419 "uuid": "cdf8e6c2-cabc-4da5-af26-4849d053f522" 00:27:25.419 } 00:27:25.419 ] 00:27:25.419 } 00:27:25.419 ] 00:27:25.419 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.419 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1277142 00:27:25.419 Asynchronous Event Request test 00:27:25.419 Attaching to 10.0.0.2 00:27:25.419 Attached to 10.0.0.2 00:27:25.419 Registering asynchronous event callbacks... 00:27:25.419 Starting namespace attribute notice tests for all controllers... 00:27:25.419 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:27:25.419 aer_cb - Changed Namespace 00:27:25.419 Cleaning up... 00:27:25.419 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:27:25.419 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.419 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:25.419 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.419 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:27:25.419 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.419 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:25.419 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.420 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:25.420 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.420 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:25.420 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.420 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:27:25.420 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:27:25.420 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@514 -- # nvmfcleanup 00:27:25.420 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:27:25.420 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:25.420 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:27:25.420 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:25.420 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:25.420 rmmod nvme_tcp 00:27:25.420 rmmod nvme_fabrics 00:27:25.420 rmmod nvme_keyring 00:27:25.420 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:25.420 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:27:25.420 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:27:25.420 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@515 -- # '[' -n 1276991 ']' 00:27:25.420 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # killprocess 1276991 00:27:25.420 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@950 -- # '[' -z 1276991 ']' 00:27:25.420 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 1276991 00:27:25.420 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:27:25.420 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:25.420 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1276991 00:27:25.677 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:25.677 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:25.677 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1276991' 00:27:25.677 killing process with pid 1276991 00:27:25.677 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 1276991 00:27:25.677 18:37:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 1276991 00:27:25.938 18:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:27:25.938 18:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:27:25.938 18:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:27:25.938 18:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:27:25.938 18:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # iptables-save 00:27:25.938 18:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:27:25.938 18:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # iptables-restore 00:27:25.938 18:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:25.938 18:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:25.938 18:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:25.938 18:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:25.938 18:37:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:28.478 18:37:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:28.478 00:27:28.478 real 0m7.121s 00:27:28.478 user 0m6.066s 00:27:28.478 sys 0m2.935s 00:27:28.478 18:37:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:28.478 18:37:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:28.478 ************************************ 00:27:28.478 END TEST nvmf_aer 00:27:28.478 ************************************ 00:27:28.478 18:37:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:27:28.478 18:37:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:28.478 18:37:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:28.478 18:37:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.478 ************************************ 00:27:28.478 START TEST nvmf_async_init 00:27:28.478 ************************************ 00:27:28.478 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:27:28.478 * Looking for test storage... 00:27:28.478 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lcov --version 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:28.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:28.479 --rc genhtml_branch_coverage=1 00:27:28.479 --rc genhtml_function_coverage=1 00:27:28.479 --rc genhtml_legend=1 00:27:28.479 --rc geninfo_all_blocks=1 00:27:28.479 --rc geninfo_unexecuted_blocks=1 00:27:28.479 00:27:28.479 ' 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:28.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:28.479 --rc genhtml_branch_coverage=1 00:27:28.479 --rc genhtml_function_coverage=1 00:27:28.479 --rc genhtml_legend=1 00:27:28.479 --rc geninfo_all_blocks=1 00:27:28.479 --rc geninfo_unexecuted_blocks=1 00:27:28.479 00:27:28.479 ' 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:28.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:28.479 --rc genhtml_branch_coverage=1 00:27:28.479 --rc genhtml_function_coverage=1 00:27:28.479 --rc genhtml_legend=1 00:27:28.479 --rc geninfo_all_blocks=1 00:27:28.479 --rc geninfo_unexecuted_blocks=1 00:27:28.479 00:27:28.479 ' 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:28.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:28.479 --rc genhtml_branch_coverage=1 00:27:28.479 --rc genhtml_function_coverage=1 00:27:28.479 --rc genhtml_legend=1 00:27:28.479 --rc geninfo_all_blocks=1 00:27:28.479 --rc geninfo_unexecuted_blocks=1 00:27:28.479 00:27:28.479 ' 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:28.479 18:37:56 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:28.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:27:28.479 18:37:56 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=7ac8934c820c4486a606e7778fe1dde6 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:28.479 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # prepare_net_devs 00:27:28.480 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@436 -- # local -g is_hw=no 00:27:28.480 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # remove_spdk_ns 00:27:28.480 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:28.480 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:28.480 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:28.480 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:27:28.480 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:27:28.480 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:27:28.480 18:37:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:31.014 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:31.014 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:27:31.014 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:31.014 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:31.014 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:31.014 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:31.014 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:31.014 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:27:31.014 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:31.014 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:27:31.014 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:27:31.014 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:27:31.014 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:27:31.014 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:27:31.014 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:27:31.014 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:31.014 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:31.014 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:31.014 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:31.014 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:31.014 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:31.015 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:31.015 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:31.015 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:31.015 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:31.015 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:31.015 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:31.015 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:31.015 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:31.015 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:31.015 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:31.015 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:31.015 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:31.015 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:31.015 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:27:31.015 Found 0000:84:00.0 (0x8086 - 0x159b) 00:27:31.015 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:31.015 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:31.015 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:31.015 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:31.015 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:31.015 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:31.015 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:27:31.015 Found 0000:84:00.1 (0x8086 - 0x159b) 00:27:31.015 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:31.015 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:31.015 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:31.015 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:31.015 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:31.015 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:31.015 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:31.015 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:31.015 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:31.015 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:31.015 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:31.015 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:31.015 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:31.015 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:31.015 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:31.015 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:27:31.015 Found net devices under 0000:84:00.0: cvl_0_0 00:27:31.015 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:31.015 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:31.015 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:31.015 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:31.015 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:31.015 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:31.015 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:31.015 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:31.015 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:27:31.015 Found net devices under 0000:84:00.1: cvl_0_1 00:27:31.015 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:31.015 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:27:31.015 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # is_hw=yes 00:27:31.015 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:27:31.015 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:27:31.015 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:27:31.015 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:31.015 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:31.015 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:31.015 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:31.015 18:37:59 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:31.015 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:31.015 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:31.015 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:31.015 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:31.015 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:31.015 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:31.015 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:31.015 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:31.015 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:31.015 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:31.275 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:31.275 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:31.275 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:31.275 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:31.275 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:31.275 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:31.275 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:31.275 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:31.275 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:31.275 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:27:31.275 00:27:31.275 --- 10.0.0.2 ping statistics --- 00:27:31.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:31.275 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:27:31.275 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:31.275 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:31.275 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:27:31.275 00:27:31.275 --- 10.0.0.1 ping statistics --- 00:27:31.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:31.275 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:27:31.275 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:31.275 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # return 0 00:27:31.275 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:27:31.275 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:31.275 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:27:31.275 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:27:31.275 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:31.275 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:27:31.275 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:27:31.275 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:27:31.275 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:27:31.275 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:31.275 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:31.275 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # nvmfpid=1279229 00:27:31.275 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:27:31.275 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # waitforlisten 1279229 00:27:31.275 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 1279229 ']' 00:27:31.275 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:31.275 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:31.275 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:31.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:31.275 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:31.275 18:37:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:31.275 [2024-10-08 18:37:59.764999] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
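The rpc_cmd calls traced below are, roughly, thin wrappers around scripts/rpc.py talking to the default /var/tmp/spdk.sock socket of the nvmf_tgt that was just started. A rough stand-alone sketch of the async_init target and host configuration that follows, reusing the bdev name, subsystem NQN and NGUID that appear in this run (the readiness probe via rpc_get_methods is an assumption, not what the harness itself does):

  # wait until the target's RPC socket answers (rpc_get_methods is a cheap probe)
  until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
  ./scripts/rpc.py nvmf_create_transport -t tcp -o
  ./scripts/rpc.py bdev_null_create null0 1024 512    # null_bdev_size / null_block_size from the test
  ./scripts/rpc.py bdev_wait_for_examine
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 7ac8934c820c4486a606e7778fe1dde6
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # host side of the same SPDK instance attaches over TCP, producing bdev nvme0n1
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0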
00:27:31.275 [2024-10-08 18:37:59.765093] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:31.536 [2024-10-08 18:37:59.879591] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:31.795 [2024-10-08 18:38:00.098448] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:31.795 [2024-10-08 18:38:00.098553] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:31.795 [2024-10-08 18:38:00.098589] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:31.795 [2024-10-08 18:38:00.098619] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:31.795 [2024-10-08 18:38:00.098645] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:31.795 [2024-10-08 18:38:00.100032] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:27:31.795 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:31.795 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:27:31.795 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:27:31.795 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:31.795 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:32.055 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:32.055 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:32.055 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.055 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:32.055 [2024-10-08 18:38:00.352604] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:32.055 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.055 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:27:32.055 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.055 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:32.055 null0 00:27:32.055 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.055 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:27:32.055 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.055 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:32.055 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.055 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:27:32.055 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:27:32.055 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:32.055 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.055 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 7ac8934c820c4486a606e7778fe1dde6 00:27:32.055 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.055 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:32.055 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.055 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:32.055 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.055 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:32.055 [2024-10-08 18:38:00.405239] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:32.055 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.055 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:27:32.055 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.055 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:32.315 nvme0n1 00:27:32.315 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.315 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:32.315 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.315 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:32.315 [ 00:27:32.315 { 00:27:32.315 "name": "nvme0n1", 00:27:32.315 "aliases": [ 00:27:32.315 "7ac8934c-820c-4486-a606-e7778fe1dde6" 00:27:32.315 ], 00:27:32.315 "product_name": "NVMe disk", 00:27:32.315 "block_size": 512, 00:27:32.315 "num_blocks": 2097152, 00:27:32.315 "uuid": "7ac8934c-820c-4486-a606-e7778fe1dde6", 00:27:32.315 "numa_id": 1, 00:27:32.315 "assigned_rate_limits": { 00:27:32.315 "rw_ios_per_sec": 0, 00:27:32.315 "rw_mbytes_per_sec": 0, 00:27:32.315 "r_mbytes_per_sec": 0, 00:27:32.315 "w_mbytes_per_sec": 0 00:27:32.315 }, 00:27:32.315 "claimed": false, 00:27:32.315 "zoned": false, 00:27:32.315 "supported_io_types": { 00:27:32.315 "read": true, 00:27:32.315 "write": true, 00:27:32.315 "unmap": false, 00:27:32.315 "flush": true, 00:27:32.315 "reset": true, 00:27:32.315 "nvme_admin": true, 00:27:32.315 "nvme_io": true, 00:27:32.315 "nvme_io_md": false, 00:27:32.315 "write_zeroes": true, 00:27:32.315 "zcopy": false, 00:27:32.315 "get_zone_info": false, 00:27:32.315 "zone_management": false, 00:27:32.315 "zone_append": false, 00:27:32.315 "compare": true, 00:27:32.315 "compare_and_write": true, 00:27:32.315 "abort": true, 00:27:32.315 "seek_hole": false, 00:27:32.315 "seek_data": false, 00:27:32.315 "copy": true, 00:27:32.315 "nvme_iov_md": false 00:27:32.315 }, 00:27:32.315 
"memory_domains": [ 00:27:32.315 { 00:27:32.315 "dma_device_id": "system", 00:27:32.315 "dma_device_type": 1 00:27:32.315 } 00:27:32.315 ], 00:27:32.315 "driver_specific": { 00:27:32.315 "nvme": [ 00:27:32.315 { 00:27:32.315 "trid": { 00:27:32.315 "trtype": "TCP", 00:27:32.315 "adrfam": "IPv4", 00:27:32.315 "traddr": "10.0.0.2", 00:27:32.315 "trsvcid": "4420", 00:27:32.315 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:32.315 }, 00:27:32.315 "ctrlr_data": { 00:27:32.315 "cntlid": 1, 00:27:32.315 "vendor_id": "0x8086", 00:27:32.315 "model_number": "SPDK bdev Controller", 00:27:32.315 "serial_number": "00000000000000000000", 00:27:32.315 "firmware_revision": "25.01", 00:27:32.315 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:32.315 "oacs": { 00:27:32.315 "security": 0, 00:27:32.315 "format": 0, 00:27:32.315 "firmware": 0, 00:27:32.315 "ns_manage": 0 00:27:32.315 }, 00:27:32.315 "multi_ctrlr": true, 00:27:32.315 "ana_reporting": false 00:27:32.315 }, 00:27:32.315 "vs": { 00:27:32.315 "nvme_version": "1.3" 00:27:32.315 }, 00:27:32.315 "ns_data": { 00:27:32.315 "id": 1, 00:27:32.315 "can_share": true 00:27:32.315 } 00:27:32.315 } 00:27:32.315 ], 00:27:32.315 "mp_policy": "active_passive" 00:27:32.315 } 00:27:32.315 } 00:27:32.315 ] 00:27:32.315 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.315 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:27:32.315 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.315 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:32.315 [2024-10-08 18:38:00.682412] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:32.315 [2024-10-08 18:38:00.682610] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a98560 (9): Bad file descriptor 00:27:32.315 [2024-10-08 18:38:00.816013] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:27:32.315 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.315 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:32.315 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.315 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:32.315 [ 00:27:32.315 { 00:27:32.315 "name": "nvme0n1", 00:27:32.315 "aliases": [ 00:27:32.315 "7ac8934c-820c-4486-a606-e7778fe1dde6" 00:27:32.315 ], 00:27:32.315 "product_name": "NVMe disk", 00:27:32.315 "block_size": 512, 00:27:32.315 "num_blocks": 2097152, 00:27:32.315 "uuid": "7ac8934c-820c-4486-a606-e7778fe1dde6", 00:27:32.315 "numa_id": 1, 00:27:32.315 "assigned_rate_limits": { 00:27:32.315 "rw_ios_per_sec": 0, 00:27:32.315 "rw_mbytes_per_sec": 0, 00:27:32.315 "r_mbytes_per_sec": 0, 00:27:32.315 "w_mbytes_per_sec": 0 00:27:32.315 }, 00:27:32.315 "claimed": false, 00:27:32.315 "zoned": false, 00:27:32.315 "supported_io_types": { 00:27:32.315 "read": true, 00:27:32.315 "write": true, 00:27:32.315 "unmap": false, 00:27:32.315 "flush": true, 00:27:32.315 "reset": true, 00:27:32.315 "nvme_admin": true, 00:27:32.315 "nvme_io": true, 00:27:32.315 "nvme_io_md": false, 00:27:32.315 "write_zeroes": true, 00:27:32.315 "zcopy": false, 00:27:32.315 "get_zone_info": false, 00:27:32.315 "zone_management": false, 00:27:32.315 "zone_append": false, 00:27:32.315 "compare": true, 00:27:32.315 "compare_and_write": true, 00:27:32.315 "abort": true, 00:27:32.315 "seek_hole": false, 00:27:32.315 "seek_data": false, 00:27:32.315 "copy": true, 00:27:32.315 "nvme_iov_md": false 00:27:32.315 }, 00:27:32.315 "memory_domains": [ 00:27:32.315 { 00:27:32.315 "dma_device_id": "system", 00:27:32.315 "dma_device_type": 1 00:27:32.315 } 00:27:32.315 ], 00:27:32.315 "driver_specific": { 00:27:32.315 "nvme": [ 00:27:32.315 { 00:27:32.315 "trid": { 00:27:32.315 "trtype": "TCP", 00:27:32.315 "adrfam": "IPv4", 00:27:32.315 "traddr": "10.0.0.2", 00:27:32.315 "trsvcid": "4420", 00:27:32.315 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:32.315 }, 00:27:32.315 "ctrlr_data": { 00:27:32.315 "cntlid": 2, 00:27:32.315 "vendor_id": "0x8086", 00:27:32.315 "model_number": "SPDK bdev Controller", 00:27:32.315 "serial_number": "00000000000000000000", 00:27:32.315 "firmware_revision": "25.01", 00:27:32.315 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:32.315 "oacs": { 00:27:32.315 "security": 0, 00:27:32.315 "format": 0, 00:27:32.315 "firmware": 0, 00:27:32.315 "ns_manage": 0 00:27:32.315 }, 00:27:32.315 "multi_ctrlr": true, 00:27:32.315 "ana_reporting": false 00:27:32.315 }, 00:27:32.315 "vs": { 00:27:32.315 "nvme_version": "1.3" 00:27:32.315 }, 00:27:32.315 "ns_data": { 00:27:32.315 "id": 1, 00:27:32.315 "can_share": true 00:27:32.315 } 00:27:32.315 } 00:27:32.315 ], 00:27:32.315 "mp_policy": "active_passive" 00:27:32.315 } 00:27:32.315 } 00:27:32.315 ] 00:27:32.315 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.315 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.315 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.315 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:32.574 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
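One detail the bdev_get_bdevs dumps make easy to miss: the NGUID passed to nvmf_subsystem_add_ns (7ac8934c820c4486a606e7778fe1dde6, generated earlier with uuidgen | tr -d -) comes back on the host side as the bdev's uuid and alias, just with the hyphens restored. While a controller is attached (as in the dumps before the detach above, or after the TLS re-attach below), that round-trip can be checked with a short pipeline, again assuming jq:

  ./scripts/rpc.py bdev_get_bdevs -b nvme0n1 | jq -r '.[0].uuid' | tr -d -
  # expected output: 7ac8934c820c4486a606e7778fe1dde6 (the NGUID used when adding the namespace)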
00:27:32.574 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:27:32.574 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.UZVQLEDyub 00:27:32.574 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:27:32.574 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.UZVQLEDyub 00:27:32.574 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.UZVQLEDyub 00:27:32.574 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.574 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:32.574 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.574 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:27:32.574 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.574 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:32.574 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.574 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:27:32.574 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.574 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:32.574 [2024-10-08 18:38:00.899532] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:32.574 [2024-10-08 18:38:00.899852] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:32.574 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.574 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:27:32.574 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.574 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:32.574 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.574 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:27:32.574 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.574 18:38:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:32.574 [2024-10-08 18:38:00.923636] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:32.574 nvme0n1 00:27:32.574 18:38:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.574 18:38:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:27:32.574 18:38:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.574 18:38:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:32.574 [ 00:27:32.574 { 00:27:32.574 "name": "nvme0n1", 00:27:32.574 "aliases": [ 00:27:32.574 "7ac8934c-820c-4486-a606-e7778fe1dde6" 00:27:32.574 ], 00:27:32.574 "product_name": "NVMe disk", 00:27:32.574 "block_size": 512, 00:27:32.574 "num_blocks": 2097152, 00:27:32.574 "uuid": "7ac8934c-820c-4486-a606-e7778fe1dde6", 00:27:32.574 "numa_id": 1, 00:27:32.574 "assigned_rate_limits": { 00:27:32.574 "rw_ios_per_sec": 0, 00:27:32.574 "rw_mbytes_per_sec": 0, 00:27:32.574 "r_mbytes_per_sec": 0, 00:27:32.574 "w_mbytes_per_sec": 0 00:27:32.574 }, 00:27:32.574 "claimed": false, 00:27:32.574 "zoned": false, 00:27:32.574 "supported_io_types": { 00:27:32.574 "read": true, 00:27:32.574 "write": true, 00:27:32.574 "unmap": false, 00:27:32.574 "flush": true, 00:27:32.574 "reset": true, 00:27:32.574 "nvme_admin": true, 00:27:32.574 "nvme_io": true, 00:27:32.574 "nvme_io_md": false, 00:27:32.574 "write_zeroes": true, 00:27:32.574 "zcopy": false, 00:27:32.574 "get_zone_info": false, 00:27:32.574 "zone_management": false, 00:27:32.574 "zone_append": false, 00:27:32.574 "compare": true, 00:27:32.574 "compare_and_write": true, 00:27:32.574 "abort": true, 00:27:32.574 "seek_hole": false, 00:27:32.574 "seek_data": false, 00:27:32.574 "copy": true, 00:27:32.574 "nvme_iov_md": false 00:27:32.574 }, 00:27:32.574 "memory_domains": [ 00:27:32.574 { 00:27:32.574 "dma_device_id": "system", 00:27:32.574 "dma_device_type": 1 00:27:32.574 } 00:27:32.574 ], 00:27:32.574 "driver_specific": { 00:27:32.574 "nvme": [ 00:27:32.574 { 00:27:32.574 "trid": { 00:27:32.574 "trtype": "TCP", 00:27:32.574 "adrfam": "IPv4", 00:27:32.574 "traddr": "10.0.0.2", 00:27:32.574 "trsvcid": "4421", 00:27:32.574 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:32.574 }, 00:27:32.574 "ctrlr_data": { 00:27:32.574 "cntlid": 3, 00:27:32.574 "vendor_id": "0x8086", 00:27:32.574 "model_number": "SPDK bdev Controller", 00:27:32.574 "serial_number": "00000000000000000000", 00:27:32.574 "firmware_revision": "25.01", 00:27:32.574 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:32.574 "oacs": { 00:27:32.574 "security": 0, 00:27:32.574 "format": 0, 00:27:32.574 "firmware": 0, 00:27:32.574 "ns_manage": 0 00:27:32.574 }, 00:27:32.574 "multi_ctrlr": true, 00:27:32.574 "ana_reporting": false 00:27:32.574 }, 00:27:32.574 "vs": { 00:27:32.574 "nvme_version": "1.3" 00:27:32.574 }, 00:27:32.574 "ns_data": { 00:27:32.574 "id": 1, 00:27:32.574 "can_share": true 00:27:32.574 } 00:27:32.574 } 00:27:32.574 ], 00:27:32.574 "mp_policy": "active_passive" 00:27:32.575 } 00:27:32.575 } 00:27:32.575 ] 00:27:32.575 18:38:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.575 18:38:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.575 18:38:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.575 18:38:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:32.575 18:38:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.575 18:38:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.UZVQLEDyub 00:27:32.575 18:38:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
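[annotation] The block above is the TLS leg of async_init: a PSK is written to a temp file with mode 0600, registered in the keyring as key0, allow_any_host is disabled, a secure-channel listener is opened on 4421, the host NQN is bound to the PSK, and the controller is re-attached with --psk; the second bdev dump (cntlid 3, trsvcid 4421) confirms the secured connection came up, and both NOTICE lines flag TLS support as experimental. A minimal sketch of the same PSK flow, assuming scripts/rpc.py, the subsystem/host NQNs from the trace, and a placeholder key path in place of the mktemp file:

  # Register the pre-shared key file (chmod 0600 first) in the SPDK keyring as key0
  ./scripts/rpc.py keyring_file_add_key key0 /tmp/psk.key
  # Restrict the subsystem to explicitly allowed hosts
  ./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  # Open a TLS-only (secure channel) listener on the second port
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  # Allow the host NQN and bind it to the registered PSK
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
  # Host side: attach over TLS with the same key
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0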
00:27:32.575 18:38:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:27:32.575 18:38:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@514 -- # nvmfcleanup 00:27:32.575 18:38:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:27:32.575 18:38:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:32.575 18:38:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:27:32.575 18:38:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:32.575 18:38:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:32.575 rmmod nvme_tcp 00:27:32.575 rmmod nvme_fabrics 00:27:32.575 rmmod nvme_keyring 00:27:32.575 18:38:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:32.575 18:38:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:27:32.575 18:38:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:27:32.575 18:38:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@515 -- # '[' -n 1279229 ']' 00:27:32.575 18:38:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # killprocess 1279229 00:27:32.575 18:38:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 1279229 ']' 00:27:32.575 18:38:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 1279229 00:27:32.575 18:38:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:27:32.575 18:38:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:32.575 18:38:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1279229 00:27:32.833 18:38:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:32.833 18:38:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:32.833 18:38:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1279229' 00:27:32.833 killing process with pid 1279229 00:27:32.833 18:38:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 1279229 00:27:32.833 18:38:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 1279229 00:27:33.090 18:38:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:27:33.090 18:38:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:27:33.090 18:38:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:27:33.090 18:38:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:27:33.090 18:38:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # iptables-save 00:27:33.090 18:38:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:27:33.090 18:38:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # iptables-restore 00:27:33.091 18:38:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:33.091 18:38:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:33.091 18:38:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:27:33.091 18:38:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:33.091 18:38:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:34.994 18:38:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:34.994 00:27:34.994 real 0m6.986s 00:27:34.994 user 0m3.008s 00:27:34.994 sys 0m2.781s 00:27:34.994 18:38:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:34.994 18:38:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:34.994 ************************************ 00:27:34.994 END TEST nvmf_async_init 00:27:34.994 ************************************ 00:27:34.994 18:38:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:27:34.994 18:38:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:34.994 18:38:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:34.994 18:38:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.254 ************************************ 00:27:35.254 START TEST dma 00:27:35.254 ************************************ 00:27:35.254 18:38:03 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:27:35.254 * Looking for test storage... 00:27:35.254 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:35.255 18:38:03 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:35.255 18:38:03 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lcov --version 00:27:35.255 18:38:03 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:35.513 18:38:03 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:35.513 18:38:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:35.513 18:38:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:35.513 18:38:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:35.513 18:38:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:27:35.513 18:38:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:27:35.513 18:38:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:27:35.513 18:38:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:27:35.513 18:38:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:27:35.513 18:38:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:35.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:35.514 --rc genhtml_branch_coverage=1 00:27:35.514 --rc genhtml_function_coverage=1 00:27:35.514 --rc genhtml_legend=1 00:27:35.514 --rc geninfo_all_blocks=1 00:27:35.514 --rc geninfo_unexecuted_blocks=1 00:27:35.514 00:27:35.514 ' 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:35.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:35.514 --rc genhtml_branch_coverage=1 00:27:35.514 --rc genhtml_function_coverage=1 00:27:35.514 --rc genhtml_legend=1 00:27:35.514 --rc geninfo_all_blocks=1 00:27:35.514 --rc geninfo_unexecuted_blocks=1 00:27:35.514 00:27:35.514 ' 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:35.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:35.514 --rc genhtml_branch_coverage=1 00:27:35.514 --rc genhtml_function_coverage=1 00:27:35.514 --rc genhtml_legend=1 00:27:35.514 --rc geninfo_all_blocks=1 00:27:35.514 --rc geninfo_unexecuted_blocks=1 00:27:35.514 00:27:35.514 ' 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:35.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:35.514 --rc genhtml_branch_coverage=1 00:27:35.514 --rc genhtml_function_coverage=1 00:27:35.514 --rc genhtml_legend=1 00:27:35.514 --rc geninfo_all_blocks=1 00:27:35.514 --rc geninfo_unexecuted_blocks=1 00:27:35.514 00:27:35.514 ' 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:35.514 
18:38:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:35.514 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:27:35.514 00:27:35.514 real 0m0.306s 00:27:35.514 user 0m0.213s 00:27:35.514 sys 0m0.104s 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:27:35.514 ************************************ 00:27:35.514 END TEST dma 00:27:35.514 ************************************ 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.514 ************************************ 00:27:35.514 START TEST nvmf_identify 00:27:35.514 
************************************ 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:27:35.514 * Looking for test storage... 00:27:35.514 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lcov --version 00:27:35.514 18:38:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:35.774 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:35.774 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:35.774 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:35.774 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:35.774 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:27:35.774 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:27:35.774 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:27:35.774 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:27:35.774 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:27:35.774 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:27:35.774 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:27:35.774 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:35.774 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:27:35.774 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:27:35.774 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:35.774 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:35.774 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:27:35.774 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:27:35.774 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:35.774 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:27:35.774 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:27:35.774 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:27:35.774 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:27:35.774 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:35.774 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:27:35.774 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:27:35.774 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:35.774 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:35.774 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:27:35.774 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:35.774 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:35.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:35.774 --rc genhtml_branch_coverage=1 00:27:35.774 --rc genhtml_function_coverage=1 00:27:35.774 --rc genhtml_legend=1 00:27:35.774 --rc geninfo_all_blocks=1 00:27:35.774 --rc geninfo_unexecuted_blocks=1 00:27:35.774 00:27:35.774 ' 00:27:35.774 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:35.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:35.774 --rc genhtml_branch_coverage=1 00:27:35.774 --rc genhtml_function_coverage=1 00:27:35.774 --rc genhtml_legend=1 00:27:35.774 --rc geninfo_all_blocks=1 00:27:35.774 --rc geninfo_unexecuted_blocks=1 00:27:35.774 00:27:35.774 ' 00:27:35.774 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:35.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:35.774 --rc genhtml_branch_coverage=1 00:27:35.774 --rc genhtml_function_coverage=1 00:27:35.774 --rc genhtml_legend=1 00:27:35.774 --rc geninfo_all_blocks=1 00:27:35.774 --rc geninfo_unexecuted_blocks=1 00:27:35.774 00:27:35.774 ' 00:27:35.774 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:35.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:35.774 --rc genhtml_branch_coverage=1 00:27:35.774 --rc genhtml_function_coverage=1 00:27:35.774 --rc genhtml_legend=1 00:27:35.774 --rc geninfo_all_blocks=1 00:27:35.774 --rc geninfo_unexecuted_blocks=1 00:27:35.774 00:27:35.774 ' 00:27:35.774 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:35.774 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:27:35.774 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:35.774 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:35.774 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:35.774 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:35.774 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:35.774 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:35.774 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:35.774 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:35.774 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:35.774 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:35.774 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:35.774 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:27:35.774 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:35.774 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:35.774 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:35.774 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:35.774 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:35.774 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:27:35.774 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:35.774 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:35.774 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:35.774 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.774 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.774 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.774 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:27:35.774 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.774 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:27:35.775 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:35.775 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:35.775 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:35.775 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:35.775 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:35.775 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:35.775 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:35.775 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:35.775 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:35.775 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:35.775 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:35.775 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:35.775 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:27:35.775 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:27:35.775 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:35.775 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # prepare_net_devs 00:27:35.775 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # local -g is_hw=no 00:27:35.775 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # remove_spdk_ns 00:27:35.775 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:35.775 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:35.775 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:35.775 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:27:35.775 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:27:35.775 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:27:35.775 18:38:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:38.309 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:27:38.327 Found 0000:84:00.0 (0x8086 - 0x159b) 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:27:38.327 Found 0000:84:00.1 (0x8086 - 0x159b) 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 
00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:27:38.327 Found net devices under 0000:84:00.0: cvl_0_0 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:27:38.327 Found net devices under 0000:84:00.1: cvl_0_1 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # is_hw=yes 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:38.327 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:38.327 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:27:38.327 00:27:38.327 --- 10.0.0.2 ping statistics --- 00:27:38.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:38.327 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:38.327 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:38.327 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:27:38.327 00:27:38.327 --- 10.0.0.1 ping statistics --- 00:27:38.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:38.327 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # return 0 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:27:38.327 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:27:38.328 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:38.328 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:38.328 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1281513 00:27:38.328 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:38.328 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:38.328 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1281513 00:27:38.328 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 1281513 ']' 00:27:38.328 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:38.328 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:38.328 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:38.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:38.328 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:38.328 18:38:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:38.586 [2024-10-08 18:38:06.847802] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
00:27:38.586 [2024-10-08 18:38:06.847891] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:38.586 [2024-10-08 18:38:06.927247] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:38.586 [2024-10-08 18:38:07.055442] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:38.586 [2024-10-08 18:38:07.055514] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:38.586 [2024-10-08 18:38:07.055531] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:38.586 [2024-10-08 18:38:07.055545] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:38.586 [2024-10-08 18:38:07.055557] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:38.586 [2024-10-08 18:38:07.057520] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:27:38.586 [2024-10-08 18:38:07.057578] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:27:38.586 [2024-10-08 18:38:07.057604] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:27:38.586 [2024-10-08 18:38:07.057608] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:27:38.844 18:38:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:38.844 18:38:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:27:38.844 18:38:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:38.844 18:38:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.844 18:38:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:38.844 [2024-10-08 18:38:07.212078] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:38.844 18:38:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.844 18:38:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:27:38.844 18:38:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:38.844 18:38:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:38.844 18:38:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:38.844 18:38:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.844 18:38:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:38.844 Malloc0 00:27:38.844 18:38:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.844 18:38:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:38.844 18:38:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.844 18:38:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:38.844 18:38:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.844 18:38:07 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:27:38.844 18:38:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.844 18:38:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:38.844 18:38:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.844 18:38:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:38.844 18:38:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.844 18:38:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:38.844 [2024-10-08 18:38:07.302627] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:38.844 18:38:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.844 18:38:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:38.844 18:38:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.844 18:38:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:38.844 18:38:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.844 18:38:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:27:38.844 18:38:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.844 18:38:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:38.844 [ 00:27:38.844 { 00:27:38.844 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:38.844 "subtype": "Discovery", 00:27:38.844 "listen_addresses": [ 00:27:38.844 { 00:27:38.844 "trtype": "TCP", 00:27:38.844 "adrfam": "IPv4", 00:27:38.844 "traddr": "10.0.0.2", 00:27:38.844 "trsvcid": "4420" 00:27:38.844 } 00:27:38.844 ], 00:27:38.844 "allow_any_host": true, 00:27:38.844 "hosts": [] 00:27:38.844 }, 00:27:38.844 { 00:27:38.844 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:38.844 "subtype": "NVMe", 00:27:38.844 "listen_addresses": [ 00:27:38.844 { 00:27:38.844 "trtype": "TCP", 00:27:38.844 "adrfam": "IPv4", 00:27:38.844 "traddr": "10.0.0.2", 00:27:38.844 "trsvcid": "4420" 00:27:38.844 } 00:27:38.844 ], 00:27:38.844 "allow_any_host": true, 00:27:38.844 "hosts": [], 00:27:38.844 "serial_number": "SPDK00000000000001", 00:27:38.844 "model_number": "SPDK bdev Controller", 00:27:38.844 "max_namespaces": 32, 00:27:38.844 "min_cntlid": 1, 00:27:38.844 "max_cntlid": 65519, 00:27:38.844 "namespaces": [ 00:27:38.844 { 00:27:38.844 "nsid": 1, 00:27:38.844 "bdev_name": "Malloc0", 00:27:38.844 "name": "Malloc0", 00:27:38.844 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:27:38.844 "eui64": "ABCDEF0123456789", 00:27:38.844 "uuid": "a8d6df32-9c82-42d2-9788-6baadff82dc6" 00:27:38.844 } 00:27:38.844 ] 00:27:38.844 } 00:27:38.844 ] 00:27:38.844 18:38:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.844 18:38:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:27:38.844 [2024-10-08 18:38:07.356749] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:27:38.844 [2024-10-08 18:38:07.356801] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1281546 ] 00:27:39.106 [2024-10-08 18:38:07.401120] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:27:39.106 [2024-10-08 18:38:07.401201] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:27:39.106 [2024-10-08 18:38:07.401212] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:27:39.106 [2024-10-08 18:38:07.401231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:27:39.106 [2024-10-08 18:38:07.401246] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:27:39.106 [2024-10-08 18:38:07.405183] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:27:39.106 [2024-10-08 18:38:07.405237] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xae1760 0 00:27:39.106 [2024-10-08 18:38:07.412664] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:27:39.106 [2024-10-08 18:38:07.412691] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:27:39.106 [2024-10-08 18:38:07.412710] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:27:39.106 [2024-10-08 18:38:07.412716] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:27:39.106 [2024-10-08 18:38:07.412756] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.106 [2024-10-08 18:38:07.412768] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.106 [2024-10-08 18:38:07.412775] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xae1760) 00:27:39.106 [2024-10-08 18:38:07.412792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:39.106 [2024-10-08 18:38:07.412819] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb41480, cid 0, qid 0 00:27:39.106 [2024-10-08 18:38:07.420671] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.106 [2024-10-08 18:38:07.420690] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.106 [2024-10-08 18:38:07.420723] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.106 [2024-10-08 18:38:07.420732] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb41480) on tqpair=0xae1760 00:27:39.106 [2024-10-08 18:38:07.420748] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:39.106 [2024-10-08 18:38:07.420759] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:27:39.106 [2024-10-08 18:38:07.420769] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:27:39.106 [2024-10-08 18:38:07.420789] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.106 [2024-10-08 18:38:07.420798] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.106 [2024-10-08 18:38:07.420809] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xae1760) 00:27:39.106 [2024-10-08 18:38:07.420821] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.106 [2024-10-08 18:38:07.420846] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb41480, cid 0, qid 0 00:27:39.106 [2024-10-08 18:38:07.420981] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.106 [2024-10-08 18:38:07.421013] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.106 [2024-10-08 18:38:07.421021] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.106 [2024-10-08 18:38:07.421028] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb41480) on tqpair=0xae1760 00:27:39.106 [2024-10-08 18:38:07.421038] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:27:39.106 [2024-10-08 18:38:07.421051] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:27:39.106 [2024-10-08 18:38:07.421078] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.106 [2024-10-08 18:38:07.421086] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.106 [2024-10-08 18:38:07.421092] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xae1760) 00:27:39.106 [2024-10-08 18:38:07.421102] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.106 [2024-10-08 18:38:07.421124] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb41480, cid 0, qid 0 00:27:39.106 [2024-10-08 18:38:07.421254] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.106 [2024-10-08 18:38:07.421267] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.106 [2024-10-08 18:38:07.421274] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.106 [2024-10-08 18:38:07.421280] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb41480) on tqpair=0xae1760 00:27:39.106 [2024-10-08 18:38:07.421289] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:27:39.106 [2024-10-08 18:38:07.421302] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:27:39.106 [2024-10-08 18:38:07.421314] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.106 [2024-10-08 18:38:07.421322] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.106 [2024-10-08 18:38:07.421328] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xae1760) 00:27:39.106 [2024-10-08 18:38:07.421338] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.106 [2024-10-08 18:38:07.421358] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb41480, cid 0, qid 0 00:27:39.106 
[2024-10-08 18:38:07.421456] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.106 [2024-10-08 18:38:07.421469] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.106 [2024-10-08 18:38:07.421476] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.106 [2024-10-08 18:38:07.421482] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb41480) on tqpair=0xae1760 00:27:39.106 [2024-10-08 18:38:07.421490] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:39.106 [2024-10-08 18:38:07.421506] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.106 [2024-10-08 18:38:07.421515] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.106 [2024-10-08 18:38:07.421520] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xae1760) 00:27:39.106 [2024-10-08 18:38:07.421530] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.106 [2024-10-08 18:38:07.421555] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb41480, cid 0, qid 0 00:27:39.106 [2024-10-08 18:38:07.421645] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.106 [2024-10-08 18:38:07.421669] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.106 [2024-10-08 18:38:07.421676] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.106 [2024-10-08 18:38:07.421683] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb41480) on tqpair=0xae1760 00:27:39.106 [2024-10-08 18:38:07.421698] nvme_ctrlr.c:3924:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:27:39.106 [2024-10-08 18:38:07.421706] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:27:39.106 [2024-10-08 18:38:07.421720] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:39.106 [2024-10-08 18:38:07.421831] nvme_ctrlr.c:4122:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:27:39.106 [2024-10-08 18:38:07.421839] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:39.106 [2024-10-08 18:38:07.421853] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.106 [2024-10-08 18:38:07.421860] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.106 [2024-10-08 18:38:07.421866] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xae1760) 00:27:39.106 [2024-10-08 18:38:07.421877] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.106 [2024-10-08 18:38:07.421898] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb41480, cid 0, qid 0 00:27:39.106 [2024-10-08 18:38:07.422078] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.106 [2024-10-08 18:38:07.422092] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:27:39.106 [2024-10-08 18:38:07.422099] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.106 [2024-10-08 18:38:07.422105] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb41480) on tqpair=0xae1760 00:27:39.106 [2024-10-08 18:38:07.422113] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:39.106 [2024-10-08 18:38:07.422129] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.106 [2024-10-08 18:38:07.422137] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.106 [2024-10-08 18:38:07.422143] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xae1760) 00:27:39.106 [2024-10-08 18:38:07.422153] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.106 [2024-10-08 18:38:07.422173] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb41480, cid 0, qid 0 00:27:39.106 [2024-10-08 18:38:07.422278] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.106 [2024-10-08 18:38:07.422291] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.106 [2024-10-08 18:38:07.422298] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.106 [2024-10-08 18:38:07.422304] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb41480) on tqpair=0xae1760 00:27:39.106 [2024-10-08 18:38:07.422312] nvme_ctrlr.c:3959:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:39.106 [2024-10-08 18:38:07.422320] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:27:39.106 [2024-10-08 18:38:07.422332] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:27:39.106 [2024-10-08 18:38:07.422349] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:27:39.106 [2024-10-08 18:38:07.422365] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.106 [2024-10-08 18:38:07.422373] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xae1760) 00:27:39.106 [2024-10-08 18:38:07.422383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.106 [2024-10-08 18:38:07.422404] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb41480, cid 0, qid 0 00:27:39.106 [2024-10-08 18:38:07.422540] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:39.106 [2024-10-08 18:38:07.422555] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:39.106 [2024-10-08 18:38:07.422561] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:39.106 [2024-10-08 18:38:07.422579] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xae1760): datao=0, datal=4096, cccid=0 00:27:39.106 [2024-10-08 18:38:07.422586] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb41480) on tqpair(0xae1760): expected_datao=0, payload_size=4096 
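[Editor's note] The debug trace above shows spdk_nvme_identify bringing up its admin queue against the discovery subsystem configured by the preceding rpc_cmd calls: the TCP icreq, the FABRIC CONNECT, the VS/CAP/CC property reads, enabling the controller (CC.EN = 1, then waiting for CSTS.RDY = 1), and the first IDENTIFY command returning a 4096-byte controller data structure. For reference, the sketch below condenses the same target setup and identify step into a plain script. It is a hedged reconstruction, not a transcript of the harness: the test drives these RPCs through the rpc_cmd shell wrapper, and the use of scripts/rpc.py with its default RPC socket (and an already-running nvmf_tgt) is an assumption.

#!/usr/bin/env bash
# Sketch: reproduce the target setup and identify step seen in this run.
# Assumes an nvmf_tgt application is already running and that scripts/rpc.py
# reaches it over the default RPC socket.
set -e
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Target-side configuration (mirrors the rpc_cmd calls from host/identify.sh above)
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_get_subsystems

# Host-side identify against the discovery subsystem; this is the command whose
# connect / property-get / IDENTIFY trace appears in the surrounding log.
build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
    -L all

[End editor's note; the captured log continues below.]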
00:27:39.106 [2024-10-08 18:38:07.422593] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.106 [2024-10-08 18:38:07.422610] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:39.106 [2024-10-08 18:38:07.422618] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:39.106 [2024-10-08 18:38:07.462804] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.106 [2024-10-08 18:38:07.462825] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.106 [2024-10-08 18:38:07.462833] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.106 [2024-10-08 18:38:07.462841] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb41480) on tqpair=0xae1760 00:27:39.106 [2024-10-08 18:38:07.462854] nvme_ctrlr.c:2097:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:27:39.106 [2024-10-08 18:38:07.462863] nvme_ctrlr.c:2101:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:27:39.106 [2024-10-08 18:38:07.462871] nvme_ctrlr.c:2104:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:27:39.106 [2024-10-08 18:38:07.462881] nvme_ctrlr.c:2128:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:27:39.106 [2024-10-08 18:38:07.462889] nvme_ctrlr.c:2143:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:27:39.106 [2024-10-08 18:38:07.462897] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:27:39.106 [2024-10-08 18:38:07.462925] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:27:39.106 [2024-10-08 18:38:07.462940] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.106 [2024-10-08 18:38:07.462947] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.106 [2024-10-08 18:38:07.462971] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xae1760) 00:27:39.106 [2024-10-08 18:38:07.462982] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:39.106 [2024-10-08 18:38:07.463007] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb41480, cid 0, qid 0 00:27:39.106 [2024-10-08 18:38:07.463139] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.106 [2024-10-08 18:38:07.463153] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.106 [2024-10-08 18:38:07.463160] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.106 [2024-10-08 18:38:07.463166] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb41480) on tqpair=0xae1760 00:27:39.106 [2024-10-08 18:38:07.463178] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.106 [2024-10-08 18:38:07.463190] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.106 [2024-10-08 18:38:07.463197] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xae1760) 00:27:39.106 [2024-10-08 18:38:07.463207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:39.106 [2024-10-08 18:38:07.463217] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.106 [2024-10-08 18:38:07.463224] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.106 [2024-10-08 18:38:07.463230] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xae1760) 00:27:39.106 [2024-10-08 18:38:07.463239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:39.106 [2024-10-08 18:38:07.463248] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.106 [2024-10-08 18:38:07.463255] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.106 [2024-10-08 18:38:07.463261] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xae1760) 00:27:39.106 [2024-10-08 18:38:07.463285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:39.106 [2024-10-08 18:38:07.463294] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.106 [2024-10-08 18:38:07.463301] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.106 [2024-10-08 18:38:07.463306] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xae1760) 00:27:39.106 [2024-10-08 18:38:07.463315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:39.106 [2024-10-08 18:38:07.463323] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:27:39.106 [2024-10-08 18:38:07.463343] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:39.106 [2024-10-08 18:38:07.463356] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.106 [2024-10-08 18:38:07.463363] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xae1760) 00:27:39.106 [2024-10-08 18:38:07.463373] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.106 [2024-10-08 18:38:07.463398] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb41480, cid 0, qid 0 00:27:39.106 [2024-10-08 18:38:07.463409] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb41600, cid 1, qid 0 00:27:39.106 [2024-10-08 18:38:07.463416] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb41780, cid 2, qid 0 00:27:39.106 [2024-10-08 18:38:07.463423] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb41900, cid 3, qid 0 00:27:39.106 [2024-10-08 18:38:07.463430] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb41a80, cid 4, qid 0 00:27:39.106 [2024-10-08 18:38:07.463611] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.106 [2024-10-08 18:38:07.463623] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.106 [2024-10-08 18:38:07.463629] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.106 [2024-10-08 18:38:07.463635] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb41a80) on 
tqpair=0xae1760 00:27:39.106 [2024-10-08 18:38:07.463672] nvme_ctrlr.c:3077:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:27:39.106 [2024-10-08 18:38:07.463689] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:27:39.106 [2024-10-08 18:38:07.463709] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.106 [2024-10-08 18:38:07.463719] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xae1760) 00:27:39.106 [2024-10-08 18:38:07.463734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.106 [2024-10-08 18:38:07.463758] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb41a80, cid 4, qid 0 00:27:39.106 [2024-10-08 18:38:07.463889] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:39.106 [2024-10-08 18:38:07.463904] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:39.106 [2024-10-08 18:38:07.463911] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:39.106 [2024-10-08 18:38:07.463918] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xae1760): datao=0, datal=4096, cccid=4 00:27:39.106 [2024-10-08 18:38:07.463925] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb41a80) on tqpair(0xae1760): expected_datao=0, payload_size=4096 00:27:39.106 [2024-10-08 18:38:07.463932] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.106 [2024-10-08 18:38:07.463965] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:39.106 [2024-10-08 18:38:07.463973] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:39.106 [2024-10-08 18:38:07.464022] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.106 [2024-10-08 18:38:07.464033] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.106 [2024-10-08 18:38:07.464040] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.106 [2024-10-08 18:38:07.464046] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb41a80) on tqpair=0xae1760 00:27:39.106 [2024-10-08 18:38:07.464065] nvme_ctrlr.c:4220:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:27:39.106 [2024-10-08 18:38:07.464102] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.106 [2024-10-08 18:38:07.464112] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xae1760) 00:27:39.106 [2024-10-08 18:38:07.464123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.107 [2024-10-08 18:38:07.464134] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.107 [2024-10-08 18:38:07.464140] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.107 [2024-10-08 18:38:07.464146] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xae1760) 00:27:39.107 [2024-10-08 18:38:07.464155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:39.107 [2024-10-08 18:38:07.464177] nvme_tcp.c: 
951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb41a80, cid 4, qid 0 00:27:39.107 [2024-10-08 18:38:07.464187] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb41c00, cid 5, qid 0 00:27:39.107 [2024-10-08 18:38:07.464415] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:39.107 [2024-10-08 18:38:07.464426] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:39.107 [2024-10-08 18:38:07.464433] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:39.107 [2024-10-08 18:38:07.464439] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xae1760): datao=0, datal=1024, cccid=4 00:27:39.107 [2024-10-08 18:38:07.464446] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb41a80) on tqpair(0xae1760): expected_datao=0, payload_size=1024 00:27:39.107 [2024-10-08 18:38:07.464453] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.107 [2024-10-08 18:38:07.464462] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:39.107 [2024-10-08 18:38:07.464468] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:39.107 [2024-10-08 18:38:07.464476] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.107 [2024-10-08 18:38:07.464485] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.107 [2024-10-08 18:38:07.464491] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.107 [2024-10-08 18:38:07.464497] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb41c00) on tqpair=0xae1760 00:27:39.107 [2024-10-08 18:38:07.504806] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.107 [2024-10-08 18:38:07.504825] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.107 [2024-10-08 18:38:07.504832] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.107 [2024-10-08 18:38:07.504839] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb41a80) on tqpair=0xae1760 00:27:39.107 [2024-10-08 18:38:07.504865] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.107 [2024-10-08 18:38:07.504877] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xae1760) 00:27:39.107 [2024-10-08 18:38:07.504888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.107 [2024-10-08 18:38:07.504918] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb41a80, cid 4, qid 0 00:27:39.107 [2024-10-08 18:38:07.505039] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:39.107 [2024-10-08 18:38:07.505053] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:39.107 [2024-10-08 18:38:07.505060] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:39.107 [2024-10-08 18:38:07.505066] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xae1760): datao=0, datal=3072, cccid=4 00:27:39.107 [2024-10-08 18:38:07.505073] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb41a80) on tqpair(0xae1760): expected_datao=0, payload_size=3072 00:27:39.107 [2024-10-08 18:38:07.505081] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.107 [2024-10-08 18:38:07.505100] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:39.107 
[2024-10-08 18:38:07.505109] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:39.107 [2024-10-08 18:38:07.548666] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.107 [2024-10-08 18:38:07.548684] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.107 [2024-10-08 18:38:07.548691] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.107 [2024-10-08 18:38:07.548699] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb41a80) on tqpair=0xae1760 00:27:39.107 [2024-10-08 18:38:07.548714] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.107 [2024-10-08 18:38:07.548723] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xae1760) 00:27:39.107 [2024-10-08 18:38:07.548735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.107 [2024-10-08 18:38:07.548765] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb41a80, cid 4, qid 0 00:27:39.107 [2024-10-08 18:38:07.548885] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:39.107 [2024-10-08 18:38:07.548897] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:39.107 [2024-10-08 18:38:07.548903] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:39.107 [2024-10-08 18:38:07.548909] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xae1760): datao=0, datal=8, cccid=4 00:27:39.107 [2024-10-08 18:38:07.548917] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb41a80) on tqpair(0xae1760): expected_datao=0, payload_size=8 00:27:39.107 [2024-10-08 18:38:07.548938] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.107 [2024-10-08 18:38:07.548948] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:39.107 [2024-10-08 18:38:07.548955] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:39.107 [2024-10-08 18:38:07.591667] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.107 [2024-10-08 18:38:07.591687] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.107 [2024-10-08 18:38:07.591695] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.107 [2024-10-08 18:38:07.591702] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb41a80) on tqpair=0xae1760 00:27:39.107 ===================================================== 00:27:39.107 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:39.107 ===================================================== 00:27:39.107 Controller Capabilities/Features 00:27:39.107 ================================ 00:27:39.107 Vendor ID: 0000 00:27:39.107 Subsystem Vendor ID: 0000 00:27:39.107 Serial Number: .................... 00:27:39.107 Model Number: ........................................ 
00:27:39.107 Firmware Version: 25.01 00:27:39.107 Recommended Arb Burst: 0 00:27:39.107 IEEE OUI Identifier: 00 00 00 00:27:39.107 Multi-path I/O 00:27:39.107 May have multiple subsystem ports: No 00:27:39.107 May have multiple controllers: No 00:27:39.107 Associated with SR-IOV VF: No 00:27:39.107 Max Data Transfer Size: 131072 00:27:39.107 Max Number of Namespaces: 0 00:27:39.107 Max Number of I/O Queues: 1024 00:27:39.107 NVMe Specification Version (VS): 1.3 00:27:39.107 NVMe Specification Version (Identify): 1.3 00:27:39.107 Maximum Queue Entries: 128 00:27:39.107 Contiguous Queues Required: Yes 00:27:39.107 Arbitration Mechanisms Supported 00:27:39.107 Weighted Round Robin: Not Supported 00:27:39.107 Vendor Specific: Not Supported 00:27:39.107 Reset Timeout: 15000 ms 00:27:39.107 Doorbell Stride: 4 bytes 00:27:39.107 NVM Subsystem Reset: Not Supported 00:27:39.107 Command Sets Supported 00:27:39.107 NVM Command Set: Supported 00:27:39.107 Boot Partition: Not Supported 00:27:39.107 Memory Page Size Minimum: 4096 bytes 00:27:39.107 Memory Page Size Maximum: 4096 bytes 00:27:39.107 Persistent Memory Region: Not Supported 00:27:39.107 Optional Asynchronous Events Supported 00:27:39.107 Namespace Attribute Notices: Not Supported 00:27:39.107 Firmware Activation Notices: Not Supported 00:27:39.107 ANA Change Notices: Not Supported 00:27:39.107 PLE Aggregate Log Change Notices: Not Supported 00:27:39.107 LBA Status Info Alert Notices: Not Supported 00:27:39.107 EGE Aggregate Log Change Notices: Not Supported 00:27:39.107 Normal NVM Subsystem Shutdown event: Not Supported 00:27:39.107 Zone Descriptor Change Notices: Not Supported 00:27:39.107 Discovery Log Change Notices: Supported 00:27:39.107 Controller Attributes 00:27:39.107 128-bit Host Identifier: Not Supported 00:27:39.107 Non-Operational Permissive Mode: Not Supported 00:27:39.107 NVM Sets: Not Supported 00:27:39.107 Read Recovery Levels: Not Supported 00:27:39.107 Endurance Groups: Not Supported 00:27:39.107 Predictable Latency Mode: Not Supported 00:27:39.107 Traffic Based Keep ALive: Not Supported 00:27:39.107 Namespace Granularity: Not Supported 00:27:39.107 SQ Associations: Not Supported 00:27:39.107 UUID List: Not Supported 00:27:39.107 Multi-Domain Subsystem: Not Supported 00:27:39.107 Fixed Capacity Management: Not Supported 00:27:39.107 Variable Capacity Management: Not Supported 00:27:39.107 Delete Endurance Group: Not Supported 00:27:39.107 Delete NVM Set: Not Supported 00:27:39.107 Extended LBA Formats Supported: Not Supported 00:27:39.107 Flexible Data Placement Supported: Not Supported 00:27:39.107 00:27:39.107 Controller Memory Buffer Support 00:27:39.107 ================================ 00:27:39.107 Supported: No 00:27:39.107 00:27:39.107 Persistent Memory Region Support 00:27:39.107 ================================ 00:27:39.107 Supported: No 00:27:39.107 00:27:39.107 Admin Command Set Attributes 00:27:39.107 ============================ 00:27:39.107 Security Send/Receive: Not Supported 00:27:39.107 Format NVM: Not Supported 00:27:39.107 Firmware Activate/Download: Not Supported 00:27:39.107 Namespace Management: Not Supported 00:27:39.107 Device Self-Test: Not Supported 00:27:39.107 Directives: Not Supported 00:27:39.107 NVMe-MI: Not Supported 00:27:39.107 Virtualization Management: Not Supported 00:27:39.107 Doorbell Buffer Config: Not Supported 00:27:39.107 Get LBA Status Capability: Not Supported 00:27:39.107 Command & Feature Lockdown Capability: Not Supported 00:27:39.107 Abort Command Limit: 1 00:27:39.107 Async 
Event Request Limit: 4 00:27:39.107 Number of Firmware Slots: N/A 00:27:39.107 Firmware Slot 1 Read-Only: N/A 00:27:39.107 Firmware Activation Without Reset: N/A 00:27:39.107 Multiple Update Detection Support: N/A 00:27:39.107 Firmware Update Granularity: No Information Provided 00:27:39.107 Per-Namespace SMART Log: No 00:27:39.107 Asymmetric Namespace Access Log Page: Not Supported 00:27:39.107 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:39.107 Command Effects Log Page: Not Supported 00:27:39.107 Get Log Page Extended Data: Supported 00:27:39.107 Telemetry Log Pages: Not Supported 00:27:39.107 Persistent Event Log Pages: Not Supported 00:27:39.107 Supported Log Pages Log Page: May Support 00:27:39.107 Commands Supported & Effects Log Page: Not Supported 00:27:39.107 Feature Identifiers & Effects Log Page:May Support 00:27:39.107 NVMe-MI Commands & Effects Log Page: May Support 00:27:39.107 Data Area 4 for Telemetry Log: Not Supported 00:27:39.107 Error Log Page Entries Supported: 128 00:27:39.107 Keep Alive: Not Supported 00:27:39.107 00:27:39.107 NVM Command Set Attributes 00:27:39.107 ========================== 00:27:39.107 Submission Queue Entry Size 00:27:39.107 Max: 1 00:27:39.107 Min: 1 00:27:39.107 Completion Queue Entry Size 00:27:39.107 Max: 1 00:27:39.107 Min: 1 00:27:39.107 Number of Namespaces: 0 00:27:39.107 Compare Command: Not Supported 00:27:39.107 Write Uncorrectable Command: Not Supported 00:27:39.107 Dataset Management Command: Not Supported 00:27:39.107 Write Zeroes Command: Not Supported 00:27:39.107 Set Features Save Field: Not Supported 00:27:39.107 Reservations: Not Supported 00:27:39.107 Timestamp: Not Supported 00:27:39.107 Copy: Not Supported 00:27:39.107 Volatile Write Cache: Not Present 00:27:39.107 Atomic Write Unit (Normal): 1 00:27:39.107 Atomic Write Unit (PFail): 1 00:27:39.107 Atomic Compare & Write Unit: 1 00:27:39.107 Fused Compare & Write: Supported 00:27:39.107 Scatter-Gather List 00:27:39.107 SGL Command Set: Supported 00:27:39.107 SGL Keyed: Supported 00:27:39.107 SGL Bit Bucket Descriptor: Not Supported 00:27:39.107 SGL Metadata Pointer: Not Supported 00:27:39.107 Oversized SGL: Not Supported 00:27:39.107 SGL Metadata Address: Not Supported 00:27:39.107 SGL Offset: Supported 00:27:39.107 Transport SGL Data Block: Not Supported 00:27:39.107 Replay Protected Memory Block: Not Supported 00:27:39.107 00:27:39.107 Firmware Slot Information 00:27:39.107 ========================= 00:27:39.107 Active slot: 0 00:27:39.107 00:27:39.107 00:27:39.107 Error Log 00:27:39.107 ========= 00:27:39.107 00:27:39.107 Active Namespaces 00:27:39.107 ================= 00:27:39.107 Discovery Log Page 00:27:39.107 ================== 00:27:39.107 Generation Counter: 2 00:27:39.107 Number of Records: 2 00:27:39.107 Record Format: 0 00:27:39.107 00:27:39.107 Discovery Log Entry 0 00:27:39.107 ---------------------- 00:27:39.107 Transport Type: 3 (TCP) 00:27:39.107 Address Family: 1 (IPv4) 00:27:39.107 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:39.107 Entry Flags: 00:27:39.107 Duplicate Returned Information: 1 00:27:39.107 Explicit Persistent Connection Support for Discovery: 1 00:27:39.107 Transport Requirements: 00:27:39.107 Secure Channel: Not Required 00:27:39.107 Port ID: 0 (0x0000) 00:27:39.107 Controller ID: 65535 (0xffff) 00:27:39.107 Admin Max SQ Size: 128 00:27:39.107 Transport Service Identifier: 4420 00:27:39.107 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:39.107 Transport Address: 10.0.0.2 00:27:39.107 
Discovery Log Entry 1 00:27:39.107 ---------------------- 00:27:39.107 Transport Type: 3 (TCP) 00:27:39.107 Address Family: 1 (IPv4) 00:27:39.107 Subsystem Type: 2 (NVM Subsystem) 00:27:39.107 Entry Flags: 00:27:39.107 Duplicate Returned Information: 0 00:27:39.107 Explicit Persistent Connection Support for Discovery: 0 00:27:39.107 Transport Requirements: 00:27:39.107 Secure Channel: Not Required 00:27:39.107 Port ID: 0 (0x0000) 00:27:39.107 Controller ID: 65535 (0xffff) 00:27:39.107 Admin Max SQ Size: 128 00:27:39.107 Transport Service Identifier: 4420 00:27:39.107 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:27:39.107 Transport Address: 10.0.0.2 [2024-10-08 18:38:07.591841] nvme_ctrlr.c:4417:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:27:39.107 [2024-10-08 18:38:07.591863] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb41480) on tqpair=0xae1760 00:27:39.107 [2024-10-08 18:38:07.591875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.107 [2024-10-08 18:38:07.591884] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb41600) on tqpair=0xae1760 00:27:39.107 [2024-10-08 18:38:07.591892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.107 [2024-10-08 18:38:07.591900] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb41780) on tqpair=0xae1760 00:27:39.107 [2024-10-08 18:38:07.591908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.107 [2024-10-08 18:38:07.591916] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb41900) on tqpair=0xae1760 00:27:39.107 [2024-10-08 18:38:07.591924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.107 [2024-10-08 18:38:07.591952] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.107 [2024-10-08 18:38:07.591972] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.107 [2024-10-08 18:38:07.591978] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xae1760) 00:27:39.107 [2024-10-08 18:38:07.591989] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.107 [2024-10-08 18:38:07.592029] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb41900, cid 3, qid 0 00:27:39.107 [2024-10-08 18:38:07.592190] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.107 [2024-10-08 18:38:07.592201] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.107 [2024-10-08 18:38:07.592208] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.107 [2024-10-08 18:38:07.592214] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb41900) on tqpair=0xae1760 00:27:39.107 [2024-10-08 18:38:07.592226] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.107 [2024-10-08 18:38:07.592233] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.107 [2024-10-08 18:38:07.592239] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xae1760) 00:27:39.107 [2024-10-08 18:38:07.592249] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.107 [2024-10-08 18:38:07.592274] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb41900, cid 3, qid 0 00:27:39.107 [2024-10-08 18:38:07.592364] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.107 [2024-10-08 18:38:07.592375] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.107 [2024-10-08 18:38:07.592382] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.107 [2024-10-08 18:38:07.592388] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb41900) on tqpair=0xae1760 00:27:39.107 [2024-10-08 18:38:07.592395] nvme_ctrlr.c:1167:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:27:39.107 [2024-10-08 18:38:07.592403] nvme_ctrlr.c:1170:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:27:39.107 [2024-10-08 18:38:07.592423] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.107 [2024-10-08 18:38:07.592433] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.107 [2024-10-08 18:38:07.592439] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xae1760) 00:27:39.107 [2024-10-08 18:38:07.592449] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.107 [2024-10-08 18:38:07.592468] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb41900, cid 3, qid 0 00:27:39.107 [2024-10-08 18:38:07.592548] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.107 [2024-10-08 18:38:07.592562] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.107 [2024-10-08 18:38:07.592568] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.107 [2024-10-08 18:38:07.592575] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb41900) on tqpair=0xae1760 00:27:39.107 [2024-10-08 18:38:07.592591] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.107 [2024-10-08 18:38:07.592600] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.107 [2024-10-08 18:38:07.592606] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xae1760) 00:27:39.107 [2024-10-08 18:38:07.592616] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.107 [2024-10-08 18:38:07.592662] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb41900, cid 3, qid 0 00:27:39.107 [2024-10-08 18:38:07.592736] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.107 [2024-10-08 18:38:07.592750] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.107 [2024-10-08 18:38:07.592757] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.107 [2024-10-08 18:38:07.592764] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb41900) on tqpair=0xae1760 00:27:39.107 [2024-10-08 18:38:07.592780] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.107 [2024-10-08 18:38:07.592789] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.107 [2024-10-08 18:38:07.592796] nvme_tcp.c: 
986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xae1760) 00:27:39.108 [2024-10-08 18:38:07.592806] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.108 [2024-10-08 18:38:07.592828] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb41900, cid 3, qid 0 00:27:39.108 [2024-10-08 18:38:07.592924] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.108 [2024-10-08 18:38:07.592953] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.108 [2024-10-08 18:38:07.592960] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.108 [2024-10-08 18:38:07.592966] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb41900) on tqpair=0xae1760 00:27:39.108 [2024-10-08 18:38:07.592983] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.108 [2024-10-08 18:38:07.592992] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.108 [2024-10-08 18:38:07.593013] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xae1760) 00:27:39.108 [2024-10-08 18:38:07.593023] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.108 [2024-10-08 18:38:07.593044] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb41900, cid 3, qid 0 00:27:39.108 [2024-10-08 18:38:07.593152] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.108 [2024-10-08 18:38:07.593165] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.108 [2024-10-08 18:38:07.593171] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.108 [2024-10-08 18:38:07.593177] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb41900) on tqpair=0xae1760 00:27:39.108 [2024-10-08 18:38:07.593194] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.108 [2024-10-08 18:38:07.593203] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.108 [2024-10-08 18:38:07.593209] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xae1760) 00:27:39.108 [2024-10-08 18:38:07.593219] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.108 [2024-10-08 18:38:07.593239] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb41900, cid 3, qid 0 00:27:39.108 [2024-10-08 18:38:07.593316] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.108 [2024-10-08 18:38:07.593332] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.108 [2024-10-08 18:38:07.593339] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.108 [2024-10-08 18:38:07.593345] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb41900) on tqpair=0xae1760 00:27:39.108 [2024-10-08 18:38:07.593362] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.108 [2024-10-08 18:38:07.593370] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.108 [2024-10-08 18:38:07.593377] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xae1760) 00:27:39.108 [2024-10-08 18:38:07.593386] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.108 [2024-10-08 18:38:07.593406] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb41900, cid 3, qid 0 00:27:39.108 [2024-10-08 18:38:07.593483] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.108 [2024-10-08 18:38:07.593495] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.108 [2024-10-08 18:38:07.593502] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.108 [2024-10-08 18:38:07.593508] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb41900) on tqpair=0xae1760 00:27:39.108 [2024-10-08 18:38:07.593524] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.108 [2024-10-08 18:38:07.593532] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.108 [2024-10-08 18:38:07.593538] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xae1760) 00:27:39.108 [2024-10-08 18:38:07.593548] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.108 [2024-10-08 18:38:07.593568] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb41900, cid 3, qid 0 00:27:39.108 [2024-10-08 18:38:07.593662] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.108 [2024-10-08 18:38:07.593676] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.108 [2024-10-08 18:38:07.593683] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.108 [2024-10-08 18:38:07.593690] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb41900) on tqpair=0xae1760 00:27:39.108 [2024-10-08 18:38:07.593707] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.108 [2024-10-08 18:38:07.593716] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.108 [2024-10-08 18:38:07.593723] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xae1760) 00:27:39.108 [2024-10-08 18:38:07.593734] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.108 [2024-10-08 18:38:07.593755] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb41900, cid 3, qid 0 00:27:39.108 [2024-10-08 18:38:07.593851] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.108 [2024-10-08 18:38:07.593865] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.108 [2024-10-08 18:38:07.593872] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.108 [2024-10-08 18:38:07.593879] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb41900) on tqpair=0xae1760 00:27:39.108 [2024-10-08 18:38:07.593894] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.108 [2024-10-08 18:38:07.593903] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.108 [2024-10-08 18:38:07.593910] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xae1760) 00:27:39.108 [2024-10-08 18:38:07.593920] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.108 [2024-10-08 18:38:07.593956] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb41900, cid 3, qid 0 00:27:39.108 [2024-10-08 18:38:07.594047] 
nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.108 [2024-10-08 18:38:07.594058] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.108 [2024-10-08 18:38:07.594068] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.108 [2024-10-08 18:38:07.594075] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb41900) on tqpair=0xae1760 00:27:39.108 [2024-10-08 18:38:07.594092] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.108 [2024-10-08 18:38:07.594100] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.108 [2024-10-08 18:38:07.594106] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xae1760) 00:27:39.108 [2024-10-08 18:38:07.594116] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.108 [2024-10-08 18:38:07.594136] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb41900, cid 3, qid 0 00:27:39.108 [2024-10-08 18:38:07.594210] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.108 [2024-10-08 18:38:07.594223] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.108 [2024-10-08 18:38:07.594230] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.108 [2024-10-08 18:38:07.594236] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb41900) on tqpair=0xae1760 00:27:39.108 [2024-10-08 18:38:07.594251] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.108 [2024-10-08 18:38:07.594260] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.108 [2024-10-08 18:38:07.594266] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xae1760) 00:27:39.108 [2024-10-08 18:38:07.594276] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.108 [2024-10-08 18:38:07.594296] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb41900, cid 3, qid 0 00:27:39.108 [2024-10-08 18:38:07.594366] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.108 [2024-10-08 18:38:07.594377] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.108 [2024-10-08 18:38:07.594383] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.108 [2024-10-08 18:38:07.594390] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb41900) on tqpair=0xae1760 00:27:39.108 [2024-10-08 18:38:07.594405] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.108 [2024-10-08 18:38:07.594414] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.108 [2024-10-08 18:38:07.594420] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xae1760) 00:27:39.108 [2024-10-08 18:38:07.594429] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.108 [2024-10-08 18:38:07.594449] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb41900, cid 3, qid 0 00:27:39.108 [2024-10-08 18:38:07.594522] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.108 [2024-10-08 18:38:07.594534] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.108 [2024-10-08 18:38:07.594541] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.108 [2024-10-08 18:38:07.594547] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb41900) on tqpair=0xae1760 00:27:39.108 [2024-10-08 18:38:07.594563] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.108 [2024-10-08 18:38:07.594572] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.108 [2024-10-08 18:38:07.594578] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xae1760) 00:27:39.108 [2024-10-08 18:38:07.594588] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.108 [2024-10-08 18:38:07.594608] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb41900, cid 3, qid 0 00:27:39.108 [2024-10-08 18:38:07.594714] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.108 [2024-10-08 18:38:07.594729] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.108 [2024-10-08 18:38:07.594737] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.108 [2024-10-08 18:38:07.594743] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb41900) on tqpair=0xae1760 00:27:39.108 [2024-10-08 18:38:07.594764] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.108 [2024-10-08 18:38:07.594775] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.108 [2024-10-08 18:38:07.594782] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xae1760) 00:27:39.108 [2024-10-08 18:38:07.594792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.108 [2024-10-08 18:38:07.594814] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb41900, cid 3, qid 0 00:27:39.108 [2024-10-08 18:38:07.594896] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.108 [2024-10-08 18:38:07.594910] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.108 [2024-10-08 18:38:07.594916] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.108 [2024-10-08 18:38:07.594938] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb41900) on tqpair=0xae1760 00:27:39.108 [2024-10-08 18:38:07.594956] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.108 [2024-10-08 18:38:07.594965] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.108 [2024-10-08 18:38:07.594971] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xae1760) 00:27:39.108 [2024-10-08 18:38:07.594981] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.108 [2024-10-08 18:38:07.595017] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb41900, cid 3, qid 0 00:27:39.108 [2024-10-08 18:38:07.595146] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.108 [2024-10-08 18:38:07.595157] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.108 [2024-10-08 18:38:07.595163] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.108 [2024-10-08 18:38:07.595170] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb41900) on tqpair=0xae1760 00:27:39.108 
[2024-10-08 18:38:07.595185] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.108 [2024-10-08 18:38:07.595194] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.108 [2024-10-08 18:38:07.595200] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xae1760) 00:27:39.108 [2024-10-08 18:38:07.595209] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.108 [2024-10-08 18:38:07.595230] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb41900, cid 3, qid 0 00:27:39.108 [2024-10-08 18:38:07.595358] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.108 [2024-10-08 18:38:07.595370] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.108 [2024-10-08 18:38:07.595377] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.108 [2024-10-08 18:38:07.595383] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb41900) on tqpair=0xae1760 00:27:39.108 [2024-10-08 18:38:07.595399] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.108 [2024-10-08 18:38:07.595408] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.108 [2024-10-08 18:38:07.595414] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xae1760) 00:27:39.108 [2024-10-08 18:38:07.595424] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.108 [2024-10-08 18:38:07.595444] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb41900, cid 3, qid 0 00:27:39.108 [2024-10-08 18:38:07.595520] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.108 [2024-10-08 18:38:07.595533] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.108 [2024-10-08 18:38:07.595539] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.108 [2024-10-08 18:38:07.595546] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb41900) on tqpair=0xae1760 00:27:39.108 [2024-10-08 18:38:07.595561] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.108 [2024-10-08 18:38:07.595576] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.108 [2024-10-08 18:38:07.595583] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xae1760) 00:27:39.108 [2024-10-08 18:38:07.595593] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.108 [2024-10-08 18:38:07.595613] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb41900, cid 3, qid 0 00:27:39.108 [2024-10-08 18:38:07.595718] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.108 [2024-10-08 18:38:07.595734] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.108 [2024-10-08 18:38:07.595741] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.108 [2024-10-08 18:38:07.595748] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb41900) on tqpair=0xae1760 00:27:39.108 [2024-10-08 18:38:07.595765] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.108 [2024-10-08 18:38:07.595774] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.108 [2024-10-08 
18:38:07.595781] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xae1760) 00:27:39.108 [2024-10-08 18:38:07.595792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.108 [2024-10-08 18:38:07.595814] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb41900, cid 3, qid 0 00:27:39.108 [2024-10-08 18:38:07.595891] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.108 [2024-10-08 18:38:07.595905] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.108 [2024-10-08 18:38:07.595912] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.108 [2024-10-08 18:38:07.595918] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb41900) on tqpair=0xae1760 00:27:39.108 [2024-10-08 18:38:07.595935] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.108 [2024-10-08 18:38:07.595959] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.108 [2024-10-08 18:38:07.595965] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xae1760) 00:27:39.108 [2024-10-08 18:38:07.595975] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.108 [2024-10-08 18:38:07.595995] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb41900, cid 3, qid 0 00:27:39.108 [2024-10-08 18:38:07.596093] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.108 [2024-10-08 18:38:07.596106] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.108 [2024-10-08 18:38:07.596112] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.108 [2024-10-08 18:38:07.596119] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb41900) on tqpair=0xae1760 00:27:39.108 [2024-10-08 18:38:07.596134] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.108 [2024-10-08 18:38:07.596143] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.108 [2024-10-08 18:38:07.596149] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xae1760) 00:27:39.108 [2024-10-08 18:38:07.596159] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.108 [2024-10-08 18:38:07.596179] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb41900, cid 3, qid 0 00:27:39.108 [2024-10-08 18:38:07.596254] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.108 [2024-10-08 18:38:07.596265] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.108 [2024-10-08 18:38:07.596272] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.108 [2024-10-08 18:38:07.596278] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb41900) on tqpair=0xae1760 00:27:39.108 [2024-10-08 18:38:07.596293] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.108 [2024-10-08 18:38:07.596302] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.108 [2024-10-08 18:38:07.596312] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xae1760) 00:27:39.108 [2024-10-08 18:38:07.596322] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.108 [2024-10-08 18:38:07.596342] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb41900, cid 3, qid 0 00:27:39.108 [2024-10-08 18:38:07.596419] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.108 [2024-10-08 18:38:07.596431] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.108 [2024-10-08 18:38:07.596438] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.108 [2024-10-08 18:38:07.596444] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb41900) on tqpair=0xae1760 00:27:39.108 [2024-10-08 18:38:07.596459] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.108 [2024-10-08 18:38:07.596468] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.108 [2024-10-08 18:38:07.596474] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xae1760) 00:27:39.108 [2024-10-08 18:38:07.596484] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.108 [2024-10-08 18:38:07.596504] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb41900, cid 3, qid 0 00:27:39.108 [2024-10-08 18:38:07.596580] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.108 [2024-10-08 18:38:07.596593] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.108 [2024-10-08 18:38:07.596600] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.108 [2024-10-08 18:38:07.596606] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb41900) on tqpair=0xae1760 00:27:39.108 [2024-10-08 18:38:07.596621] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.108 [2024-10-08 18:38:07.596644] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.108 [2024-10-08 18:38:07.600662] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xae1760) 00:27:39.108 [2024-10-08 18:38:07.600679] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.108 [2024-10-08 18:38:07.600711] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb41900, cid 3, qid 0 00:27:39.108 [2024-10-08 18:38:07.600849] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.109 [2024-10-08 18:38:07.600861] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.109 [2024-10-08 18:38:07.600868] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.109 [2024-10-08 18:38:07.600875] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb41900) on tqpair=0xae1760 00:27:39.109 [2024-10-08 18:38:07.600888] nvme_ctrlr.c:1289:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 8 milliseconds 00:27:39.109 00:27:39.109 18:38:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:27:39.434 [2024-10-08 18:38:07.644099] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
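For readers tracing the identify run that starts here: below is a minimal sketch, not produced by this test, of the connect-and-identify flow that spdk_nvme_identify exercises against this target. It assumes the public SPDK host API (spdk/env.h, spdk/nvme.h) and mirrors the environment-setup pattern of SPDK's hello_world example; the application name, error handling, and output line are illustrative choices, while the transport ID string is the one passed to the tool above.

/* Illustrative sketch only -- not part of the test output. Assumes the
 * classic setup pattern from SPDK's hello_world example and the public
 * host API (spdk/env.h, spdk/nvme.h). */
#include <stdio.h>
#include <string.h>

#include "spdk/env.h"
#include "spdk/nvme.h"

int
main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid;
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";	/* illustrative app name */
	if (spdk_env_init(&env_opts) < 0) {
		fprintf(stderr, "spdk_env_init() failed\n");
		return 1;
	}

	/* Same transport ID string the test passes via 'spdk_nvme_identify -r'. */
	memset(&trid, 0, sizeof(trid));
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		fprintf(stderr, "failed to parse transport ID\n");
		return 1;
	}

	/* Drives the admin-queue state machine traced in the DEBUG lines below:
	 * connect adminq, read VS/CAP, set CC.EN = 1, wait for CSTS.RDY = 1,
	 * IDENTIFY the controller, configure AER, keep-alive, number of queues. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		fprintf(stderr, "spdk_nvme_connect() failed\n");
		return 1;
	}

	/* Cached IDENTIFY CONTROLLER data; the identify tool prints the same
	 * fields ("Model Number: SPDK bdev Controller", etc.) further down. */
	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("Model Number: %.*s\n", (int)sizeof(cdata->mn),
	       (const char *)cdata->mn);

	/* Detach triggers the controller shutdown path also visible in the log. */
	spdk_nvme_detach(ctrlr);
	return 0;
}

The "setting state to ..." DEBUG entries that follow (connect adminq, read vs, read cap, check en, enable controller, identify controller, configure AER, set keep alive timeout, set number of queues, identify ns, ready) are this state machine advancing inside the connect call.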
00:27:39.434 [2024-10-08 18:38:07.644150] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1281663 ] 00:27:39.434 [2024-10-08 18:38:07.688484] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:27:39.434 [2024-10-08 18:38:07.688533] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:27:39.434 [2024-10-08 18:38:07.688547] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:27:39.434 [2024-10-08 18:38:07.688565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:27:39.434 [2024-10-08 18:38:07.688577] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:27:39.434 [2024-10-08 18:38:07.689085] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:27:39.434 [2024-10-08 18:38:07.689125] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1828760 0 00:27:39.434 [2024-10-08 18:38:07.695666] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:27:39.434 [2024-10-08 18:38:07.695693] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:27:39.434 [2024-10-08 18:38:07.695702] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:27:39.434 [2024-10-08 18:38:07.695708] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:27:39.434 [2024-10-08 18:38:07.695738] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.434 [2024-10-08 18:38:07.695750] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.434 [2024-10-08 18:38:07.695756] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1828760) 00:27:39.434 [2024-10-08 18:38:07.695770] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:39.434 [2024-10-08 18:38:07.695797] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1888480, cid 0, qid 0 00:27:39.434 [2024-10-08 18:38:07.703677] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.434 [2024-10-08 18:38:07.703704] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.434 [2024-10-08 18:38:07.703712] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.434 [2024-10-08 18:38:07.703719] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1888480) on tqpair=0x1828760 00:27:39.434 [2024-10-08 18:38:07.703738] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:39.434 [2024-10-08 18:38:07.703749] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:27:39.434 [2024-10-08 18:38:07.703759] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:27:39.434 [2024-10-08 18:38:07.703775] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.434 [2024-10-08 18:38:07.703784] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.434 [2024-10-08 18:38:07.703790] nvme_tcp.c: 
986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1828760) 00:27:39.434 [2024-10-08 18:38:07.703801] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.434 [2024-10-08 18:38:07.703825] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1888480, cid 0, qid 0 00:27:39.434 [2024-10-08 18:38:07.704003] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.434 [2024-10-08 18:38:07.704018] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.434 [2024-10-08 18:38:07.704024] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.434 [2024-10-08 18:38:07.704031] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1888480) on tqpair=0x1828760 00:27:39.434 [2024-10-08 18:38:07.704038] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:27:39.434 [2024-10-08 18:38:07.704051] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:27:39.434 [2024-10-08 18:38:07.704062] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.434 [2024-10-08 18:38:07.704069] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.434 [2024-10-08 18:38:07.704075] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1828760) 00:27:39.434 [2024-10-08 18:38:07.704085] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.434 [2024-10-08 18:38:07.704110] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1888480, cid 0, qid 0 00:27:39.434 [2024-10-08 18:38:07.704209] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.434 [2024-10-08 18:38:07.704222] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.434 [2024-10-08 18:38:07.704229] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.434 [2024-10-08 18:38:07.704235] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1888480) on tqpair=0x1828760 00:27:39.434 [2024-10-08 18:38:07.704243] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:27:39.434 [2024-10-08 18:38:07.704256] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:27:39.434 [2024-10-08 18:38:07.704268] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.434 [2024-10-08 18:38:07.704275] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.434 [2024-10-08 18:38:07.704281] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1828760) 00:27:39.434 [2024-10-08 18:38:07.704291] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.434 [2024-10-08 18:38:07.704311] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1888480, cid 0, qid 0 00:27:39.434 [2024-10-08 18:38:07.704405] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.434 [2024-10-08 18:38:07.704418] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.434 [2024-10-08 18:38:07.704425] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.434 [2024-10-08 18:38:07.704431] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1888480) on tqpair=0x1828760 00:27:39.434 [2024-10-08 18:38:07.704439] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:39.434 [2024-10-08 18:38:07.704455] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.434 [2024-10-08 18:38:07.704464] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.434 [2024-10-08 18:38:07.704470] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1828760) 00:27:39.434 [2024-10-08 18:38:07.704480] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.434 [2024-10-08 18:38:07.704500] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1888480, cid 0, qid 0 00:27:39.434 [2024-10-08 18:38:07.704583] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.434 [2024-10-08 18:38:07.704595] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.434 [2024-10-08 18:38:07.704601] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.434 [2024-10-08 18:38:07.704608] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1888480) on tqpair=0x1828760 00:27:39.434 [2024-10-08 18:38:07.704615] nvme_ctrlr.c:3924:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:27:39.434 [2024-10-08 18:38:07.704622] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:27:39.434 [2024-10-08 18:38:07.704658] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:39.434 [2024-10-08 18:38:07.704769] nvme_ctrlr.c:4122:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:27:39.434 [2024-10-08 18:38:07.704776] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:39.434 [2024-10-08 18:38:07.704788] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.434 [2024-10-08 18:38:07.704796] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.434 [2024-10-08 18:38:07.704806] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1828760) 00:27:39.434 [2024-10-08 18:38:07.704816] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.434 [2024-10-08 18:38:07.704838] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1888480, cid 0, qid 0 00:27:39.434 [2024-10-08 18:38:07.704974] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.434 [2024-10-08 18:38:07.704986] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.434 [2024-10-08 18:38:07.704993] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.434 [2024-10-08 18:38:07.705000] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1888480) on tqpair=0x1828760 00:27:39.434 [2024-10-08 18:38:07.705007] 
nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:39.434 [2024-10-08 18:38:07.705023] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.434 [2024-10-08 18:38:07.705031] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.434 [2024-10-08 18:38:07.705037] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1828760) 00:27:39.434 [2024-10-08 18:38:07.705047] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.434 [2024-10-08 18:38:07.705067] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1888480, cid 0, qid 0 00:27:39.434 [2024-10-08 18:38:07.705150] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.434 [2024-10-08 18:38:07.705163] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.434 [2024-10-08 18:38:07.705170] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.434 [2024-10-08 18:38:07.705176] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1888480) on tqpair=0x1828760 00:27:39.434 [2024-10-08 18:38:07.705183] nvme_ctrlr.c:3959:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:39.434 [2024-10-08 18:38:07.705191] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:27:39.434 [2024-10-08 18:38:07.705204] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:27:39.434 [2024-10-08 18:38:07.705217] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:27:39.434 [2024-10-08 18:38:07.705231] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.434 [2024-10-08 18:38:07.705239] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1828760) 00:27:39.434 [2024-10-08 18:38:07.705249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.434 [2024-10-08 18:38:07.705270] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1888480, cid 0, qid 0 00:27:39.434 [2024-10-08 18:38:07.705389] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:39.434 [2024-10-08 18:38:07.705401] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:39.435 [2024-10-08 18:38:07.705408] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:39.435 [2024-10-08 18:38:07.705414] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1828760): datao=0, datal=4096, cccid=0 00:27:39.435 [2024-10-08 18:38:07.705421] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1888480) on tqpair(0x1828760): expected_datao=0, payload_size=4096 00:27:39.435 [2024-10-08 18:38:07.705428] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.435 [2024-10-08 18:38:07.705444] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:39.435 [2024-10-08 18:38:07.705452] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:39.435 [2024-10-08 
18:38:07.745806] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.435 [2024-10-08 18:38:07.745823] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.435 [2024-10-08 18:38:07.745830] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.435 [2024-10-08 18:38:07.745837] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1888480) on tqpair=0x1828760 00:27:39.435 [2024-10-08 18:38:07.745848] nvme_ctrlr.c:2097:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:27:39.435 [2024-10-08 18:38:07.745857] nvme_ctrlr.c:2101:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:27:39.435 [2024-10-08 18:38:07.745864] nvme_ctrlr.c:2104:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:27:39.435 [2024-10-08 18:38:07.745871] nvme_ctrlr.c:2128:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:27:39.435 [2024-10-08 18:38:07.745878] nvme_ctrlr.c:2143:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:27:39.435 [2024-10-08 18:38:07.745885] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:27:39.435 [2024-10-08 18:38:07.745904] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:27:39.435 [2024-10-08 18:38:07.745918] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.435 [2024-10-08 18:38:07.745925] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.435 [2024-10-08 18:38:07.745932] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1828760) 00:27:39.435 [2024-10-08 18:38:07.745958] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:39.435 [2024-10-08 18:38:07.745981] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1888480, cid 0, qid 0 00:27:39.435 [2024-10-08 18:38:07.746066] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.435 [2024-10-08 18:38:07.746080] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.435 [2024-10-08 18:38:07.746086] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.435 [2024-10-08 18:38:07.746093] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1888480) on tqpair=0x1828760 00:27:39.435 [2024-10-08 18:38:07.746103] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.435 [2024-10-08 18:38:07.746110] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.435 [2024-10-08 18:38:07.746115] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1828760) 00:27:39.435 [2024-10-08 18:38:07.746125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:39.435 [2024-10-08 18:38:07.746134] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.435 [2024-10-08 18:38:07.746141] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.435 [2024-10-08 18:38:07.746146] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1828760) 00:27:39.435 
[2024-10-08 18:38:07.746155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:39.435 [2024-10-08 18:38:07.746164] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.435 [2024-10-08 18:38:07.746170] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.435 [2024-10-08 18:38:07.746176] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1828760) 00:27:39.435 [2024-10-08 18:38:07.746184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:39.435 [2024-10-08 18:38:07.746193] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.435 [2024-10-08 18:38:07.746199] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.435 [2024-10-08 18:38:07.746204] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1828760) 00:27:39.435 [2024-10-08 18:38:07.746216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:39.435 [2024-10-08 18:38:07.746226] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:27:39.435 [2024-10-08 18:38:07.746250] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:39.435 [2024-10-08 18:38:07.746262] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.435 [2024-10-08 18:38:07.746269] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1828760) 00:27:39.435 [2024-10-08 18:38:07.746279] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.435 [2024-10-08 18:38:07.746301] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1888480, cid 0, qid 0 00:27:39.435 [2024-10-08 18:38:07.746312] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1888600, cid 1, qid 0 00:27:39.435 [2024-10-08 18:38:07.746319] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1888780, cid 2, qid 0 00:27:39.435 [2024-10-08 18:38:07.746326] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1888900, cid 3, qid 0 00:27:39.435 [2024-10-08 18:38:07.746333] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1888a80, cid 4, qid 0 00:27:39.435 [2024-10-08 18:38:07.746486] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.435 [2024-10-08 18:38:07.746499] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.435 [2024-10-08 18:38:07.746506] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.435 [2024-10-08 18:38:07.746512] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1888a80) on tqpair=0x1828760 00:27:39.435 [2024-10-08 18:38:07.746520] nvme_ctrlr.c:3077:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:27:39.435 [2024-10-08 18:38:07.746528] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:27:39.435 [2024-10-08 18:38:07.746541] 
nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:27:39.435 [2024-10-08 18:38:07.746555] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:27:39.435 [2024-10-08 18:38:07.746577] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.435 [2024-10-08 18:38:07.746584] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.435 [2024-10-08 18:38:07.746590] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1828760) 00:27:39.435 [2024-10-08 18:38:07.746600] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:39.435 [2024-10-08 18:38:07.746620] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1888a80, cid 4, qid 0 00:27:39.435 [2024-10-08 18:38:07.746730] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.435 [2024-10-08 18:38:07.746745] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.435 [2024-10-08 18:38:07.746752] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.435 [2024-10-08 18:38:07.746759] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1888a80) on tqpair=0x1828760 00:27:39.435 [2024-10-08 18:38:07.746822] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:27:39.435 [2024-10-08 18:38:07.746841] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:27:39.435 [2024-10-08 18:38:07.746856] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.435 [2024-10-08 18:38:07.746868] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1828760) 00:27:39.435 [2024-10-08 18:38:07.746879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.435 [2024-10-08 18:38:07.746912] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1888a80, cid 4, qid 0 00:27:39.435 [2024-10-08 18:38:07.747069] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:39.435 [2024-10-08 18:38:07.747081] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:39.435 [2024-10-08 18:38:07.747087] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:39.435 [2024-10-08 18:38:07.747093] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1828760): datao=0, datal=4096, cccid=4 00:27:39.435 [2024-10-08 18:38:07.747101] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1888a80) on tqpair(0x1828760): expected_datao=0, payload_size=4096 00:27:39.435 [2024-10-08 18:38:07.747108] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.435 [2024-10-08 18:38:07.747117] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:39.435 [2024-10-08 18:38:07.747124] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:39.435 [2024-10-08 18:38:07.747135] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.435 [2024-10-08 18:38:07.747144] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:27:39.435 [2024-10-08 18:38:07.747151] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.435 [2024-10-08 18:38:07.747157] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1888a80) on tqpair=0x1828760 00:27:39.435 [2024-10-08 18:38:07.747190] nvme_ctrlr.c:4753:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:27:39.435 [2024-10-08 18:38:07.747207] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:27:39.435 [2024-10-08 18:38:07.747225] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:27:39.435 [2024-10-08 18:38:07.747238] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.435 [2024-10-08 18:38:07.747245] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1828760) 00:27:39.435 [2024-10-08 18:38:07.747255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.435 [2024-10-08 18:38:07.747276] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1888a80, cid 4, qid 0 00:27:39.435 [2024-10-08 18:38:07.747399] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:39.435 [2024-10-08 18:38:07.747411] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:39.435 [2024-10-08 18:38:07.747417] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:39.435 [2024-10-08 18:38:07.747423] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1828760): datao=0, datal=4096, cccid=4 00:27:39.435 [2024-10-08 18:38:07.747430] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1888a80) on tqpair(0x1828760): expected_datao=0, payload_size=4096 00:27:39.435 [2024-10-08 18:38:07.747437] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.435 [2024-10-08 18:38:07.747446] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:39.435 [2024-10-08 18:38:07.747453] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:39.435 [2024-10-08 18:38:07.747464] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.436 [2024-10-08 18:38:07.747474] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.436 [2024-10-08 18:38:07.747480] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.436 [2024-10-08 18:38:07.747486] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1888a80) on tqpair=0x1828760 00:27:39.436 [2024-10-08 18:38:07.747506] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:27:39.436 [2024-10-08 18:38:07.747526] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:27:39.436 [2024-10-08 18:38:07.747540] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.436 [2024-10-08 18:38:07.747548] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1828760) 00:27:39.436 [2024-10-08 18:38:07.747558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 
cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.436 [2024-10-08 18:38:07.747578] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1888a80, cid 4, qid 0 00:27:39.436 [2024-10-08 18:38:07.751679] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:39.436 [2024-10-08 18:38:07.751695] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:39.436 [2024-10-08 18:38:07.751702] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:39.436 [2024-10-08 18:38:07.751709] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1828760): datao=0, datal=4096, cccid=4 00:27:39.436 [2024-10-08 18:38:07.751717] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1888a80) on tqpair(0x1828760): expected_datao=0, payload_size=4096 00:27:39.436 [2024-10-08 18:38:07.751725] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.436 [2024-10-08 18:38:07.751735] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:39.436 [2024-10-08 18:38:07.751742] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:39.436 [2024-10-08 18:38:07.751751] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.436 [2024-10-08 18:38:07.751760] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.436 [2024-10-08 18:38:07.751767] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.436 [2024-10-08 18:38:07.751773] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1888a80) on tqpair=0x1828760 00:27:39.436 [2024-10-08 18:38:07.751787] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:27:39.436 [2024-10-08 18:38:07.751802] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:27:39.436 [2024-10-08 18:38:07.751817] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:27:39.436 [2024-10-08 18:38:07.751829] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:27:39.436 [2024-10-08 18:38:07.751838] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:27:39.436 [2024-10-08 18:38:07.751846] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:27:39.436 [2024-10-08 18:38:07.751855] nvme_ctrlr.c:3165:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:27:39.436 [2024-10-08 18:38:07.751862] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:27:39.436 [2024-10-08 18:38:07.751871] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:27:39.436 [2024-10-08 18:38:07.751890] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.436 [2024-10-08 18:38:07.751899] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1828760) 00:27:39.436 [2024-10-08 18:38:07.751910] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.436 [2024-10-08 18:38:07.751921] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.436 [2024-10-08 18:38:07.751928] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.436 [2024-10-08 18:38:07.751938] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1828760) 00:27:39.436 [2024-10-08 18:38:07.751948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:39.436 [2024-10-08 18:38:07.751985] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1888a80, cid 4, qid 0 00:27:39.436 [2024-10-08 18:38:07.751996] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1888c00, cid 5, qid 0 00:27:39.436 [2024-10-08 18:38:07.752155] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.436 [2024-10-08 18:38:07.752169] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.436 [2024-10-08 18:38:07.752176] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.436 [2024-10-08 18:38:07.752182] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1888a80) on tqpair=0x1828760 00:27:39.436 [2024-10-08 18:38:07.752192] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.436 [2024-10-08 18:38:07.752201] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.436 [2024-10-08 18:38:07.752208] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.436 [2024-10-08 18:38:07.752214] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1888c00) on tqpair=0x1828760 00:27:39.436 [2024-10-08 18:38:07.752230] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.436 [2024-10-08 18:38:07.752238] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1828760) 00:27:39.436 [2024-10-08 18:38:07.752249] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.436 [2024-10-08 18:38:07.752268] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1888c00, cid 5, qid 0 00:27:39.436 [2024-10-08 18:38:07.752379] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.436 [2024-10-08 18:38:07.752391] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.436 [2024-10-08 18:38:07.752399] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.436 [2024-10-08 18:38:07.752405] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1888c00) on tqpair=0x1828760 00:27:39.436 [2024-10-08 18:38:07.752421] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.436 [2024-10-08 18:38:07.752429] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1828760) 00:27:39.436 [2024-10-08 18:38:07.752440] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.436 [2024-10-08 18:38:07.752460] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1888c00, cid 5, qid 0 00:27:39.436 [2024-10-08 18:38:07.752540] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.436 
[2024-10-08 18:38:07.752553] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.436 [2024-10-08 18:38:07.752560] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.436 [2024-10-08 18:38:07.752567] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1888c00) on tqpair=0x1828760 00:27:39.436 [2024-10-08 18:38:07.752582] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.436 [2024-10-08 18:38:07.752591] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1828760) 00:27:39.436 [2024-10-08 18:38:07.752601] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.436 [2024-10-08 18:38:07.752621] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1888c00, cid 5, qid 0 00:27:39.436 [2024-10-08 18:38:07.752723] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.436 [2024-10-08 18:38:07.752737] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.436 [2024-10-08 18:38:07.752744] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.436 [2024-10-08 18:38:07.752751] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1888c00) on tqpair=0x1828760 00:27:39.436 [2024-10-08 18:38:07.752779] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.436 [2024-10-08 18:38:07.752790] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1828760) 00:27:39.436 [2024-10-08 18:38:07.752801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.436 [2024-10-08 18:38:07.752814] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.436 [2024-10-08 18:38:07.752822] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1828760) 00:27:39.436 [2024-10-08 18:38:07.752831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.436 [2024-10-08 18:38:07.752843] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.436 [2024-10-08 18:38:07.752851] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1828760) 00:27:39.436 [2024-10-08 18:38:07.752861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.436 [2024-10-08 18:38:07.752872] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.436 [2024-10-08 18:38:07.752880] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1828760) 00:27:39.436 [2024-10-08 18:38:07.752890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.436 [2024-10-08 18:38:07.752912] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1888c00, cid 5, qid 0 00:27:39.436 [2024-10-08 18:38:07.752923] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1888a80, cid 4, qid 0 00:27:39.436 [2024-10-08 18:38:07.752946] nvme_tcp.c: 
951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1888d80, cid 6, qid 0 00:27:39.436 [2024-10-08 18:38:07.752954] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1888f00, cid 7, qid 0 00:27:39.436 [2024-10-08 18:38:07.753154] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:39.436 [2024-10-08 18:38:07.753168] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:39.436 [2024-10-08 18:38:07.753176] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:39.436 [2024-10-08 18:38:07.753182] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1828760): datao=0, datal=8192, cccid=5 00:27:39.436 [2024-10-08 18:38:07.753189] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1888c00) on tqpair(0x1828760): expected_datao=0, payload_size=8192 00:27:39.436 [2024-10-08 18:38:07.753197] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.436 [2024-10-08 18:38:07.753230] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:39.436 [2024-10-08 18:38:07.753241] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:39.436 [2024-10-08 18:38:07.753249] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:39.436 [2024-10-08 18:38:07.753259] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:39.436 [2024-10-08 18:38:07.753266] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:39.436 [2024-10-08 18:38:07.753272] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1828760): datao=0, datal=512, cccid=4 00:27:39.436 [2024-10-08 18:38:07.753279] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1888a80) on tqpair(0x1828760): expected_datao=0, payload_size=512 00:27:39.436 [2024-10-08 18:38:07.753286] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.436 [2024-10-08 18:38:07.753295] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:39.436 [2024-10-08 18:38:07.753301] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:39.436 [2024-10-08 18:38:07.753310] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:39.436 [2024-10-08 18:38:07.753318] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:39.437 [2024-10-08 18:38:07.753328] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:39.437 [2024-10-08 18:38:07.753335] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1828760): datao=0, datal=512, cccid=6 00:27:39.437 [2024-10-08 18:38:07.753342] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1888d80) on tqpair(0x1828760): expected_datao=0, payload_size=512 00:27:39.437 [2024-10-08 18:38:07.753349] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.437 [2024-10-08 18:38:07.753358] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:39.437 [2024-10-08 18:38:07.753364] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:39.437 [2024-10-08 18:38:07.753372] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:39.437 [2024-10-08 18:38:07.753381] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:39.437 [2024-10-08 18:38:07.753387] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:39.437 [2024-10-08 18:38:07.753394] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x1828760): datao=0, datal=4096, cccid=7 00:27:39.437 [2024-10-08 18:38:07.753401] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1888f00) on tqpair(0x1828760): expected_datao=0, payload_size=4096 00:27:39.437 [2024-10-08 18:38:07.753408] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.437 [2024-10-08 18:38:07.753417] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:39.437 [2024-10-08 18:38:07.753423] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:39.437 [2024-10-08 18:38:07.753434] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.437 [2024-10-08 18:38:07.753443] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.437 [2024-10-08 18:38:07.753450] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.437 [2024-10-08 18:38:07.753457] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1888c00) on tqpair=0x1828760 00:27:39.437 [2024-10-08 18:38:07.753483] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.437 [2024-10-08 18:38:07.753493] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.437 [2024-10-08 18:38:07.753500] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.437 [2024-10-08 18:38:07.753507] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1888a80) on tqpair=0x1828760 00:27:39.437 [2024-10-08 18:38:07.753523] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.437 [2024-10-08 18:38:07.753534] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.437 [2024-10-08 18:38:07.753540] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.437 [2024-10-08 18:38:07.753546] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1888d80) on tqpair=0x1828760 00:27:39.437 [2024-10-08 18:38:07.753556] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.437 [2024-10-08 18:38:07.753566] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.437 [2024-10-08 18:38:07.753572] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.437 [2024-10-08 18:38:07.753578] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1888f00) on tqpair=0x1828760 00:27:39.437 ===================================================== 00:27:39.437 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:39.437 ===================================================== 00:27:39.437 Controller Capabilities/Features 00:27:39.437 ================================ 00:27:39.437 Vendor ID: 8086 00:27:39.437 Subsystem Vendor ID: 8086 00:27:39.437 Serial Number: SPDK00000000000001 00:27:39.437 Model Number: SPDK bdev Controller 00:27:39.437 Firmware Version: 25.01 00:27:39.437 Recommended Arb Burst: 6 00:27:39.437 IEEE OUI Identifier: e4 d2 5c 00:27:39.437 Multi-path I/O 00:27:39.437 May have multiple subsystem ports: Yes 00:27:39.437 May have multiple controllers: Yes 00:27:39.437 Associated with SR-IOV VF: No 00:27:39.437 Max Data Transfer Size: 131072 00:27:39.437 Max Number of Namespaces: 32 00:27:39.437 Max Number of I/O Queues: 127 00:27:39.437 NVMe Specification Version (VS): 1.3 00:27:39.437 NVMe Specification Version (Identify): 1.3 00:27:39.437 Maximum Queue Entries: 128 00:27:39.437 Contiguous Queues Required: Yes 00:27:39.437 Arbitration Mechanisms Supported 00:27:39.437 Weighted Round Robin: Not Supported 
00:27:39.437 Vendor Specific: Not Supported 00:27:39.437 Reset Timeout: 15000 ms 00:27:39.437 Doorbell Stride: 4 bytes 00:27:39.437 NVM Subsystem Reset: Not Supported 00:27:39.437 Command Sets Supported 00:27:39.437 NVM Command Set: Supported 00:27:39.437 Boot Partition: Not Supported 00:27:39.437 Memory Page Size Minimum: 4096 bytes 00:27:39.437 Memory Page Size Maximum: 4096 bytes 00:27:39.437 Persistent Memory Region: Not Supported 00:27:39.437 Optional Asynchronous Events Supported 00:27:39.437 Namespace Attribute Notices: Supported 00:27:39.437 Firmware Activation Notices: Not Supported 00:27:39.437 ANA Change Notices: Not Supported 00:27:39.437 PLE Aggregate Log Change Notices: Not Supported 00:27:39.437 LBA Status Info Alert Notices: Not Supported 00:27:39.437 EGE Aggregate Log Change Notices: Not Supported 00:27:39.437 Normal NVM Subsystem Shutdown event: Not Supported 00:27:39.437 Zone Descriptor Change Notices: Not Supported 00:27:39.437 Discovery Log Change Notices: Not Supported 00:27:39.437 Controller Attributes 00:27:39.437 128-bit Host Identifier: Supported 00:27:39.437 Non-Operational Permissive Mode: Not Supported 00:27:39.437 NVM Sets: Not Supported 00:27:39.437 Read Recovery Levels: Not Supported 00:27:39.437 Endurance Groups: Not Supported 00:27:39.437 Predictable Latency Mode: Not Supported 00:27:39.437 Traffic Based Keep ALive: Not Supported 00:27:39.437 Namespace Granularity: Not Supported 00:27:39.437 SQ Associations: Not Supported 00:27:39.437 UUID List: Not Supported 00:27:39.437 Multi-Domain Subsystem: Not Supported 00:27:39.437 Fixed Capacity Management: Not Supported 00:27:39.437 Variable Capacity Management: Not Supported 00:27:39.437 Delete Endurance Group: Not Supported 00:27:39.437 Delete NVM Set: Not Supported 00:27:39.437 Extended LBA Formats Supported: Not Supported 00:27:39.437 Flexible Data Placement Supported: Not Supported 00:27:39.437 00:27:39.437 Controller Memory Buffer Support 00:27:39.437 ================================ 00:27:39.437 Supported: No 00:27:39.437 00:27:39.437 Persistent Memory Region Support 00:27:39.437 ================================ 00:27:39.437 Supported: No 00:27:39.437 00:27:39.437 Admin Command Set Attributes 00:27:39.437 ============================ 00:27:39.437 Security Send/Receive: Not Supported 00:27:39.437 Format NVM: Not Supported 00:27:39.437 Firmware Activate/Download: Not Supported 00:27:39.437 Namespace Management: Not Supported 00:27:39.437 Device Self-Test: Not Supported 00:27:39.437 Directives: Not Supported 00:27:39.437 NVMe-MI: Not Supported 00:27:39.437 Virtualization Management: Not Supported 00:27:39.437 Doorbell Buffer Config: Not Supported 00:27:39.437 Get LBA Status Capability: Not Supported 00:27:39.437 Command & Feature Lockdown Capability: Not Supported 00:27:39.437 Abort Command Limit: 4 00:27:39.437 Async Event Request Limit: 4 00:27:39.437 Number of Firmware Slots: N/A 00:27:39.437 Firmware Slot 1 Read-Only: N/A 00:27:39.437 Firmware Activation Without Reset: N/A 00:27:39.437 Multiple Update Detection Support: N/A 00:27:39.437 Firmware Update Granularity: No Information Provided 00:27:39.437 Per-Namespace SMART Log: No 00:27:39.437 Asymmetric Namespace Access Log Page: Not Supported 00:27:39.437 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:27:39.437 Command Effects Log Page: Supported 00:27:39.437 Get Log Page Extended Data: Supported 00:27:39.437 Telemetry Log Pages: Not Supported 00:27:39.437 Persistent Event Log Pages: Not Supported 00:27:39.437 Supported Log Pages Log Page: May Support 
00:27:39.437 Commands Supported & Effects Log Page: Not Supported 00:27:39.437 Feature Identifiers & Effects Log Page:May Support 00:27:39.437 NVMe-MI Commands & Effects Log Page: May Support 00:27:39.437 Data Area 4 for Telemetry Log: Not Supported 00:27:39.437 Error Log Page Entries Supported: 128 00:27:39.437 Keep Alive: Supported 00:27:39.437 Keep Alive Granularity: 10000 ms 00:27:39.437 00:27:39.437 NVM Command Set Attributes 00:27:39.437 ========================== 00:27:39.437 Submission Queue Entry Size 00:27:39.437 Max: 64 00:27:39.437 Min: 64 00:27:39.437 Completion Queue Entry Size 00:27:39.437 Max: 16 00:27:39.437 Min: 16 00:27:39.437 Number of Namespaces: 32 00:27:39.437 Compare Command: Supported 00:27:39.437 Write Uncorrectable Command: Not Supported 00:27:39.437 Dataset Management Command: Supported 00:27:39.437 Write Zeroes Command: Supported 00:27:39.437 Set Features Save Field: Not Supported 00:27:39.437 Reservations: Supported 00:27:39.437 Timestamp: Not Supported 00:27:39.437 Copy: Supported 00:27:39.437 Volatile Write Cache: Present 00:27:39.437 Atomic Write Unit (Normal): 1 00:27:39.437 Atomic Write Unit (PFail): 1 00:27:39.437 Atomic Compare & Write Unit: 1 00:27:39.437 Fused Compare & Write: Supported 00:27:39.437 Scatter-Gather List 00:27:39.437 SGL Command Set: Supported 00:27:39.437 SGL Keyed: Supported 00:27:39.437 SGL Bit Bucket Descriptor: Not Supported 00:27:39.437 SGL Metadata Pointer: Not Supported 00:27:39.437 Oversized SGL: Not Supported 00:27:39.437 SGL Metadata Address: Not Supported 00:27:39.437 SGL Offset: Supported 00:27:39.437 Transport SGL Data Block: Not Supported 00:27:39.437 Replay Protected Memory Block: Not Supported 00:27:39.437 00:27:39.437 Firmware Slot Information 00:27:39.437 ========================= 00:27:39.437 Active slot: 1 00:27:39.437 Slot 1 Firmware Revision: 25.01 00:27:39.437 00:27:39.437 00:27:39.437 Commands Supported and Effects 00:27:39.437 ============================== 00:27:39.437 Admin Commands 00:27:39.437 -------------- 00:27:39.437 Get Log Page (02h): Supported 00:27:39.437 Identify (06h): Supported 00:27:39.438 Abort (08h): Supported 00:27:39.438 Set Features (09h): Supported 00:27:39.438 Get Features (0Ah): Supported 00:27:39.438 Asynchronous Event Request (0Ch): Supported 00:27:39.438 Keep Alive (18h): Supported 00:27:39.438 I/O Commands 00:27:39.438 ------------ 00:27:39.438 Flush (00h): Supported LBA-Change 00:27:39.438 Write (01h): Supported LBA-Change 00:27:39.438 Read (02h): Supported 00:27:39.438 Compare (05h): Supported 00:27:39.438 Write Zeroes (08h): Supported LBA-Change 00:27:39.438 Dataset Management (09h): Supported LBA-Change 00:27:39.438 Copy (19h): Supported LBA-Change 00:27:39.438 00:27:39.438 Error Log 00:27:39.438 ========= 00:27:39.438 00:27:39.438 Arbitration 00:27:39.438 =========== 00:27:39.438 Arbitration Burst: 1 00:27:39.438 00:27:39.438 Power Management 00:27:39.438 ================ 00:27:39.438 Number of Power States: 1 00:27:39.438 Current Power State: Power State #0 00:27:39.438 Power State #0: 00:27:39.438 Max Power: 0.00 W 00:27:39.438 Non-Operational State: Operational 00:27:39.438 Entry Latency: Not Reported 00:27:39.438 Exit Latency: Not Reported 00:27:39.438 Relative Read Throughput: 0 00:27:39.438 Relative Read Latency: 0 00:27:39.438 Relative Write Throughput: 0 00:27:39.438 Relative Write Latency: 0 00:27:39.438 Idle Power: Not Reported 00:27:39.438 Active Power: Not Reported 00:27:39.438 Non-Operational Permissive Mode: Not Supported 00:27:39.438 00:27:39.438 Health 
Information 00:27:39.438 ================== 00:27:39.438 Critical Warnings: 00:27:39.438 Available Spare Space: OK 00:27:39.438 Temperature: OK 00:27:39.438 Device Reliability: OK 00:27:39.438 Read Only: No 00:27:39.438 Volatile Memory Backup: OK 00:27:39.438 Current Temperature: 0 Kelvin (-273 Celsius) 00:27:39.438 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:27:39.438 Available Spare: 0% 00:27:39.438 Available Spare Threshold: 0% 00:27:39.438 Life Percentage Used:[2024-10-08 18:38:07.753714] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.438 [2024-10-08 18:38:07.753726] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1828760) 00:27:39.438 [2024-10-08 18:38:07.753737] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.438 [2024-10-08 18:38:07.753759] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1888f00, cid 7, qid 0 00:27:39.438 [2024-10-08 18:38:07.753872] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.438 [2024-10-08 18:38:07.753885] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.438 [2024-10-08 18:38:07.753891] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.438 [2024-10-08 18:38:07.753898] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1888f00) on tqpair=0x1828760 00:27:39.438 [2024-10-08 18:38:07.753959] nvme_ctrlr.c:4417:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:27:39.438 [2024-10-08 18:38:07.753978] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1888480) on tqpair=0x1828760 00:27:39.438 [2024-10-08 18:38:07.753988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.438 [2024-10-08 18:38:07.753996] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1888600) on tqpair=0x1828760 00:27:39.438 [2024-10-08 18:38:07.754004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.438 [2024-10-08 18:38:07.754011] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1888780) on tqpair=0x1828760 00:27:39.438 [2024-10-08 18:38:07.754018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.438 [2024-10-08 18:38:07.754026] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1888900) on tqpair=0x1828760 00:27:39.438 [2024-10-08 18:38:07.754033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.438 [2024-10-08 18:38:07.754044] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.438 [2024-10-08 18:38:07.754052] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.438 [2024-10-08 18:38:07.754058] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1828760) 00:27:39.438 [2024-10-08 18:38:07.754068] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.438 [2024-10-08 18:38:07.754089] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1888900, cid 3, qid 0 00:27:39.438 [2024-10-08 
18:38:07.754215] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.438 [2024-10-08 18:38:07.754229] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.438 [2024-10-08 18:38:07.754236] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.438 [2024-10-08 18:38:07.754242] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1888900) on tqpair=0x1828760 00:27:39.438 [2024-10-08 18:38:07.754253] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.438 [2024-10-08 18:38:07.754260] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.438 [2024-10-08 18:38:07.754266] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1828760) 00:27:39.438 [2024-10-08 18:38:07.754276] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.438 [2024-10-08 18:38:07.754301] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1888900, cid 3, qid 0 00:27:39.438 [2024-10-08 18:38:07.754395] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.438 [2024-10-08 18:38:07.754408] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.438 [2024-10-08 18:38:07.754414] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.438 [2024-10-08 18:38:07.754421] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1888900) on tqpair=0x1828760 00:27:39.438 [2024-10-08 18:38:07.754428] nvme_ctrlr.c:1167:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:27:39.438 [2024-10-08 18:38:07.754435] nvme_ctrlr.c:1170:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:27:39.438 [2024-10-08 18:38:07.754451] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.438 [2024-10-08 18:38:07.754459] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.438 [2024-10-08 18:38:07.754465] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1828760) 00:27:39.438 [2024-10-08 18:38:07.754475] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.438 [2024-10-08 18:38:07.754495] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1888900, cid 3, qid 0 00:27:39.438 [2024-10-08 18:38:07.754576] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.438 [2024-10-08 18:38:07.754587] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.438 [2024-10-08 18:38:07.754594] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.438 [2024-10-08 18:38:07.754601] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1888900) on tqpair=0x1828760 00:27:39.438 [2024-10-08 18:38:07.754616] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.438 [2024-10-08 18:38:07.754625] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.438 [2024-10-08 18:38:07.754646] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1828760) 00:27:39.438 [2024-10-08 18:38:07.754668] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.438 [2024-10-08 18:38:07.754691] nvme_tcp.c: 
951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1888900, cid 3, qid 0 00:27:39.438 [2024-10-08 18:38:07.754768] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.438 [2024-10-08 18:38:07.754779] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.438 [2024-10-08 18:38:07.754786] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.438 [2024-10-08 18:38:07.754793] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1888900) on tqpair=0x1828760 00:27:39.438 [2024-10-08 18:38:07.754808] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.438 [2024-10-08 18:38:07.754817] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.438 [2024-10-08 18:38:07.754823] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1828760) 00:27:39.438 [2024-10-08 18:38:07.754834] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.438 [2024-10-08 18:38:07.754854] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1888900, cid 3, qid 0 00:27:39.438 [2024-10-08 18:38:07.754952] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.438 [2024-10-08 18:38:07.754966] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.438 [2024-10-08 18:38:07.754973] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.438 [2024-10-08 18:38:07.754979] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1888900) on tqpair=0x1828760 00:27:39.438 [2024-10-08 18:38:07.754995] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.438 [2024-10-08 18:38:07.755004] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.438 [2024-10-08 18:38:07.755010] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1828760) 00:27:39.438 [2024-10-08 18:38:07.755020] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.439 [2024-10-08 18:38:07.755040] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1888900, cid 3, qid 0 00:27:39.439 [2024-10-08 18:38:07.755118] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.439 [2024-10-08 18:38:07.755129] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.439 [2024-10-08 18:38:07.755135] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.439 [2024-10-08 18:38:07.755142] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1888900) on tqpair=0x1828760 00:27:39.439 [2024-10-08 18:38:07.755157] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.439 [2024-10-08 18:38:07.755166] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.439 [2024-10-08 18:38:07.755172] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1828760) 00:27:39.439 [2024-10-08 18:38:07.755182] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.439 [2024-10-08 18:38:07.755201] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1888900, cid 3, qid 0 00:27:39.439 [2024-10-08 18:38:07.755278] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.439 [2024-10-08 
18:38:07.755294] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.439 [2024-10-08 18:38:07.755301] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.439 [2024-10-08 18:38:07.755308] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1888900) on tqpair=0x1828760 00:27:39.439 [2024-10-08 18:38:07.755322] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.439 [2024-10-08 18:38:07.755330] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.439 [2024-10-08 18:38:07.755336] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1828760) 00:27:39.439 [2024-10-08 18:38:07.755346] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.439 [2024-10-08 18:38:07.755366] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1888900, cid 3, qid 0 00:27:39.439 [2024-10-08 18:38:07.755440] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.439 [2024-10-08 18:38:07.755451] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.439 [2024-10-08 18:38:07.755458] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.439 [2024-10-08 18:38:07.755464] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1888900) on tqpair=0x1828760 00:27:39.439 [2024-10-08 18:38:07.755480] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.439 [2024-10-08 18:38:07.755488] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.439 [2024-10-08 18:38:07.755494] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1828760) 00:27:39.439 [2024-10-08 18:38:07.755504] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.439 [2024-10-08 18:38:07.755523] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1888900, cid 3, qid 0 00:27:39.439 [2024-10-08 18:38:07.755596] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.439 [2024-10-08 18:38:07.755609] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.439 [2024-10-08 18:38:07.755616] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.439 [2024-10-08 18:38:07.755622] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1888900) on tqpair=0x1828760 00:27:39.439 [2024-10-08 18:38:07.759662] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:39.439 [2024-10-08 18:38:07.759676] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:39.439 [2024-10-08 18:38:07.759683] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1828760) 00:27:39.439 [2024-10-08 18:38:07.759693] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.439 [2024-10-08 18:38:07.759715] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1888900, cid 3, qid 0 00:27:39.439 [2024-10-08 18:38:07.759817] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:39.439 [2024-10-08 18:38:07.759831] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:39.439 [2024-10-08 18:38:07.759838] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:39.439 
[2024-10-08 18:38:07.759844] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1888900) on tqpair=0x1828760 00:27:39.439 [2024-10-08 18:38:07.759857] nvme_ctrlr.c:1289:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:27:39.439 0% 00:27:39.439 Data Units Read: 0 00:27:39.439 Data Units Written: 0 00:27:39.439 Host Read Commands: 0 00:27:39.439 Host Write Commands: 0 00:27:39.439 Controller Busy Time: 0 minutes 00:27:39.439 Power Cycles: 0 00:27:39.439 Power On Hours: 0 hours 00:27:39.439 Unsafe Shutdowns: 0 00:27:39.439 Unrecoverable Media Errors: 0 00:27:39.439 Lifetime Error Log Entries: 0 00:27:39.439 Warning Temperature Time: 0 minutes 00:27:39.439 Critical Temperature Time: 0 minutes 00:27:39.439 00:27:39.439 Number of Queues 00:27:39.439 ================ 00:27:39.439 Number of I/O Submission Queues: 127 00:27:39.439 Number of I/O Completion Queues: 127 00:27:39.439 00:27:39.439 Active Namespaces 00:27:39.439 ================= 00:27:39.439 Namespace ID:1 00:27:39.439 Error Recovery Timeout: Unlimited 00:27:39.439 Command Set Identifier: NVM (00h) 00:27:39.439 Deallocate: Supported 00:27:39.439 Deallocated/Unwritten Error: Not Supported 00:27:39.439 Deallocated Read Value: Unknown 00:27:39.439 Deallocate in Write Zeroes: Not Supported 00:27:39.439 Deallocated Guard Field: 0xFFFF 00:27:39.439 Flush: Supported 00:27:39.439 Reservation: Supported 00:27:39.439 Namespace Sharing Capabilities: Multiple Controllers 00:27:39.439 Size (in LBAs): 131072 (0GiB) 00:27:39.439 Capacity (in LBAs): 131072 (0GiB) 00:27:39.439 Utilization (in LBAs): 131072 (0GiB) 00:27:39.439 NGUID: ABCDEF0123456789ABCDEF0123456789 00:27:39.439 EUI64: ABCDEF0123456789 00:27:39.439 UUID: a8d6df32-9c82-42d2-9788-6baadff82dc6 00:27:39.439 Thin Provisioning: Not Supported 00:27:39.439 Per-NS Atomic Units: Yes 00:27:39.439 Atomic Boundary Size (Normal): 0 00:27:39.439 Atomic Boundary Size (PFail): 0 00:27:39.439 Atomic Boundary Offset: 0 00:27:39.439 Maximum Single Source Range Length: 65535 00:27:39.439 Maximum Copy Length: 65535 00:27:39.439 Maximum Source Range Count: 1 00:27:39.439 NGUID/EUI64 Never Reused: No 00:27:39.439 Namespace Write Protected: No 00:27:39.439 Number of LBA Formats: 1 00:27:39.439 Current LBA Format: LBA Format #00 00:27:39.439 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:39.439 00:27:39.439 18:38:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:27:39.439 18:38:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:39.439 18:38:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.439 18:38:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:39.439 18:38:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.439 18:38:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:27:39.439 18:38:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:27:39.439 18:38:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # nvmfcleanup 00:27:39.439 18:38:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:27:39.439 18:38:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:39.439 18:38:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:27:39.439 18:38:07 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:39.439 18:38:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:39.439 rmmod nvme_tcp 00:27:39.439 rmmod nvme_fabrics 00:27:39.439 rmmod nvme_keyring 00:27:39.439 18:38:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:39.439 18:38:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:27:39.439 18:38:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:27:39.439 18:38:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@515 -- # '[' -n 1281513 ']' 00:27:39.439 18:38:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # killprocess 1281513 00:27:39.439 18:38:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 1281513 ']' 00:27:39.439 18:38:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 1281513 00:27:39.439 18:38:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:27:39.439 18:38:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:39.439 18:38:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1281513 00:27:39.439 18:38:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:39.439 18:38:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:39.439 18:38:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1281513' 00:27:39.439 killing process with pid 1281513 00:27:39.439 18:38:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 1281513 00:27:39.439 18:38:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 1281513 00:27:40.008 18:38:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:27:40.008 18:38:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:27:40.008 18:38:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:27:40.008 18:38:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:27:40.008 18:38:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-save 00:27:40.008 18:38:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:27:40.008 18:38:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-restore 00:27:40.008 18:38:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:40.008 18:38:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:40.008 18:38:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:40.008 18:38:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:40.008 18:38:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:41.952 18:38:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:41.952 00:27:41.952 real 0m6.506s 00:27:41.952 user 0m5.587s 00:27:41.952 sys 0m2.497s 00:27:41.952 18:38:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:41.952 
18:38:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:41.952 ************************************ 00:27:41.952 END TEST nvmf_identify 00:27:41.952 ************************************ 00:27:41.952 18:38:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:27:41.952 18:38:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:41.952 18:38:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:41.952 18:38:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.221 ************************************ 00:27:42.221 START TEST nvmf_perf 00:27:42.221 ************************************ 00:27:42.221 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:27:42.221 * Looking for test storage... 00:27:42.221 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:42.221 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:42.221 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lcov --version 00:27:42.221 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:42.221 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:42.221 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:42.221 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:42.221 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:42.221 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:27:42.221 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:27:42.221 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:27:42.221 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:27:42.221 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:27:42.221 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:27:42.221 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:27:42.221 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:42.221 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:27:42.221 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:27:42.221 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:42.221 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:42.221 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:27:42.221 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:27:42.221 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:42.221 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:27:42.221 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:27:42.221 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:27:42.221 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:27:42.221 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:42.221 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:27:42.221 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:27:42.221 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:42.221 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:42.221 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:27:42.221 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:42.221 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:42.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:42.221 --rc genhtml_branch_coverage=1 00:27:42.221 --rc genhtml_function_coverage=1 00:27:42.221 --rc genhtml_legend=1 00:27:42.221 --rc geninfo_all_blocks=1 00:27:42.221 --rc geninfo_unexecuted_blocks=1 00:27:42.221 00:27:42.221 ' 00:27:42.221 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:42.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:42.221 --rc genhtml_branch_coverage=1 00:27:42.221 --rc genhtml_function_coverage=1 00:27:42.221 --rc genhtml_legend=1 00:27:42.221 --rc geninfo_all_blocks=1 00:27:42.221 --rc geninfo_unexecuted_blocks=1 00:27:42.221 00:27:42.221 ' 00:27:42.221 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:42.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:42.221 --rc genhtml_branch_coverage=1 00:27:42.221 --rc genhtml_function_coverage=1 00:27:42.221 --rc genhtml_legend=1 00:27:42.221 --rc geninfo_all_blocks=1 00:27:42.221 --rc geninfo_unexecuted_blocks=1 00:27:42.221 00:27:42.221 ' 00:27:42.221 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:42.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:42.221 --rc genhtml_branch_coverage=1 00:27:42.221 --rc genhtml_function_coverage=1 00:27:42.221 --rc genhtml_legend=1 00:27:42.221 --rc geninfo_all_blocks=1 00:27:42.221 --rc geninfo_unexecuted_blocks=1 00:27:42.221 00:27:42.221 ' 00:27:42.221 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:42.221 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:27:42.221 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:42.221 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:42.221 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:42.221 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:42.221 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:42.221 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:42.221 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:42.221 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:42.221 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:42.221 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:42.221 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:42.221 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:27:42.221 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:42.221 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:42.221 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:42.221 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:42.221 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:42.221 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:27:42.221 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:42.221 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:42.221 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:42.221 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:42.222 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:42.222 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:42.222 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:27:42.222 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:42.222 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:27:42.222 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:42.222 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:42.222 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:42.222 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:42.222 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:42.222 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:42.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:42.222 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:42.222 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:42.222 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:42.222 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:27:42.222 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:42.222 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:42.222 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:27:42.222 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:27:42.222 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:42.222 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # prepare_net_devs 00:27:42.222 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:27:42.222 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:27:42.222 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:42.222 18:38:10 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:42.222 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:42.222 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:27:42.222 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:27:42.222 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:27:42.222 18:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:27:45.516 Found 0000:84:00.0 (0x8086 - 0x159b) 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:27:45.516 Found 0000:84:00.1 (0x8086 - 0x159b) 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:27:45.516 Found net devices under 0000:84:00.0: cvl_0_0 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:45.516 18:38:13 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:27:45.516 Found net devices under 0000:84:00.1: cvl_0_1 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # is_hw=yes 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:45.516 18:38:13 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:45.516 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:45.516 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.289 ms 00:27:45.516 00:27:45.516 --- 10.0.0.2 ping statistics --- 00:27:45.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:45.516 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:45.516 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:45.516 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:27:45.516 00:27:45.516 --- 10.0.0.1 ping statistics --- 00:27:45.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:45.516 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # return 0 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:45.516 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:27:45.517 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:27:45.517 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:45.517 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:27:45.517 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:27:45.517 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:27:45.517 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:27:45.517 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:45.517 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:45.517 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # nvmfpid=1283744 00:27:45.517 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:45.517 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # waitforlisten 1283744 00:27:45.517 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 1283744 ']' 00:27:45.517 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:45.517 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:45.517 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:27:45.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:45.517 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:45.517 18:38:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:45.517 [2024-10-08 18:38:13.761645] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:27:45.517 [2024-10-08 18:38:13.761745] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:45.517 [2024-10-08 18:38:13.884527] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:45.776 [2024-10-08 18:38:14.113242] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:45.776 [2024-10-08 18:38:14.113355] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:45.776 [2024-10-08 18:38:14.113391] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:45.776 [2024-10-08 18:38:14.113421] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:45.776 [2024-10-08 18:38:14.113447] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:45.776 [2024-10-08 18:38:14.117159] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:27:45.776 [2024-10-08 18:38:14.117219] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:27:45.776 [2024-10-08 18:38:14.117319] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:27:45.776 [2024-10-08 18:38:14.117323] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:27:45.776 18:38:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:45.776 18:38:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:27:45.776 18:38:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:27:45.776 18:38:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:45.776 18:38:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:45.776 18:38:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:45.776 18:38:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:45.776 18:38:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:27:49.985 18:38:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:27:49.985 18:38:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:27:49.985 18:38:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:82:00.0 00:27:49.985 18:38:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:27:49.985 18:38:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
00:27:49.985 18:38:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:82:00.0 ']' 00:27:49.985 18:38:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:27:49.985 18:38:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:27:49.985 18:38:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:27:50.250 [2024-10-08 18:38:18.573477] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:50.250 18:38:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:50.508 18:38:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:27:50.508 18:38:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:50.766 18:38:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:27:50.766 18:38:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:27:51.704 18:38:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:51.964 [2024-10-08 18:38:20.371984] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:51.964 18:38:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:52.901 18:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:82:00.0 ']' 00:27:52.901 18:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:82:00.0' 00:27:52.901 18:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:27:52.901 18:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:82:00.0' 00:27:53.839 Initializing NVMe Controllers 00:27:53.839 Attached to NVMe Controller at 0000:82:00.0 [8086:0a54] 00:27:53.839 Associating PCIE (0000:82:00.0) NSID 1 with lcore 0 00:27:53.839 Initialization complete. Launching workers. 
00:27:53.839 ======================================================== 00:27:53.839 Latency(us) 00:27:53.839 Device Information : IOPS MiB/s Average min max 00:27:53.839 PCIE (0000:82:00.0) NSID 1 from core 0: 85238.38 332.96 374.87 42.21 8273.37 00:27:53.839 ======================================================== 00:27:53.839 Total : 85238.38 332.96 374.87 42.21 8273.37 00:27:53.839 00:27:53.839 18:38:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:55.744 Initializing NVMe Controllers 00:27:55.744 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:55.744 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:55.744 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:55.744 Initialization complete. Launching workers. 00:27:55.744 ======================================================== 00:27:55.744 Latency(us) 00:27:55.744 Device Information : IOPS MiB/s Average min max 00:27:55.744 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 79.00 0.31 13034.83 140.17 45794.07 00:27:55.744 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 61.00 0.24 17124.26 7148.53 47897.73 00:27:55.744 ======================================================== 00:27:55.744 Total : 140.00 0.55 14816.66 140.17 47897.73 00:27:55.744 00:27:55.744 18:38:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:56.680 Initializing NVMe Controllers 00:27:56.680 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:56.680 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:56.680 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:56.680 Initialization complete. Launching workers. 00:27:56.680 ======================================================== 00:27:56.680 Latency(us) 00:27:56.680 Device Information : IOPS MiB/s Average min max 00:27:56.680 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8706.98 34.01 3686.82 654.09 8150.16 00:27:56.680 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3766.99 14.71 8530.31 6155.61 16522.84 00:27:56.680 ======================================================== 00:27:56.680 Total : 12473.98 48.73 5149.50 654.09 16522.84 00:27:56.680 00:27:56.680 18:38:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:27:56.680 18:38:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:27:56.680 18:38:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:59.999 Initializing NVMe Controllers 00:27:59.999 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:59.999 Controller IO queue size 128, less than required. 00:27:59.999 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:27:59.999 Controller IO queue size 128, less than required. 00:27:59.999 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:59.999 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:59.999 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:59.999 Initialization complete. Launching workers. 00:27:59.999 ======================================================== 00:27:59.999 Latency(us) 00:27:59.999 Device Information : IOPS MiB/s Average min max 00:27:59.999 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1344.87 336.22 97423.28 72882.32 186038.25 00:27:59.999 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 568.44 142.11 235977.97 110604.52 351565.87 00:27:59.999 ======================================================== 00:27:59.999 Total : 1913.31 478.33 138587.82 72882.32 351565.87 00:27:59.999 00:27:59.999 18:38:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:27:59.999 No valid NVMe controllers or AIO or URING devices found 00:27:59.999 Initializing NVMe Controllers 00:27:59.999 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:59.999 Controller IO queue size 128, less than required. 00:27:59.999 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:59.999 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:27:59.999 Controller IO queue size 128, less than required. 00:27:59.999 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:59.999 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:27:59.999 WARNING: Some requested NVMe devices were skipped 00:27:59.999 18:38:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:28:02.535 Initializing NVMe Controllers 00:28:02.535 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:02.535 Controller IO queue size 128, less than required. 00:28:02.535 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:02.535 Controller IO queue size 128, less than required. 00:28:02.535 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:02.535 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:02.535 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:02.535 Initialization complete. Launching workers. 
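Each fabrics-side measurement in this stretch is an invocation of the same perf binary pointed at the TCP listener instead of the local PCIe device, with queue depth (-q), IO size (-o), runtime (-t), and read mix (-M) varied per run. A representative form, copied from the first fabrics run above with the binary path shortened:

    spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

The last run adds --transport-stat, which is what produces the per-namespace TCP transport counters (polls, sock_completions, nvme_completions, and so on) printed next.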
00:28:02.535 00:28:02.535 ==================== 00:28:02.535 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:28:02.535 TCP transport: 00:28:02.535 polls: 10745 00:28:02.535 idle_polls: 8330 00:28:02.535 sock_completions: 2415 00:28:02.535 nvme_completions: 4585 00:28:02.535 submitted_requests: 6844 00:28:02.535 queued_requests: 1 00:28:02.535 00:28:02.535 ==================== 00:28:02.535 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:28:02.535 TCP transport: 00:28:02.535 polls: 10951 00:28:02.535 idle_polls: 8424 00:28:02.535 sock_completions: 2527 00:28:02.535 nvme_completions: 5209 00:28:02.535 submitted_requests: 7920 00:28:02.535 queued_requests: 1 00:28:02.535 ======================================================== 00:28:02.535 Latency(us) 00:28:02.535 Device Information : IOPS MiB/s Average min max 00:28:02.535 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1146.00 286.50 117258.98 66446.61 208737.61 00:28:02.535 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1302.00 325.50 99083.34 56511.10 144575.95 00:28:02.535 ======================================================== 00:28:02.535 Total : 2447.99 612.00 107592.03 56511.10 208737.61 00:28:02.535 00:28:02.535 18:38:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:28:02.535 18:38:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:02.795 18:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:28:02.795 18:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:28:02.795 18:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:28:02.795 18:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # nvmfcleanup 00:28:02.795 18:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:28:02.795 18:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:02.795 18:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:28:02.795 18:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:02.795 18:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:02.795 rmmod nvme_tcp 00:28:02.795 rmmod nvme_fabrics 00:28:02.795 rmmod nvme_keyring 00:28:02.795 18:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:02.795 18:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:28:02.795 18:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:28:02.795 18:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@515 -- # '[' -n 1283744 ']' 00:28:02.795 18:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # killprocess 1283744 00:28:02.795 18:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 1283744 ']' 00:28:02.795 18:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 1283744 00:28:02.795 18:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:28:02.795 18:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:02.795 18:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1283744 00:28:03.055 18:38:31 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:03.055 18:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:03.055 18:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1283744' 00:28:03.055 killing process with pid 1283744 00:28:03.055 18:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 1283744 00:28:03.055 18:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 1283744 00:28:04.963 18:38:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:04.963 18:38:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:04.963 18:38:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:04.963 18:38:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:28:04.963 18:38:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-save 00:28:04.963 18:38:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:04.963 18:38:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-restore 00:28:04.963 18:38:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:04.963 18:38:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:04.963 18:38:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:04.963 18:38:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:04.963 18:38:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:06.873 18:38:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:06.873 00:28:06.873 real 0m24.686s 00:28:06.873 user 1m15.929s 00:28:06.873 sys 0m7.095s 00:28:06.873 18:38:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:06.873 18:38:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:06.873 ************************************ 00:28:06.873 END TEST nvmf_perf 00:28:06.873 ************************************ 00:28:06.873 18:38:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:28:06.873 18:38:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:06.873 18:38:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:06.873 18:38:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.873 ************************************ 00:28:06.873 START TEST nvmf_fio_host 00:28:06.873 ************************************ 00:28:06.873 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:28:06.873 * Looking for test storage... 
00:28:06.873 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:06.873 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:06.873 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lcov --version 00:28:06.873 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:07.134 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:07.134 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:07.134 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:07.134 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:07.134 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:28:07.134 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:28:07.134 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:28:07.134 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:28:07.134 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:28:07.134 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:28:07.134 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:28:07.134 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:07.134 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:28:07.134 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:28:07.134 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:07.134 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:07.134 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:28:07.134 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:28:07.134 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:07.134 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:28:07.135 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:28:07.135 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:28:07.135 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:28:07.135 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:07.135 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:28:07.135 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:28:07.135 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:07.135 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:07.135 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:28:07.135 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:07.135 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:07.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:07.135 --rc genhtml_branch_coverage=1 00:28:07.135 --rc genhtml_function_coverage=1 00:28:07.135 --rc genhtml_legend=1 00:28:07.135 --rc geninfo_all_blocks=1 00:28:07.135 --rc geninfo_unexecuted_blocks=1 00:28:07.135 00:28:07.135 ' 00:28:07.135 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:07.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:07.135 --rc genhtml_branch_coverage=1 00:28:07.135 --rc genhtml_function_coverage=1 00:28:07.135 --rc genhtml_legend=1 00:28:07.135 --rc geninfo_all_blocks=1 00:28:07.135 --rc geninfo_unexecuted_blocks=1 00:28:07.135 00:28:07.135 ' 00:28:07.135 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:07.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:07.135 --rc genhtml_branch_coverage=1 00:28:07.135 --rc genhtml_function_coverage=1 00:28:07.135 --rc genhtml_legend=1 00:28:07.135 --rc geninfo_all_blocks=1 00:28:07.135 --rc geninfo_unexecuted_blocks=1 00:28:07.135 00:28:07.135 ' 00:28:07.135 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:07.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:07.135 --rc genhtml_branch_coverage=1 00:28:07.135 --rc genhtml_function_coverage=1 00:28:07.135 --rc genhtml_legend=1 00:28:07.135 --rc geninfo_all_blocks=1 00:28:07.135 --rc geninfo_unexecuted_blocks=1 00:28:07.135 00:28:07.135 ' 00:28:07.135 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:07.135 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:28:07.135 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:07.135 18:38:35 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:07.135 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:07.135 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.135 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.135 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.135 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:28:07.135 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.135 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:07.135 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:28:07.135 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:07.135 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:07.135 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:28:07.135 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:07.135 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:07.135 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:07.135 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:07.135 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:07.135 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:07.135 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:07.135 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:07.135 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:28:07.135 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:07.135 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:07.135 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:07.135 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:07.135 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:07.135 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:28:07.135 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:07.135 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:07.135 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:07.135 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.135 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.135 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.135 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:28:07.135 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.135 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:28:07.135 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:07.135 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:07.135 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:07.135 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:07.135 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:07.135 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:07.135 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:07.135 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:07.135 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:07.135 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:07.135 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:07.135 
18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:28:07.135 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:07.135 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:07.135 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:07.135 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:07.136 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:07.136 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:07.136 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:07.136 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:07.136 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:07.136 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:07.136 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:28:07.136 18:38:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.426 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:10.426 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:28:10.426 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:10.426 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:10.426 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:10.426 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:10.426 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:10.426 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:28:10.426 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:10.426 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:28:10.426 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:28:10.426 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:28:10.426 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:28:10.426 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:28:10.426 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:28:10.426 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:10.426 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:10.426 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:10.426 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:10.426 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:10.426 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:10.426 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:10.426 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:10.426 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:10.426 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:10.426 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:10.426 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:10.426 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:10.426 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:10.426 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:10.426 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:10.426 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:10.426 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:10.426 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:10.426 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:28:10.426 Found 0000:84:00.0 (0x8086 - 0x159b) 00:28:10.426 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:10.426 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:10.426 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:10.426 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:28:10.427 Found 0000:84:00.1 (0x8086 - 0x159b) 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:28:10.427 Found net devices under 0000:84:00.0: cvl_0_0 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:28:10.427 Found net devices under 0000:84:00.1: cvl_0_1 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # is_hw=yes 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:10.427 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:10.427 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.293 ms 00:28:10.427 00:28:10.427 --- 10.0.0.2 ping statistics --- 00:28:10.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:10.427 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:10.427 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:10.427 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:28:10.427 00:28:10.427 --- 10.0.0.1 ping statistics --- 00:28:10.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:10.427 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # return 0 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1287991 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1287991 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 1287991 ']' 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:10.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:10.427 18:38:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.427 [2024-10-08 18:38:38.631277] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
00:28:10.427 [2024-10-08 18:38:38.631383] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:10.427 [2024-10-08 18:38:38.721746] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:10.427 [2024-10-08 18:38:38.864122] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:10.427 [2024-10-08 18:38:38.864202] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:10.427 [2024-10-08 18:38:38.864223] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:10.427 [2024-10-08 18:38:38.864239] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:10.427 [2024-10-08 18:38:38.864263] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:10.427 [2024-10-08 18:38:38.866574] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:28:10.427 [2024-10-08 18:38:38.866640] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:28:10.427 [2024-10-08 18:38:38.866712] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:28:10.427 [2024-10-08 18:38:38.866716] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:28:10.685 18:38:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:10.686 18:38:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:28:10.686 18:38:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:10.943 [2024-10-08 18:38:39.404781] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:10.943 18:38:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:28:10.943 18:38:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:10.943 18:38:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.943 18:38:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:28:11.508 Malloc1 00:28:11.508 18:38:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:11.766 18:38:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:12.333 18:38:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:12.591 [2024-10-08 18:38:40.986479] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:12.591 18:38:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:13.156 18:38:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:28:13.156 18:38:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:13.156 18:38:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:13.156 18:38:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:13.156 18:38:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:13.156 18:38:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:13.156 18:38:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:13.156 18:38:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:28:13.156 18:38:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:13.156 18:38:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:13.156 18:38:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:13.156 18:38:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:28:13.156 18:38:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:13.156 18:38:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:13.156 18:38:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:13.156 18:38:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:13.156 18:38:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:13.156 18:38:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:13.156 18:38:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:13.156 18:38:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:13.156 18:38:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:13.156 18:38:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:28:13.156 18:38:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:13.413 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:28:13.413 fio-3.35 00:28:13.413 Starting 1 thread 00:28:15.940 00:28:15.940 test: (groupid=0, jobs=1): 
err= 0: pid=1288475: Tue Oct 8 18:38:44 2024 00:28:15.940 read: IOPS=8851, BW=34.6MiB/s (36.3MB/s)(69.4MiB/2006msec) 00:28:15.940 slat (usec): min=2, max=133, avg= 3.05, stdev= 1.37 00:28:15.940 clat (usec): min=2424, max=13708, avg=7877.54, stdev=633.43 00:28:15.940 lat (usec): min=2449, max=13711, avg=7880.59, stdev=633.33 00:28:15.940 clat percentiles (usec): 00:28:15.940 | 1.00th=[ 6456], 5.00th=[ 6915], 10.00th=[ 7111], 20.00th=[ 7373], 00:28:15.940 | 30.00th=[ 7570], 40.00th=[ 7767], 50.00th=[ 7898], 60.00th=[ 8029], 00:28:15.940 | 70.00th=[ 8160], 80.00th=[ 8356], 90.00th=[ 8586], 95.00th=[ 8848], 00:28:15.940 | 99.00th=[ 9241], 99.50th=[ 9503], 99.90th=[12256], 99.95th=[12911], 00:28:15.940 | 99.99th=[13173] 00:28:15.940 bw ( KiB/s): min=34328, max=36120, per=99.92%, avg=35380.00, stdev=753.69, samples=4 00:28:15.940 iops : min= 8582, max= 9030, avg=8845.00, stdev=188.42, samples=4 00:28:15.940 write: IOPS=8868, BW=34.6MiB/s (36.3MB/s)(69.5MiB/2006msec); 0 zone resets 00:28:15.940 slat (usec): min=2, max=121, avg= 3.19, stdev= 1.31 00:28:15.940 clat (usec): min=1140, max=12356, avg=6496.92, stdev=530.10 00:28:15.940 lat (usec): min=1148, max=12359, avg=6500.11, stdev=530.02 00:28:15.940 clat percentiles (usec): 00:28:15.940 | 1.00th=[ 5342], 5.00th=[ 5735], 10.00th=[ 5866], 20.00th=[ 6128], 00:28:15.940 | 30.00th=[ 6259], 40.00th=[ 6390], 50.00th=[ 6521], 60.00th=[ 6587], 00:28:15.940 | 70.00th=[ 6718], 80.00th=[ 6849], 90.00th=[ 7111], 95.00th=[ 7242], 00:28:15.940 | 99.00th=[ 7635], 99.50th=[ 7767], 99.90th=[10683], 99.95th=[11600], 00:28:15.940 | 99.99th=[12387] 00:28:15.940 bw ( KiB/s): min=35184, max=35672, per=99.96%, avg=35460.00, stdev=229.50, samples=4 00:28:15.940 iops : min= 8796, max= 8918, avg=8865.00, stdev=57.38, samples=4 00:28:15.940 lat (msec) : 2=0.03%, 4=0.12%, 10=99.68%, 20=0.18% 00:28:15.940 cpu : usr=71.02%, sys=27.73%, ctx=72, majf=0, minf=31 00:28:15.940 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:28:15.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.940 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:15.940 issued rwts: total=17757,17790,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:15.940 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:15.940 00:28:15.940 Run status group 0 (all jobs): 00:28:15.940 READ: bw=34.6MiB/s (36.3MB/s), 34.6MiB/s-34.6MiB/s (36.3MB/s-36.3MB/s), io=69.4MiB (72.7MB), run=2006-2006msec 00:28:15.940 WRITE: bw=34.6MiB/s (36.3MB/s), 34.6MiB/s-34.6MiB/s (36.3MB/s-36.3MB/s), io=69.5MiB (72.9MB), run=2006-2006msec 00:28:15.940 18:38:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:28:15.940 18:38:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:28:15.940 18:38:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:15.940 18:38:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:15.940 18:38:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local 
sanitizers 00:28:15.940 18:38:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:15.940 18:38:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:28:15.940 18:38:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:15.940 18:38:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:15.940 18:38:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:15.940 18:38:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:28:15.940 18:38:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:15.940 18:38:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:15.940 18:38:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:15.940 18:38:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:15.940 18:38:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:15.940 18:38:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:15.940 18:38:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:15.940 18:38:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:15.940 18:38:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:15.940 18:38:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:28:15.940 18:38:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:28:15.940 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:28:15.940 fio-3.35 00:28:15.940 Starting 1 thread 00:28:18.467 00:28:18.467 test: (groupid=0, jobs=1): err= 0: pid=1288811: Tue Oct 8 18:38:46 2024 00:28:18.467 read: IOPS=7846, BW=123MiB/s (129MB/s)(246MiB/2007msec) 00:28:18.467 slat (usec): min=3, max=146, avg= 4.83, stdev= 2.02 00:28:18.467 clat (usec): min=2489, max=18734, avg=9536.39, stdev=2235.50 00:28:18.467 lat (usec): min=2493, max=18738, avg=9541.22, stdev=2235.55 00:28:18.467 clat percentiles (usec): 00:28:18.467 | 1.00th=[ 5014], 5.00th=[ 6128], 10.00th=[ 6718], 20.00th=[ 7504], 00:28:18.467 | 30.00th=[ 8160], 40.00th=[ 8848], 50.00th=[ 9503], 60.00th=[10159], 00:28:18.467 | 70.00th=[10945], 80.00th=[11338], 90.00th=[12125], 95.00th=[13173], 00:28:18.467 | 99.00th=[15795], 99.50th=[16581], 99.90th=[18220], 99.95th=[18482], 00:28:18.467 | 99.99th=[18744] 00:28:18.467 bw ( KiB/s): min=53888, max=75648, per=50.53%, avg=63432.00, stdev=9977.24, samples=4 00:28:18.467 iops : min= 3368, max= 4728, avg=3964.50, stdev=623.58, samples=4 00:28:18.467 write: IOPS=4714, BW=73.7MiB/s (77.2MB/s)(130MiB/1763msec); 0 zone resets 00:28:18.467 slat (usec): 
min=39, max=182, avg=43.96, stdev= 5.94 00:28:18.467 clat (usec): min=5041, max=20563, avg=11910.47, stdev=1883.40 00:28:18.467 lat (usec): min=5082, max=20604, avg=11954.43, stdev=1883.57 00:28:18.467 clat percentiles (usec): 00:28:18.467 | 1.00th=[ 8029], 5.00th=[ 9110], 10.00th=[ 9765], 20.00th=[10421], 00:28:18.467 | 30.00th=[10814], 40.00th=[11338], 50.00th=[11731], 60.00th=[12125], 00:28:18.467 | 70.00th=[12780], 80.00th=[13435], 90.00th=[14353], 95.00th=[15270], 00:28:18.467 | 99.00th=[16909], 99.50th=[17433], 99.90th=[20055], 99.95th=[20317], 00:28:18.467 | 99.99th=[20579] 00:28:18.467 bw ( KiB/s): min=55648, max=78912, per=87.74%, avg=66184.00, stdev=10731.64, samples=4 00:28:18.467 iops : min= 3478, max= 4932, avg=4136.50, stdev=670.73, samples=4 00:28:18.467 lat (msec) : 4=0.12%, 10=42.00%, 20=57.85%, 50=0.03% 00:28:18.467 cpu : usr=83.01%, sys=16.19%, ctx=14, majf=0, minf=65 00:28:18.467 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:28:18.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.467 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:18.467 issued rwts: total=15748,8312,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.467 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.467 00:28:18.467 Run status group 0 (all jobs): 00:28:18.467 READ: bw=123MiB/s (129MB/s), 123MiB/s-123MiB/s (129MB/s-129MB/s), io=246MiB (258MB), run=2007-2007msec 00:28:18.467 WRITE: bw=73.7MiB/s (77.2MB/s), 73.7MiB/s-73.7MiB/s (77.2MB/s-77.2MB/s), io=130MiB (136MB), run=1763-1763msec 00:28:18.467 18:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:18.725 18:38:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:28:18.725 18:38:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:28:18.725 18:38:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:28:18.725 18:38:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:28:18.725 18:38:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:28:18.725 18:38:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:28:18.725 18:38:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:18.725 18:38:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:28:18.725 18:38:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:18.725 18:38:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:18.725 rmmod nvme_tcp 00:28:18.725 rmmod nvme_fabrics 00:28:18.725 rmmod nvme_keyring 00:28:18.725 18:38:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:18.725 18:38:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:28:18.725 18:38:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:28:18.725 18:38:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@515 -- # '[' -n 1287991 ']' 00:28:18.725 18:38:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # killprocess 1287991 00:28:18.725 18:38:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 1287991 ']' 00:28:18.725 18:38:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@954 -- # kill -0 1287991 00:28:18.725 18:38:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:28:18.725 18:38:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:18.725 18:38:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1287991 00:28:18.725 18:38:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:18.725 18:38:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:18.725 18:38:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1287991' 00:28:18.725 killing process with pid 1287991 00:28:18.725 18:38:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 1287991 00:28:18.725 18:38:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 1287991 00:28:19.294 18:38:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:19.294 18:38:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:19.294 18:38:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:19.294 18:38:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:28:19.294 18:38:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-save 00:28:19.294 18:38:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:19.294 18:38:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-restore 00:28:19.294 18:38:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:19.294 18:38:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:19.294 18:38:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:19.294 18:38:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:19.294 18:38:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:21.198 18:38:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:21.198 00:28:21.198 real 0m14.491s 00:28:21.198 user 0m41.876s 00:28:21.198 sys 0m4.759s 00:28:21.198 18:38:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:21.198 18:38:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.198 ************************************ 00:28:21.198 END TEST nvmf_fio_host 00:28:21.198 ************************************ 00:28:21.459 18:38:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:28:21.459 18:38:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:21.459 18:38:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:21.459 18:38:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.459 ************************************ 00:28:21.459 START TEST nvmf_failover 00:28:21.459 ************************************ 00:28:21.459 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:28:21.459 * Looking for test storage... 00:28:21.459 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:21.459 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:21.459 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lcov --version 00:28:21.459 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:21.459 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:21.459 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:21.459 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:21.459 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:21.459 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:28:21.459 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:28:21.459 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:28:21.459 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:28:21.459 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:28:21.459 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:28:21.459 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:28:21.459 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:21.459 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:28:21.459 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:28:21.459 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:21.459 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:21.459 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:28:21.459 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:28:21.459 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:21.459 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:28:21.459 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:28:21.459 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:28:21.459 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:28:21.459 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:21.459 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:28:21.459 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:28:21.459 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:21.459 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:21.459 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:28:21.459 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:21.459 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:21.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:21.459 --rc genhtml_branch_coverage=1 00:28:21.459 --rc genhtml_function_coverage=1 00:28:21.459 --rc genhtml_legend=1 00:28:21.459 --rc geninfo_all_blocks=1 00:28:21.459 --rc geninfo_unexecuted_blocks=1 00:28:21.459 00:28:21.459 ' 00:28:21.459 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:21.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:21.459 --rc genhtml_branch_coverage=1 00:28:21.459 --rc genhtml_function_coverage=1 00:28:21.459 --rc genhtml_legend=1 00:28:21.459 --rc geninfo_all_blocks=1 00:28:21.459 --rc geninfo_unexecuted_blocks=1 00:28:21.459 00:28:21.459 ' 00:28:21.459 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:21.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:21.459 --rc genhtml_branch_coverage=1 00:28:21.459 --rc genhtml_function_coverage=1 00:28:21.459 --rc genhtml_legend=1 00:28:21.459 --rc geninfo_all_blocks=1 00:28:21.459 --rc geninfo_unexecuted_blocks=1 00:28:21.459 00:28:21.459 ' 00:28:21.459 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:21.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:21.459 --rc genhtml_branch_coverage=1 00:28:21.459 --rc genhtml_function_coverage=1 00:28:21.459 --rc genhtml_legend=1 00:28:21.459 --rc geninfo_all_blocks=1 00:28:21.459 --rc geninfo_unexecuted_blocks=1 00:28:21.459 00:28:21.459 ' 00:28:21.459 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:21.459 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:28:21.459 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:21.459 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:21.459 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:21.459 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:21.459 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:21.459 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:21.459 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:21.459 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:21.459 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:21.459 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:21.459 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:21.459 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:28:21.459 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:21.459 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:21.459 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:21.459 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:21.459 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:21.459 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:28:21.459 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:21.459 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:21.459 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:21.459 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.459 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.459 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.459 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:28:21.460 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.460 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:28:21.460 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:21.460 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:21.460 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:21.460 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:21.460 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:21.460 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:21.460 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:21.460 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:21.460 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:21.460 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:21.460 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:21.460 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:21.460 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
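(Orientation note, not part of the traced output: a condensed sketch of what this failover test drives, pieced together from the rpc.py commands that appear verbatim later in this transcript; "rpc.py" abbreviates the full scripts/rpc.py path used in the run, and no command here was executed outside the run itself.)
# Target-side setup performed by failover.sh (values as traced below):
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
# Host side: bdevperf attaches NVMe0 through the listeners with -x failover, then
# listeners are removed and re-added while verify I/O runs so the path fails over.
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover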
00:28:21.460 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:21.460 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:28:21.460 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:21.460 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:21.460 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:21.460 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:21.460 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:21.460 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:21.460 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:21.460 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:21.460 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:21.460 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:21.460 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:28:21.460 18:38:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:28:24.067 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:24.067 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:28:24.067 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:24.067 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:24.067 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:24.067 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:24.067 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:24.067 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:28:24.067 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:24.067 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:28:24.067 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:28:24.067 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:28:24.067 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:28:24.067 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:28:24.067 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:28:24.067 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:24.067 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:24.067 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:24.067 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:24.067 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:24.067 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:24.067 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:24.067 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:24.067 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:24.067 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:24.067 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:24.067 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:24.067 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:24.067 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:24.067 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:24.067 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:24.067 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:24.067 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:24.067 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:24.067 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:28:24.067 Found 0000:84:00.0 (0x8086 - 0x159b) 00:28:24.067 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:24.067 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:24.067 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:24.067 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:24.067 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:24.067 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:24.067 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:28:24.067 Found 0000:84:00.1 (0x8086 - 0x159b) 00:28:24.067 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:24.067 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:24.067 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:24.067 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:24.067 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:24.067 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:24.067 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:24.067 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:24.067 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci 
in "${pci_devs[@]}" 00:28:24.067 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:24.067 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:24.067 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:24.067 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:24.067 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:24.067 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:24.067 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:28:24.067 Found net devices under 0000:84:00.0: cvl_0_0 00:28:24.067 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:24.067 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:24.067 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:24.067 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:24.067 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:24.067 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:24.067 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:24.067 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:24.067 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:28:24.067 Found net devices under 0000:84:00.1: cvl_0_1 00:28:24.067 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:24.067 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:24.068 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # is_hw=yes 00:28:24.068 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:24.068 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:24.068 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:24.068 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:24.068 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:24.068 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:24.068 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:24.068 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:24.068 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:24.068 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:24.068 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:24.068 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:28:24.068 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:24.068 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:24.068 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:24.068 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:24.068 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:24.068 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:24.326 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:24.326 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:24.326 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:24.326 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:24.326 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:24.326 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:24.326 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:24.326 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:24.326 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:24.326 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.300 ms 00:28:24.326 00:28:24.326 --- 10.0.0.2 ping statistics --- 00:28:24.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:24.326 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:28:24.326 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:24.326 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:24.326 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:28:24.326 00:28:24.326 --- 10.0.0.1 ping statistics --- 00:28:24.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:24.326 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:28:24.326 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:24.326 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # return 0 00:28:24.326 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:24.326 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:24.326 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:24.326 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:24.326 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:24.326 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:24.326 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:24.326 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:28:24.326 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:24.326 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:24.326 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:28:24.326 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # nvmfpid=1291162 00:28:24.326 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:24.326 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # waitforlisten 1291162 00:28:24.326 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1291162 ']' 00:28:24.326 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:24.326 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:24.326 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:24.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:24.326 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:24.326 18:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:28:24.326 [2024-10-08 18:38:52.821350] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:28:24.326 [2024-10-08 18:38:52.821452] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:24.584 [2024-10-08 18:38:52.898991] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:24.584 [2024-10-08 18:38:53.008118] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:28:24.584 [2024-10-08 18:38:53.008184] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:24.584 [2024-10-08 18:38:53.008198] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:24.584 [2024-10-08 18:38:53.008209] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:24.584 [2024-10-08 18:38:53.008218] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:24.584 [2024-10-08 18:38:53.009251] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:28:24.584 [2024-10-08 18:38:53.010670] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:28:24.584 [2024-10-08 18:38:53.010681] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:28:24.841 18:38:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:24.841 18:38:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:28:24.841 18:38:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:24.841 18:38:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:24.841 18:38:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:28:24.841 18:38:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:24.841 18:38:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:25.098 [2024-10-08 18:38:53.635417] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:25.356 18:38:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:28:25.919 Malloc0 00:28:25.919 18:38:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:26.177 18:38:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:26.742 18:38:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:27.306 [2024-10-08 18:38:55.625185] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:27.306 18:38:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:27.563 [2024-10-08 18:38:55.954128] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:27.563 18:38:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:28:27.820 [2024-10-08 18:38:56.283305] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:28:27.820 18:38:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1291581 00:28:27.820 18:38:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:28:27.820 18:38:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:27.820 18:38:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1291581 /var/tmp/bdevperf.sock 00:28:27.820 18:38:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1291581 ']' 00:28:27.820 18:38:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:27.820 18:38:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:27.820 18:38:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:27.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:27.820 18:38:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:27.820 18:38:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:28:28.385 18:38:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:28.385 18:38:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:28:28.385 18:38:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:28:28.950 NVMe0n1 00:28:28.950 18:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:28:29.208 00:28:29.208 18:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1291831 00:28:29.208 18:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:29.208 18:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:28:30.140 18:38:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:30.705 [2024-10-08 18:38:58.988035] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237eb90 is same with the state(6) to be set 00:28:30.705 [2024-10-08 18:38:58.988148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237eb90 is same with the state(6) to be set 00:28:30.705 [2024-10-08 18:38:58.988180] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237eb90 is same with the state(6) to be set 00:28:30.705 
[2024-10-08 18:38:58.988194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237eb90 is same with the state(6) to be set 00:28:30.705 [2024-10-08 18:38:58.988213] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237eb90 is same with the state(6) to be set 00:28:30.705 [2024-10-08 18:38:58.988226] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237eb90 is same with the state(6) to be set 00:28:30.705 [2024-10-08 18:38:58.988238] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237eb90 is same with the state(6) to be set 00:28:30.705 [2024-10-08 18:38:58.988250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237eb90 is same with the state(6) to be set 00:28:30.705 [2024-10-08 18:38:58.988262] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237eb90 is same with the state(6) to be set 00:28:30.705 [2024-10-08 18:38:58.988274] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237eb90 is same with the state(6) to be set 00:28:30.705 [2024-10-08 18:38:58.988286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237eb90 is same with the state(6) to be set 00:28:30.705 [2024-10-08 18:38:58.988298] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237eb90 is same with the state(6) to be set 00:28:30.705 [2024-10-08 18:38:58.988313] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237eb90 is same with the state(6) to be set 00:28:30.705 [2024-10-08 18:38:58.988325] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237eb90 is same with the state(6) to be set 00:28:30.705 [2024-10-08 18:38:58.988336] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237eb90 is same with the state(6) to be set 00:28:30.705 [2024-10-08 18:38:58.988348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237eb90 is same with the state(6) to be set 00:28:30.705 [2024-10-08 18:38:58.988360] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237eb90 is same with the state(6) to be set 00:28:30.705 [2024-10-08 18:38:58.988372] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237eb90 is same with the state(6) to be set 00:28:30.705 [2024-10-08 18:38:58.988384] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237eb90 is same with the state(6) to be set 00:28:30.705 [2024-10-08 18:38:58.988396] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237eb90 is same with the state(6) to be set 00:28:30.705 [2024-10-08 18:38:58.988408] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237eb90 is same with the state(6) to be set 00:28:30.705 [2024-10-08 18:38:58.988421] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237eb90 is same with the state(6) to be set 00:28:30.705 [2024-10-08 18:38:58.988432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237eb90 is same with the state(6) to be set 00:28:30.705 [2024-10-08 18:38:58.988453] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237eb90 is same with the state(6) to be set 00:28:30.706 [2024-10-08 18:38:58.988466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x237eb90 is same with the state(6) to be set 00:28:30.706 [2024-10-08 18:38:58.988478] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237eb90 is same with the state(6) to be set 00:28:30.706 [2024-10-08 18:38:58.988489] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237eb90 is same with the state(6) to be set 00:28:30.706 [2024-10-08 18:38:58.988501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237eb90 is same with the state(6) to be set 00:28:30.706 [2024-10-08 18:38:58.988514] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237eb90 is same with the state(6) to be set 00:28:30.706 18:38:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:28:33.986 18:39:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:28:34.552 00:28:34.552 18:39:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:34.810 [2024-10-08 18:39:03.217263] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237f640 is same with the state(6) to be set 00:28:34.810 [2024-10-08 18:39:03.217333] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237f640 is same with the state(6) to be set 00:28:34.810 [2024-10-08 18:39:03.217363] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237f640 is same with the state(6) to be set 00:28:34.810 [2024-10-08 18:39:03.217377] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237f640 is same with the state(6) to be set 00:28:34.810 [2024-10-08 18:39:03.217389] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237f640 is same with the state(6) to be set 00:28:34.810 [2024-10-08 18:39:03.217401] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237f640 is same with the state(6) to be set 00:28:34.810 [2024-10-08 18:39:03.217413] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237f640 is same with the state(6) to be set 00:28:34.810 18:39:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:28:38.092 18:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:38.349 [2024-10-08 18:39:06.736333] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:38.349 18:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:28:39.282 18:39:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:28:39.847 [2024-10-08 18:39:08.131453] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244a40 is same with the state(6) to be set 00:28:39.847 [2024-10-08 18:39:08.131508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244a40 
is same with the state(6) to be set 00:28:39.847 [2024-10-08 18:39:08.131522] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244a40 is same with the state(6) to be set 00:28:39.847 [2024-10-08 18:39:08.131535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244a40 is same with the state(6) to be set 00:28:39.847 [2024-10-08 18:39:08.131546] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244a40 is same with the state(6) to be set 00:28:39.847 [2024-10-08 18:39:08.131569] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244a40 is same with the state(6) to be set 00:28:39.847 [2024-10-08 18:39:08.131581] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244a40 is same with the state(6) to be set 00:28:39.847 [2024-10-08 18:39:08.131593] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244a40 is same with the state(6) to be set 00:28:39.847 [2024-10-08 18:39:08.131604] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244a40 is same with the state(6) to be set 00:28:39.847 [2024-10-08 18:39:08.131616] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244a40 is same with the state(6) to be set 00:28:39.847 [2024-10-08 18:39:08.131627] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244a40 is same with the state(6) to be set 00:28:39.847 [2024-10-08 18:39:08.131665] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244a40 is same with the state(6) to be set 00:28:39.847 [2024-10-08 18:39:08.131680] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244a40 is same with the state(6) to be set 00:28:39.847 [2024-10-08 18:39:08.131694] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244a40 is same with the state(6) to be set 00:28:39.847 [2024-10-08 18:39:08.131706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244a40 is same with the state(6) to be set 00:28:39.847 [2024-10-08 18:39:08.131718] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244a40 is same with the state(6) to be set 00:28:39.847 [2024-10-08 18:39:08.131730] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244a40 is same with the state(6) to be set 00:28:39.847 [2024-10-08 18:39:08.131742] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244a40 is same with the state(6) to be set 00:28:39.847 [2024-10-08 18:39:08.131753] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244a40 is same with the state(6) to be set 00:28:39.847 [2024-10-08 18:39:08.131765] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244a40 is same with the state(6) to be set 00:28:39.847 [2024-10-08 18:39:08.131777] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244a40 is same with the state(6) to be set 00:28:39.847 [2024-10-08 18:39:08.131789] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244a40 is same with the state(6) to be set 00:28:39.847 [2024-10-08 18:39:08.131801] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244a40 is same with the state(6) to be set 00:28:39.847 [2024-10-08 18:39:08.131813] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244a40 is same with the state(6) to be set 00:28:39.847 [2024-10-08 18:39:08.131825] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244a40 is same with the state(6) to be set 00:28:39.848 [2024-10-08 18:39:08.131837] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244a40 is same with the state(6) to be set 00:28:39.848 [2024-10-08 18:39:08.131848] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244a40 is same with the state(6) to be set 00:28:39.848 [2024-10-08 18:39:08.131860] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244a40 is same with the state(6) to be set 00:28:39.848 [2024-10-08 18:39:08.131872] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244a40 is same with the state(6) to be set 00:28:39.848 [2024-10-08 18:39:08.131884] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244a40 is same with the state(6) to be set 00:28:39.848 [2024-10-08 18:39:08.131896] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244a40 is same with the state(6) to be set 00:28:39.848 [2024-10-08 18:39:08.131917] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244a40 is same with the state(6) to be set 00:28:39.848 [2024-10-08 18:39:08.131935] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244a40 is same with the state(6) to be set 00:28:39.848 [2024-10-08 18:39:08.131947] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244a40 is same with the state(6) to be set 00:28:39.848 [2024-10-08 18:39:08.131961] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244a40 is same with the state(6) to be set 00:28:39.848 [2024-10-08 18:39:08.131973] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244a40 is same with the state(6) to be set 00:28:39.848 [2024-10-08 18:39:08.132000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244a40 is same with the state(6) to be set 00:28:39.848 [2024-10-08 18:39:08.132018] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244a40 is same with the state(6) to be set 00:28:39.848 [2024-10-08 18:39:08.132030] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244a40 is same with the state(6) to be set 00:28:39.848 [2024-10-08 18:39:08.132041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244a40 is same with the state(6) to be set 00:28:39.848 [2024-10-08 18:39:08.132053] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244a40 is same with the state(6) to be set 00:28:39.848 [2024-10-08 18:39:08.132064] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244a40 is same with the state(6) to be set 00:28:39.848 [2024-10-08 18:39:08.132083] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244a40 is same with the state(6) to be set 00:28:39.848 [2024-10-08 18:39:08.132095] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244a40 is same with the state(6) to be set 00:28:39.848 [2024-10-08 18:39:08.132107] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244a40 is same with the 
state(6) to be set 00:28:39.848 [2024-10-08 18:39:08.132118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244a40 is same with the state(6) to be set 00:28:39.848 [2024-10-08 18:39:08.132130] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244a40 is same with the state(6) to be set 00:28:39.848 [2024-10-08 18:39:08.132141] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244a40 is same with the state(6) to be set 00:28:39.848 [2024-10-08 18:39:08.132153] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244a40 is same with the state(6) to be set 00:28:39.848 [2024-10-08 18:39:08.132164] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244a40 is same with the state(6) to be set 00:28:39.848 [2024-10-08 18:39:08.132176] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244a40 is same with the state(6) to be set 00:28:39.848 [2024-10-08 18:39:08.132188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244a40 is same with the state(6) to be set 00:28:39.848 18:39:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1291831 00:28:45.114 { 00:28:45.114 "results": [ 00:28:45.114 { 00:28:45.114 "job": "NVMe0n1", 00:28:45.114 "core_mask": "0x1", 00:28:45.114 "workload": "verify", 00:28:45.114 "status": "finished", 00:28:45.114 "verify_range": { 00:28:45.114 "start": 0, 00:28:45.114 "length": 16384 00:28:45.115 }, 00:28:45.115 "queue_depth": 128, 00:28:45.115 "io_size": 4096, 00:28:45.115 "runtime": 15.005217, 00:28:45.115 "iops": 8402.144400844054, 00:28:45.115 "mibps": 32.820876565797086, 00:28:45.115 "io_failed": 10541, 00:28:45.115 "io_timeout": 0, 00:28:45.115 "avg_latency_us": 14029.148503106411, 00:28:45.115 "min_latency_us": 427.80444444444447, 00:28:45.115 "max_latency_us": 19223.893333333333 00:28:45.115 } 00:28:45.115 ], 00:28:45.115 "core_count": 1 00:28:45.115 } 00:28:45.115 18:39:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1291581 00:28:45.115 18:39:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1291581 ']' 00:28:45.115 18:39:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1291581 00:28:45.115 18:39:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:28:45.115 18:39:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:45.115 18:39:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1291581 00:28:45.115 18:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:45.115 18:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:45.115 18:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1291581' 00:28:45.115 killing process with pid 1291581 00:28:45.115 18:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1291581 00:28:45.115 18:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1291581 00:28:45.115 18:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:45.115 [2024-10-08 18:38:56.356223] Starting SPDK v25.01-pre git sha1 
865972bb6 / DPDK 24.03.0 initialization... 00:28:45.115 [2024-10-08 18:38:56.356326] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1291581 ] 00:28:45.115 [2024-10-08 18:38:56.423351] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:45.115 [2024-10-08 18:38:56.537112] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:28:45.115 Running I/O for 15 seconds... 00:28:45.115 8255.00 IOPS, 32.25 MiB/s [2024-10-08T16:39:13.652Z] [2024-10-08 18:38:58.990737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:70952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.115 [2024-10-08 18:38:58.990782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.115 [2024-10-08 18:38:58.990813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:70960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.115 [2024-10-08 18:38:58.990830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.115 [2024-10-08 18:38:58.990848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:70968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.115 [2024-10-08 18:38:58.990862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.115 [2024-10-08 18:38:58.990878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:70976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.115 [2024-10-08 18:38:58.990892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.115 [2024-10-08 18:38:58.990908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:70984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.115 [2024-10-08 18:38:58.990923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.115 [2024-10-08 18:38:58.990939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:70992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.115 [2024-10-08 18:38:58.990954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.115 [2024-10-08 18:38:58.990970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:71000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.115 [2024-10-08 18:38:58.990984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.115 [2024-10-08 18:38:58.991000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:71008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.115 [2024-10-08 18:38:58.991014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.115 [2024-10-08 18:38:58.991029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:107 nsid:1 lba:71016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.115 [2024-10-08 18:38:58.991043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.115 [2024-10-08 18:38:58.991059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:71024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.115 [2024-10-08 18:38:58.991073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.115 [2024-10-08 18:38:58.991088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:71032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.115 [2024-10-08 18:38:58.991102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.115 [2024-10-08 18:38:58.991126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:71040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.115 [2024-10-08 18:38:58.991142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.115 [2024-10-08 18:38:58.991157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:71048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.115 [2024-10-08 18:38:58.991171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.115 [2024-10-08 18:38:58.991187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:71056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.115 [2024-10-08 18:38:58.991201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.115 [2024-10-08 18:38:58.991216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:71064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.115 [2024-10-08 18:38:58.991230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.115 [2024-10-08 18:38:58.991246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:71072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.115 [2024-10-08 18:38:58.991259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.115 [2024-10-08 18:38:58.991274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:71080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.115 [2024-10-08 18:38:58.991288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.115 [2024-10-08 18:38:58.991304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:71088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.115 [2024-10-08 18:38:58.991318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.115 [2024-10-08 18:38:58.991333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:71096 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.115 [2024-10-08 18:38:58.991347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.115 [2024-10-08 18:38:58.991363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:71104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.115 [2024-10-08 18:38:58.991377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.115 [2024-10-08 18:38:58.991396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:71112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.115 [2024-10-08 18:38:58.991410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.115 [2024-10-08 18:38:58.991425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:71120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.115 [2024-10-08 18:38:58.991439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.115 [2024-10-08 18:38:58.991454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:71128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.115 [2024-10-08 18:38:58.991468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.115 [2024-10-08 18:38:58.991483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:71136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.115 [2024-10-08 18:38:58.991501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.115 [2024-10-08 18:38:58.991517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:71144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.115 [2024-10-08 18:38:58.991531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.115 [2024-10-08 18:38:58.991546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:71152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.115 [2024-10-08 18:38:58.991560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.115 [2024-10-08 18:38:58.991575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:71160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.115 [2024-10-08 18:38:58.991589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.115 [2024-10-08 18:38:58.991605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:71168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.115 [2024-10-08 18:38:58.991620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.115 [2024-10-08 18:38:58.991635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:71176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:45.115 [2024-10-08 18:38:58.991658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.115 [2024-10-08 18:38:58.991676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:71184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.115 [2024-10-08 18:38:58.991691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.115 [2024-10-08 18:38:58.991707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:71192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.115 [2024-10-08 18:38:58.991721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.115 [2024-10-08 18:38:58.991736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:71216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.115 [2024-10-08 18:38:58.991750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.116 [2024-10-08 18:38:58.991766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:71224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.116 [2024-10-08 18:38:58.991780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.116 [2024-10-08 18:38:58.991795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:71232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.116 [2024-10-08 18:38:58.991810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.116 [2024-10-08 18:38:58.991825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:71240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.116 [2024-10-08 18:38:58.991838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.116 [2024-10-08 18:38:58.991853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:71248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.116 [2024-10-08 18:38:58.991867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.116 [2024-10-08 18:38:58.991886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:71256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.116 [2024-10-08 18:38:58.991901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.116 [2024-10-08 18:38:58.991915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:71264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.116 [2024-10-08 18:38:58.991929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.116 [2024-10-08 18:38:58.991944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:71272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.116 [2024-10-08 18:38:58.991957] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.116 [2024-10-08 18:38:58.991973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:71280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.116 [2024-10-08 18:38:58.991986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.116 [2024-10-08 18:38:58.992001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:71288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.116 [2024-10-08 18:38:58.992014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.116 [2024-10-08 18:38:58.992029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:71296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.116 [2024-10-08 18:38:58.992042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.116 [2024-10-08 18:38:58.992057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:71304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.116 [2024-10-08 18:38:58.992071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.116 [2024-10-08 18:38:58.992087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:71312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.116 [2024-10-08 18:38:58.992100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.116 [2024-10-08 18:38:58.992115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:71320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.116 [2024-10-08 18:38:58.992129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.116 [2024-10-08 18:38:58.992144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:71328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.116 [2024-10-08 18:38:58.992158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.116 [2024-10-08 18:38:58.992173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:71336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.116 [2024-10-08 18:38:58.992187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.116 [2024-10-08 18:38:58.992201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:71344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.116 [2024-10-08 18:38:58.992215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.116 [2024-10-08 18:38:58.992230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:71352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.116 [2024-10-08 18:38:58.992252] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.116 [2024-10-08 18:38:58.992268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:71360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.116 [2024-10-08 18:38:58.992282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.116 [2024-10-08 18:38:58.992297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:71368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.116 [2024-10-08 18:38:58.992311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.116 [2024-10-08 18:38:58.992326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:71376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.116 [2024-10-08 18:38:58.992339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.116 [2024-10-08 18:38:58.992354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:71384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.116 [2024-10-08 18:38:58.992367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.116 [2024-10-08 18:38:58.992382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:71392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.116 [2024-10-08 18:38:58.992396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.116 [2024-10-08 18:38:58.992411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:71400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.116 [2024-10-08 18:38:58.992424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.116 [2024-10-08 18:38:58.992439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:71408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.116 [2024-10-08 18:38:58.992452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.116 [2024-10-08 18:38:58.992467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:71416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.116 [2024-10-08 18:38:58.992481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.116 [2024-10-08 18:38:58.992495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:71424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.116 [2024-10-08 18:38:58.992509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.116 [2024-10-08 18:38:58.992524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:71432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.116 [2024-10-08 18:38:58.992537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.116 [2024-10-08 18:38:58.992554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:71440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.116 [2024-10-08 18:38:58.992568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.116 [2024-10-08 18:38:58.992583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:71448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.116 [2024-10-08 18:38:58.992597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.116 [2024-10-08 18:38:58.992612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:71456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.116 [2024-10-08 18:38:58.992630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.116 [2024-10-08 18:38:58.992645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:71464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.116 [2024-10-08 18:38:58.992666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.116 [2024-10-08 18:38:58.992682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:71472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.116 [2024-10-08 18:38:58.992696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.116 [2024-10-08 18:38:58.992712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:71480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.116 [2024-10-08 18:38:58.992725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.116 [2024-10-08 18:38:58.992740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:71488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.116 [2024-10-08 18:38:58.992754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.116 [2024-10-08 18:38:58.992769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:71496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.116 [2024-10-08 18:38:58.992783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.116 [2024-10-08 18:38:58.992798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:71504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.116 [2024-10-08 18:38:58.992811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.116 [2024-10-08 18:38:58.992826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:71512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.116 [2024-10-08 18:38:58.992839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.116 
[2024-10-08 18:38:58.992855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:71520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.116 [2024-10-08 18:38:58.992869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.116 [2024-10-08 18:38:58.992884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:71528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.116 [2024-10-08 18:38:58.992897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.116 [2024-10-08 18:38:58.992912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:71536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.116 [2024-10-08 18:38:58.992926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.116 [2024-10-08 18:38:58.992941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:71544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.116 [2024-10-08 18:38:58.992955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.116 [2024-10-08 18:38:58.992969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:71552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.116 [2024-10-08 18:38:58.992983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.117 [2024-10-08 18:38:58.993002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:71560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.117 [2024-10-08 18:38:58.993016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.117 [2024-10-08 18:38:58.993032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:71568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.117 [2024-10-08 18:38:58.993046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.117 [2024-10-08 18:38:58.993061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:71576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.117 [2024-10-08 18:38:58.993074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.117 [2024-10-08 18:38:58.993090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:71584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.117 [2024-10-08 18:38:58.993103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.117 [2024-10-08 18:38:58.993118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:71592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.117 [2024-10-08 18:38:58.993131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.117 [2024-10-08 18:38:58.993174] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.117 [2024-10-08 18:38:58.993192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71600 len:8 PRP1 0x0 PRP2 0x0 00:28:45.117 [2024-10-08 18:38:58.993206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.117 [2024-10-08 18:38:58.993224] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.117 [2024-10-08 18:38:58.993236] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.117 [2024-10-08 18:38:58.993248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71608 len:8 PRP1 0x0 PRP2 0x0 00:28:45.117 [2024-10-08 18:38:58.993261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.117 [2024-10-08 18:38:58.993274] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.117 [2024-10-08 18:38:58.993285] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.117 [2024-10-08 18:38:58.993296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71616 len:8 PRP1 0x0 PRP2 0x0 00:28:45.117 [2024-10-08 18:38:58.993310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.117 [2024-10-08 18:38:58.993324] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.117 [2024-10-08 18:38:58.993334] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.117 [2024-10-08 18:38:58.993345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71624 len:8 PRP1 0x0 PRP2 0x0 00:28:45.117 [2024-10-08 18:38:58.993358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.117 [2024-10-08 18:38:58.993371] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.117 [2024-10-08 18:38:58.993382] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.117 [2024-10-08 18:38:58.993393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71632 len:8 PRP1 0x0 PRP2 0x0 00:28:45.117 [2024-10-08 18:38:58.993405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.117 [2024-10-08 18:38:58.993423] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.117 [2024-10-08 18:38:58.993434] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.117 [2024-10-08 18:38:58.993445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71640 len:8 PRP1 0x0 PRP2 0x0 00:28:45.117 [2024-10-08 18:38:58.993458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.117 [2024-10-08 18:38:58.993471] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.117 [2024-10-08 18:38:58.993482] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:28:45.117 [2024-10-08 18:38:58.993494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71648 len:8 PRP1 0x0 PRP2 0x0 00:28:45.117 [2024-10-08 18:38:58.993506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.117 [2024-10-08 18:38:58.993519] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.117 [2024-10-08 18:38:58.993529] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.117 [2024-10-08 18:38:58.993540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71656 len:8 PRP1 0x0 PRP2 0x0 00:28:45.117 [2024-10-08 18:38:58.993553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.117 [2024-10-08 18:38:58.993566] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.117 [2024-10-08 18:38:58.993577] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.117 [2024-10-08 18:38:58.993588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71664 len:8 PRP1 0x0 PRP2 0x0 00:28:45.117 [2024-10-08 18:38:58.993600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.117 [2024-10-08 18:38:58.993613] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.117 [2024-10-08 18:38:58.993624] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.117 [2024-10-08 18:38:58.993635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71672 len:8 PRP1 0x0 PRP2 0x0 00:28:45.117 [2024-10-08 18:38:58.993647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.117 [2024-10-08 18:38:58.993668] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.117 [2024-10-08 18:38:58.993680] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.117 [2024-10-08 18:38:58.993691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71680 len:8 PRP1 0x0 PRP2 0x0 00:28:45.117 [2024-10-08 18:38:58.993704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.117 [2024-10-08 18:38:58.993717] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.117 [2024-10-08 18:38:58.993728] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.117 [2024-10-08 18:38:58.993738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71688 len:8 PRP1 0x0 PRP2 0x0 00:28:45.117 [2024-10-08 18:38:58.993751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.117 [2024-10-08 18:38:58.993764] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.117 [2024-10-08 18:38:58.993774] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.117 [2024-10-08 
18:38:58.993789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71696 len:8 PRP1 0x0 PRP2 0x0 00:28:45.117 [2024-10-08 18:38:58.993802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.117 [2024-10-08 18:38:58.993814] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.117 [2024-10-08 18:38:58.993825] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.117 [2024-10-08 18:38:58.993836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71704 len:8 PRP1 0x0 PRP2 0x0 00:28:45.117 [2024-10-08 18:38:58.993849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.117 [2024-10-08 18:38:58.993862] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.117 [2024-10-08 18:38:58.993873] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.117 [2024-10-08 18:38:58.993884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71712 len:8 PRP1 0x0 PRP2 0x0 00:28:45.117 [2024-10-08 18:38:58.993897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.117 [2024-10-08 18:38:58.993910] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.117 [2024-10-08 18:38:58.993920] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.117 [2024-10-08 18:38:58.993931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71720 len:8 PRP1 0x0 PRP2 0x0 00:28:45.117 [2024-10-08 18:38:58.993944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.117 [2024-10-08 18:38:58.993957] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.117 [2024-10-08 18:38:58.993968] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.117 [2024-10-08 18:38:58.993979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71728 len:8 PRP1 0x0 PRP2 0x0 00:28:45.117 [2024-10-08 18:38:58.993991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.117 [2024-10-08 18:38:58.994004] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.117 [2024-10-08 18:38:58.994015] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.117 [2024-10-08 18:38:58.994026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71736 len:8 PRP1 0x0 PRP2 0x0 00:28:45.117 [2024-10-08 18:38:58.994039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.117 [2024-10-08 18:38:58.994052] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.117 [2024-10-08 18:38:58.994062] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.117 [2024-10-08 18:38:58.994074] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71744 len:8 PRP1 0x0 PRP2 0x0 00:28:45.117 [2024-10-08 18:38:58.994086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.117 [2024-10-08 18:38:58.994099] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.117 [2024-10-08 18:38:58.994109] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.117 [2024-10-08 18:38:58.994120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71752 len:8 PRP1 0x0 PRP2 0x0 00:28:45.117 [2024-10-08 18:38:58.994133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.117 [2024-10-08 18:38:58.994146] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.117 [2024-10-08 18:38:58.994161] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.117 [2024-10-08 18:38:58.994172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71760 len:8 PRP1 0x0 PRP2 0x0 00:28:45.117 [2024-10-08 18:38:58.994185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.117 [2024-10-08 18:38:58.994198] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.117 [2024-10-08 18:38:58.994209] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.117 [2024-10-08 18:38:58.994220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71768 len:8 PRP1 0x0 PRP2 0x0 00:28:45.118 [2024-10-08 18:38:58.994232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.118 [2024-10-08 18:38:58.994247] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.118 [2024-10-08 18:38:58.994259] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.118 [2024-10-08 18:38:58.994271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71776 len:8 PRP1 0x0 PRP2 0x0 00:28:45.118 [2024-10-08 18:38:58.994284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.118 [2024-10-08 18:38:58.994297] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.118 [2024-10-08 18:38:58.994308] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.118 [2024-10-08 18:38:58.994320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71784 len:8 PRP1 0x0 PRP2 0x0 00:28:45.118 [2024-10-08 18:38:58.994332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.118 [2024-10-08 18:38:58.994345] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.118 [2024-10-08 18:38:58.994356] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.118 [2024-10-08 18:38:58.994367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:71792 len:8 PRP1 0x0 PRP2 0x0 00:28:45.118 [2024-10-08 18:38:58.994380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.118 [2024-10-08 18:38:58.994393] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.118 [2024-10-08 18:38:58.994404] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.118 [2024-10-08 18:38:58.994415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71800 len:8 PRP1 0x0 PRP2 0x0 00:28:45.118 [2024-10-08 18:38:58.994428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.118 [2024-10-08 18:38:58.994441] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.118 [2024-10-08 18:38:58.994452] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.118 [2024-10-08 18:38:58.994463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71808 len:8 PRP1 0x0 PRP2 0x0 00:28:45.118 [2024-10-08 18:38:58.994475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.118 [2024-10-08 18:38:58.994489] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.118 [2024-10-08 18:38:58.994501] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.118 [2024-10-08 18:38:58.994512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71816 len:8 PRP1 0x0 PRP2 0x0 00:28:45.118 [2024-10-08 18:38:58.994525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.118 [2024-10-08 18:38:58.994542] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.118 [2024-10-08 18:38:58.994553] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.118 [2024-10-08 18:38:58.994564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71824 len:8 PRP1 0x0 PRP2 0x0 00:28:45.118 [2024-10-08 18:38:58.994577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.118 [2024-10-08 18:38:58.994590] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.118 [2024-10-08 18:38:58.994601] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.118 [2024-10-08 18:38:58.994612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71832 len:8 PRP1 0x0 PRP2 0x0 00:28:45.118 [2024-10-08 18:38:58.994625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.118 [2024-10-08 18:38:58.994638] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.118 [2024-10-08 18:38:58.994667] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.118 [2024-10-08 18:38:58.994682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71840 len:8 PRP1 0x0 PRP2 0x0 
00:28:45.118 [2024-10-08 18:38:58.994695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.118 [2024-10-08 18:38:58.994709] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.118 [2024-10-08 18:38:58.994720] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.118 [2024-10-08 18:38:58.994731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71848 len:8 PRP1 0x0 PRP2 0x0 00:28:45.118 [2024-10-08 18:38:58.994744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.118 [2024-10-08 18:38:58.994757] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.118 [2024-10-08 18:38:58.994768] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.118 [2024-10-08 18:38:58.994779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71856 len:8 PRP1 0x0 PRP2 0x0 00:28:45.118 [2024-10-08 18:38:58.994791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.118 [2024-10-08 18:38:58.994806] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.118 [2024-10-08 18:38:58.994817] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.118 [2024-10-08 18:38:58.994828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71864 len:8 PRP1 0x0 PRP2 0x0 00:28:45.118 [2024-10-08 18:38:58.994841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.118 [2024-10-08 18:38:58.994854] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.118 [2024-10-08 18:38:58.994865] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.118 [2024-10-08 18:38:58.994876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71872 len:8 PRP1 0x0 PRP2 0x0 00:28:45.118 [2024-10-08 18:38:58.994888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.118 [2024-10-08 18:38:58.994901] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.118 [2024-10-08 18:38:58.994912] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.118 [2024-10-08 18:38:58.994923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71880 len:8 PRP1 0x0 PRP2 0x0 00:28:45.118 [2024-10-08 18:38:58.994940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.118 [2024-10-08 18:38:58.994954] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.118 [2024-10-08 18:38:58.994964] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.118 [2024-10-08 18:38:58.994975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71888 len:8 PRP1 0x0 PRP2 0x0 00:28:45.118 [2024-10-08 18:38:58.994989] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.118 [2024-10-08 18:38:58.995002] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.118 [2024-10-08 18:38:58.995013] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.118 [2024-10-08 18:38:58.995024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71896 len:8 PRP1 0x0 PRP2 0x0 00:28:45.118 [2024-10-08 18:38:58.995037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.118 [2024-10-08 18:38:58.995050] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.118 [2024-10-08 18:38:58.995068] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.118 [2024-10-08 18:38:58.995081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71904 len:8 PRP1 0x0 PRP2 0x0 00:28:45.118 [2024-10-08 18:38:58.995094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.118 [2024-10-08 18:38:58.995107] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.118 [2024-10-08 18:38:58.995118] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.118 [2024-10-08 18:38:58.995129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71912 len:8 PRP1 0x0 PRP2 0x0 00:28:45.118 [2024-10-08 18:38:58.995142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.118 [2024-10-08 18:38:58.995155] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.118 [2024-10-08 18:38:58.995165] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.118 [2024-10-08 18:38:58.995177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71920 len:8 PRP1 0x0 PRP2 0x0 00:28:45.118 [2024-10-08 18:38:58.995190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.118 [2024-10-08 18:38:58.995203] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.118 [2024-10-08 18:38:58.995214] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.118 [2024-10-08 18:38:58.995225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71928 len:8 PRP1 0x0 PRP2 0x0 00:28:45.118 [2024-10-08 18:38:58.995237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.118 [2024-10-08 18:38:58.995250] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.118 [2024-10-08 18:38:58.995261] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.118 [2024-10-08 18:38:58.995272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71936 len:8 PRP1 0x0 PRP2 0x0 00:28:45.118 [2024-10-08 18:38:58.995284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.118 [2024-10-08 18:38:58.995298] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.118 [2024-10-08 18:38:58.995309] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.118 [2024-10-08 18:38:58.995323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71944 len:8 PRP1 0x0 PRP2 0x0 00:28:45.118 [2024-10-08 18:38:58.995337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.118 [2024-10-08 18:38:58.995350] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.118 [2024-10-08 18:38:58.995361] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.118 [2024-10-08 18:38:58.995372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71952 len:8 PRP1 0x0 PRP2 0x0 00:28:45.118 [2024-10-08 18:38:58.995384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.118 [2024-10-08 18:38:58.995397] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.118 [2024-10-08 18:38:58.995407] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.118 [2024-10-08 18:38:58.995418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71960 len:8 PRP1 0x0 PRP2 0x0 00:28:45.118 [2024-10-08 18:38:58.995430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.118 [2024-10-08 18:38:58.995443] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.118 [2024-10-08 18:38:58.995454] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.119 [2024-10-08 18:38:58.995464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71968 len:8 PRP1 0x0 PRP2 0x0 00:28:45.119 [2024-10-08 18:38:58.995476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.119 [2024-10-08 18:38:58.995489] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.119 [2024-10-08 18:38:58.995499] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.119 [2024-10-08 18:38:58.995510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71200 len:8 PRP1 0x0 PRP2 0x0 00:28:45.119 [2024-10-08 18:38:58.995522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.119 [2024-10-08 18:38:58.995535] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.119 [2024-10-08 18:38:58.995545] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.119 [2024-10-08 18:38:58.995556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71208 len:8 PRP1 0x0 PRP2 0x0 00:28:45.119 [2024-10-08 18:38:58.995568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:28:45.119 [2024-10-08 18:38:58.995632] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x13bd4e0 was disconnected and freed. reset controller. 00:28:45.119 [2024-10-08 18:38:58.995658] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:28:45.119 [2024-10-08 18:38:58.995695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:45.119 [2024-10-08 18:38:58.995713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.119 [2024-10-08 18:38:58.995728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:45.119 [2024-10-08 18:38:58.995741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.119 [2024-10-08 18:38:58.995755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:45.119 [2024-10-08 18:38:58.995779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.119 [2024-10-08 18:38:58.995793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:45.119 [2024-10-08 18:38:58.995806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.119 [2024-10-08 18:38:58.995819] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.119 [2024-10-08 18:38:58.995864] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139acc0 (9): Bad file descriptor 00:28:45.119 [2024-10-08 18:38:58.999086] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.119 [2024-10-08 18:38:59.073802] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:28:45.119 8092.50 IOPS, 31.61 MiB/s [2024-10-08T16:39:13.656Z] 8306.33 IOPS, 32.45 MiB/s [2024-10-08T16:39:13.656Z] 8399.25 IOPS, 32.81 MiB/s [2024-10-08T16:39:13.656Z] 8416.00 IOPS, 32.88 MiB/s [2024-10-08T16:39:13.656Z] [2024-10-08 18:39:03.217619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:117664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.119 [2024-10-08 18:39:03.217694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.119 [2024-10-08 18:39:03.217723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:117672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.119 [2024-10-08 18:39:03.217740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.119 [2024-10-08 18:39:03.217756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.119 [2024-10-08 18:39:03.217771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.119 [2024-10-08 18:39:03.217787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:117688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.119 [2024-10-08 18:39:03.217802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.119 [2024-10-08 18:39:03.217817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:117696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.119 [2024-10-08 18:39:03.217830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.119 [2024-10-08 18:39:03.217846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:117704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.119 [2024-10-08 18:39:03.217859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.119 [2024-10-08 18:39:03.217875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:117712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.119 [2024-10-08 18:39:03.217888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.119 [2024-10-08 18:39:03.217903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:117720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.119 [2024-10-08 18:39:03.217917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.119 [2024-10-08 18:39:03.217932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:117728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.119 [2024-10-08 18:39:03.217946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.119 [2024-10-08 18:39:03.217976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:117736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:45.119 [2024-10-08 18:39:03.218000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.119 [2024-10-08 18:39:03.218016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:117744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.119 [2024-10-08 18:39:03.218030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.119 [2024-10-08 18:39:03.218045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:117752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.119 [2024-10-08 18:39:03.218058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.119 [2024-10-08 18:39:03.218073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:117760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.119 [2024-10-08 18:39:03.218086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.119 [2024-10-08 18:39:03.218100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:117768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.119 [2024-10-08 18:39:03.218114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.119 [2024-10-08 18:39:03.218129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:117776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.119 [2024-10-08 18:39:03.218142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.119 [2024-10-08 18:39:03.218157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:117784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.119 [2024-10-08 18:39:03.218171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.119 [2024-10-08 18:39:03.218185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:117792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.119 [2024-10-08 18:39:03.218198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.119 [2024-10-08 18:39:03.218214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:117800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.119 [2024-10-08 18:39:03.218227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.119 [2024-10-08 18:39:03.218242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:117808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.119 [2024-10-08 18:39:03.218255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.119 [2024-10-08 18:39:03.218270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:117816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.119 [2024-10-08 18:39:03.218283] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.119 [2024-10-08 18:39:03.218298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:117824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.119 [2024-10-08 18:39:03.218311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.119 [2024-10-08 18:39:03.218325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:117832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.119 [2024-10-08 18:39:03.218338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.119 [2024-10-08 18:39:03.218357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:117840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.119 [2024-10-08 18:39:03.218371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.119 [2024-10-08 18:39:03.218386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:117848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.119 [2024-10-08 18:39:03.218402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.120 [2024-10-08 18:39:03.218417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:117856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.120 [2024-10-08 18:39:03.218430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.120 [2024-10-08 18:39:03.218444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:117864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.120 [2024-10-08 18:39:03.218457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.120 [2024-10-08 18:39:03.218472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:117872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.120 [2024-10-08 18:39:03.218485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.120 [2024-10-08 18:39:03.218499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:117880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.120 [2024-10-08 18:39:03.218512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.120 [2024-10-08 18:39:03.218527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:117888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.120 [2024-10-08 18:39:03.218541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.120 [2024-10-08 18:39:03.218555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:117896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.120 [2024-10-08 18:39:03.218568] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.120 [2024-10-08 18:39:03.218582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:117904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.120 [2024-10-08 18:39:03.218595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.120 [2024-10-08 18:39:03.218610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:117912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.120 [2024-10-08 18:39:03.218623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.120 [2024-10-08 18:39:03.218638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:117920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.120 [2024-10-08 18:39:03.218675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.120 [2024-10-08 18:39:03.218695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:117928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.120 [2024-10-08 18:39:03.218709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.120 [2024-10-08 18:39:03.218725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:117936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.120 [2024-10-08 18:39:03.218743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.120 [2024-10-08 18:39:03.218759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:117944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.120 [2024-10-08 18:39:03.218772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.120 [2024-10-08 18:39:03.218788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:117952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.120 [2024-10-08 18:39:03.218802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.120 [2024-10-08 18:39:03.218817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:117960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.120 [2024-10-08 18:39:03.218831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.120 [2024-10-08 18:39:03.218846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:117968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.120 [2024-10-08 18:39:03.218859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.120 [2024-10-08 18:39:03.218874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:117976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.120 [2024-10-08 18:39:03.218888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.120 [2024-10-08 18:39:03.218903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:117984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.120 [2024-10-08 18:39:03.218916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.120 [2024-10-08 18:39:03.218931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:117992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.120 [2024-10-08 18:39:03.218944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.120 [2024-10-08 18:39:03.218976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:118000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.120 [2024-10-08 18:39:03.218990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.120 [2024-10-08 18:39:03.219004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:118008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.120 [2024-10-08 18:39:03.219017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.120 [2024-10-08 18:39:03.219032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:118016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.120 [2024-10-08 18:39:03.219045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.120 [2024-10-08 18:39:03.219059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:118024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.120 [2024-10-08 18:39:03.219073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.120 [2024-10-08 18:39:03.219087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:118032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.120 [2024-10-08 18:39:03.219100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.120 [2024-10-08 18:39:03.219119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:118040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.120 [2024-10-08 18:39:03.219133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.120 [2024-10-08 18:39:03.219147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:118048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.120 [2024-10-08 18:39:03.219161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.120 [2024-10-08 18:39:03.219175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:118056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.120 [2024-10-08 18:39:03.219188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.120 [2024-10-08 18:39:03.219204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:118064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.120 [2024-10-08 18:39:03.219217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.120 [2024-10-08 18:39:03.219231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:118072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.120 [2024-10-08 18:39:03.219244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.120 [2024-10-08 18:39:03.219258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:118080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.120 [2024-10-08 18:39:03.219272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.120 [2024-10-08 18:39:03.219286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:118088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.120 [2024-10-08 18:39:03.219299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.120 [2024-10-08 18:39:03.219314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:118096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.120 [2024-10-08 18:39:03.219327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.120 [2024-10-08 18:39:03.219342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:118104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.120 [2024-10-08 18:39:03.219355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.120 [2024-10-08 18:39:03.219370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:117096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.120 [2024-10-08 18:39:03.219383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.120 [2024-10-08 18:39:03.219398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:117104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.120 [2024-10-08 18:39:03.219411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.120 [2024-10-08 18:39:03.219426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:117112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.120 [2024-10-08 18:39:03.219439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.120 [2024-10-08 18:39:03.219453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:117120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.120 [2024-10-08 18:39:03.219469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:45.120 [2024-10-08 18:39:03.219485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:117128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.120 [2024-10-08 18:39:03.219499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.120 [2024-10-08 18:39:03.219514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:117136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.120 [2024-10-08 18:39:03.219527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.120 [2024-10-08 18:39:03.219541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:117144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.120 [2024-10-08 18:39:03.219555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.120 [2024-10-08 18:39:03.219585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:117152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.120 [2024-10-08 18:39:03.219600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.120 [2024-10-08 18:39:03.219615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:117160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.120 [2024-10-08 18:39:03.219628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.120 [2024-10-08 18:39:03.219645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:117168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.120 [2024-10-08 18:39:03.219667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.121 [2024-10-08 18:39:03.219683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:117176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.121 [2024-10-08 18:39:03.219697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.121 [2024-10-08 18:39:03.219713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:117184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.121 [2024-10-08 18:39:03.219726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.121 [2024-10-08 18:39:03.219741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:117192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.121 [2024-10-08 18:39:03.219755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.121 [2024-10-08 18:39:03.219770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:117200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.121 [2024-10-08 18:39:03.219784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.121 [2024-10-08 
18:39:03.219798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:117208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.121 [2024-10-08 18:39:03.219812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.121 [2024-10-08 18:39:03.219827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:117216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.121 [2024-10-08 18:39:03.219841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.121 [2024-10-08 18:39:03.219860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:117224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.121 [2024-10-08 18:39:03.219874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.121 [2024-10-08 18:39:03.219889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:117232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.121 [2024-10-08 18:39:03.219903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.121 [2024-10-08 18:39:03.219918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:117240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.121 [2024-10-08 18:39:03.219932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.121 [2024-10-08 18:39:03.219947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:117248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.121 [2024-10-08 18:39:03.219960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.121 [2024-10-08 18:39:03.219975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:117256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.121 [2024-10-08 18:39:03.219988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.121 [2024-10-08 18:39:03.220003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:117264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.121 [2024-10-08 18:39:03.220017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.121 [2024-10-08 18:39:03.220032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:117272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.121 [2024-10-08 18:39:03.220045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.121 [2024-10-08 18:39:03.220060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:118112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.121 [2024-10-08 18:39:03.220074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.121 [2024-10-08 18:39:03.220089] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:117280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.121 [2024-10-08 18:39:03.220102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.121 [2024-10-08 18:39:03.220118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:117288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.121 [2024-10-08 18:39:03.220132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.121 [2024-10-08 18:39:03.220146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:117296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.121 [2024-10-08 18:39:03.220160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.121 [2024-10-08 18:39:03.220175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:117304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.121 [2024-10-08 18:39:03.220189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.121 [2024-10-08 18:39:03.220204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:117312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.121 [2024-10-08 18:39:03.220221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.121 [2024-10-08 18:39:03.220236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:117320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.121 [2024-10-08 18:39:03.220250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.121 [2024-10-08 18:39:03.220265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:117328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.121 [2024-10-08 18:39:03.220279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.121 [2024-10-08 18:39:03.220294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:117336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.121 [2024-10-08 18:39:03.220307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.121 [2024-10-08 18:39:03.220322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:117344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.121 [2024-10-08 18:39:03.220343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.121 [2024-10-08 18:39:03.220359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:117352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.121 [2024-10-08 18:39:03.220373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.121 [2024-10-08 18:39:03.220388] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:111 nsid:1 lba:117360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.121 [2024-10-08 18:39:03.220402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.121 [2024-10-08 18:39:03.220417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:117368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.121 [2024-10-08 18:39:03.220431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.121 [2024-10-08 18:39:03.220446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:117376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.121 [2024-10-08 18:39:03.220459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.121 [2024-10-08 18:39:03.220474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:117384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.121 [2024-10-08 18:39:03.220488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.121 [2024-10-08 18:39:03.220503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:117392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.121 [2024-10-08 18:39:03.220517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.121 [2024-10-08 18:39:03.220532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:117400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.121 [2024-10-08 18:39:03.220545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.121 [2024-10-08 18:39:03.220560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:117408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.121 [2024-10-08 18:39:03.220573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.121 [2024-10-08 18:39:03.220589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:117416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.121 [2024-10-08 18:39:03.220607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.121 [2024-10-08 18:39:03.220623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:117424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.121 [2024-10-08 18:39:03.220636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.121 [2024-10-08 18:39:03.220658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:117432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.121 [2024-10-08 18:39:03.220674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.121 [2024-10-08 18:39:03.220689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:117440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.121 [2024-10-08 18:39:03.220703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.121 [2024-10-08 18:39:03.220718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:117448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.121 [2024-10-08 18:39:03.220732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.121 [2024-10-08 18:39:03.220747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:117456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.121 [2024-10-08 18:39:03.220761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.121 [2024-10-08 18:39:03.220775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:117464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.121 [2024-10-08 18:39:03.220789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.121 [2024-10-08 18:39:03.220804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:117472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.121 [2024-10-08 18:39:03.220820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.121 [2024-10-08 18:39:03.220836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:117480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.121 [2024-10-08 18:39:03.220850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.121 [2024-10-08 18:39:03.220865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:117488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.121 [2024-10-08 18:39:03.220878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.121 [2024-10-08 18:39:03.220893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:117496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.121 [2024-10-08 18:39:03.220907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.122 [2024-10-08 18:39:03.220922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:117504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.122 [2024-10-08 18:39:03.220936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.122 [2024-10-08 18:39:03.220951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:117512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.122 [2024-10-08 18:39:03.220965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.122 [2024-10-08 18:39:03.220985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:117520 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.122 [2024-10-08 18:39:03.221000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.122 [2024-10-08 18:39:03.221016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:117528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.122 [2024-10-08 18:39:03.221029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.122 [2024-10-08 18:39:03.221044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:117536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.122 [2024-10-08 18:39:03.221058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.122 [2024-10-08 18:39:03.221074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:117544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.122 [2024-10-08 18:39:03.221088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.122 [2024-10-08 18:39:03.221104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:117552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.122 [2024-10-08 18:39:03.221117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.122 [2024-10-08 18:39:03.221133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:117560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.122 [2024-10-08 18:39:03.221147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.122 [2024-10-08 18:39:03.221162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:117568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.122 [2024-10-08 18:39:03.221176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.122 [2024-10-08 18:39:03.221191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:117576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.122 [2024-10-08 18:39:03.221204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.122 [2024-10-08 18:39:03.221219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:117584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.122 [2024-10-08 18:39:03.221233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.122 [2024-10-08 18:39:03.221248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:117592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.122 [2024-10-08 18:39:03.221262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.122 [2024-10-08 18:39:03.221277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:117600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:45.122 [2024-10-08 18:39:03.221291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.122 [2024-10-08 18:39:03.221306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:117608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.122 [2024-10-08 18:39:03.221321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.122 [2024-10-08 18:39:03.221336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:117616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.122 [2024-10-08 18:39:03.221353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.122 [2024-10-08 18:39:03.221370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:117624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.122 [2024-10-08 18:39:03.221383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.122 [2024-10-08 18:39:03.221399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:117632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.122 [2024-10-08 18:39:03.221413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.122 [2024-10-08 18:39:03.221428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:117640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.122 [2024-10-08 18:39:03.221442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.122 [2024-10-08 18:39:03.221457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:117648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.122 [2024-10-08 18:39:03.221471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.122 [2024-10-08 18:39:03.221502] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.122 [2024-10-08 18:39:03.221518] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.122 [2024-10-08 18:39:03.221530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117656 len:8 PRP1 0x0 PRP2 0x0 00:28:45.122 [2024-10-08 18:39:03.221543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.122 [2024-10-08 18:39:03.221606] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x14f3fd0 was disconnected and freed. reset controller. 
00:28:45.122 [2024-10-08 18:39:03.221625] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:28:45.122 [2024-10-08 18:39:03.221666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:45.122 [2024-10-08 18:39:03.221687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.122 [2024-10-08 18:39:03.221702] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:45.122 [2024-10-08 18:39:03.221725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.122 [2024-10-08 18:39:03.221739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:45.122 [2024-10-08 18:39:03.221751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.122 [2024-10-08 18:39:03.221765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:45.122 [2024-10-08 18:39:03.221778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.122 [2024-10-08 18:39:03.221791] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.122 [2024-10-08 18:39:03.221829] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139acc0 (9): Bad file descriptor 00:28:45.122 [2024-10-08 18:39:03.225062] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.122 [2024-10-08 18:39:03.297533] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:28:45.122 8330.33 IOPS, 32.54 MiB/s [2024-10-08T16:39:13.659Z] 8373.86 IOPS, 32.71 MiB/s [2024-10-08T16:39:13.659Z] 8388.25 IOPS, 32.77 MiB/s [2024-10-08T16:39:13.659Z] 8429.11 IOPS, 32.93 MiB/s [2024-10-08T16:39:13.659Z] 8456.70 IOPS, 33.03 MiB/s [2024-10-08T16:39:13.659Z] [2024-10-08 18:39:08.130983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:45.122 [2024-10-08 18:39:08.131057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.122 [2024-10-08 18:39:08.131089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:45.122 [2024-10-08 18:39:08.131104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.122 [2024-10-08 18:39:08.131118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:45.122 [2024-10-08 18:39:08.131130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.122 [2024-10-08 18:39:08.131144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:45.122 [2024-10-08 18:39:08.131157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.122 [2024-10-08 18:39:08.131170] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139acc0 is same with the state(6) to be set 00:28:45.122 [2024-10-08 18:39:08.134091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:79800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.122 [2024-10-08 18:39:08.134119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.122 [2024-10-08 18:39:08.134163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:79856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.122 [2024-10-08 18:39:08.134180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.122 [2024-10-08 18:39:08.134197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:79864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.122 [2024-10-08 18:39:08.134212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.122 [2024-10-08 18:39:08.134228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:79872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.122 [2024-10-08 18:39:08.134242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.122 [2024-10-08 18:39:08.134258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:79880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.122 [2024-10-08 18:39:08.134273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:45.122 [2024-10-08 18:39:08.134288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:79888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.122 [2024-10-08 18:39:08.134302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.122 [2024-10-08 18:39:08.134318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:79896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.122 [2024-10-08 18:39:08.134332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.122 [2024-10-08 18:39:08.134347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:79904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.122 [2024-10-08 18:39:08.134368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.122 [2024-10-08 18:39:08.134385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:79912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.122 [2024-10-08 18:39:08.134399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.122 [2024-10-08 18:39:08.134414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:79920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.123 [2024-10-08 18:39:08.134428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.123 [2024-10-08 18:39:08.134443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:79928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.123 [2024-10-08 18:39:08.134458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.123 [2024-10-08 18:39:08.134473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:79936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.123 [2024-10-08 18:39:08.134487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.123 [2024-10-08 18:39:08.134502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:79944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.123 [2024-10-08 18:39:08.134516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.123 [2024-10-08 18:39:08.134531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:79952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.123 [2024-10-08 18:39:08.134545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.123 [2024-10-08 18:39:08.134560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:79960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.123 [2024-10-08 18:39:08.134574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.123 [2024-10-08 
18:39:08.134589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:79968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.123 [2024-10-08 18:39:08.134603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.123 [2024-10-08 18:39:08.134618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:79976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.123 [2024-10-08 18:39:08.134632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.123 [2024-10-08 18:39:08.134647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:79984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.123 [2024-10-08 18:39:08.134671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.123 [2024-10-08 18:39:08.134687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:79992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.123 [2024-10-08 18:39:08.134701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.123 [2024-10-08 18:39:08.134715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:80000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.123 [2024-10-08 18:39:08.134729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.123 [2024-10-08 18:39:08.134749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:80008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.123 [2024-10-08 18:39:08.134763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.123 [2024-10-08 18:39:08.134779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:80016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.123 [2024-10-08 18:39:08.134793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.123 [2024-10-08 18:39:08.134808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:80024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.123 [2024-10-08 18:39:08.134822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.123 [2024-10-08 18:39:08.134838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:80032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.123 [2024-10-08 18:39:08.134851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.123 [2024-10-08 18:39:08.134866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:80040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.123 [2024-10-08 18:39:08.134880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.123 [2024-10-08 18:39:08.134895] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:80048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.123 [2024-10-08 18:39:08.134909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.123 [2024-10-08 18:39:08.134930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:80056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.123 [2024-10-08 18:39:08.134944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.123 [2024-10-08 18:39:08.134959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.123 [2024-10-08 18:39:08.134972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.123 [2024-10-08 18:39:08.134987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:80072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.123 [2024-10-08 18:39:08.135001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.123 [2024-10-08 18:39:08.135016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:80080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.123 [2024-10-08 18:39:08.135029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.123 [2024-10-08 18:39:08.135044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:80088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.123 [2024-10-08 18:39:08.135058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.123 [2024-10-08 18:39:08.135073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:80096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.123 [2024-10-08 18:39:08.135087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.123 [2024-10-08 18:39:08.135102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:80104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.123 [2024-10-08 18:39:08.135116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.123 [2024-10-08 18:39:08.135136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:80112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.123 [2024-10-08 18:39:08.135151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.123 [2024-10-08 18:39:08.135166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:80120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.123 [2024-10-08 18:39:08.135181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.123 [2024-10-08 18:39:08.135196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:6 nsid:1 lba:80128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.123 [2024-10-08 18:39:08.135210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.123 [2024-10-08 18:39:08.135225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:80136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.123 [2024-10-08 18:39:08.135239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.123 [2024-10-08 18:39:08.135255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:80144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.123 [2024-10-08 18:39:08.135269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.123 [2024-10-08 18:39:08.135284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:80152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.123 [2024-10-08 18:39:08.135298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.123 [2024-10-08 18:39:08.135313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:80160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.123 [2024-10-08 18:39:08.135327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.123 [2024-10-08 18:39:08.135342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:80168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.123 [2024-10-08 18:39:08.135356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.123 [2024-10-08 18:39:08.135371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:80176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.123 [2024-10-08 18:39:08.135385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.123 [2024-10-08 18:39:08.135401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:80184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.123 [2024-10-08 18:39:08.135414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.123 [2024-10-08 18:39:08.135430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:80192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.123 [2024-10-08 18:39:08.135443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.123 [2024-10-08 18:39:08.135459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:80200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.123 [2024-10-08 18:39:08.135473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.123 [2024-10-08 18:39:08.135488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:80208 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:28:45.123 [2024-10-08 18:39:08.135505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.123 [2024-10-08 18:39:08.135521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:80216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.124 [2024-10-08 18:39:08.135536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.124 [2024-10-08 18:39:08.135551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:80224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.124 [2024-10-08 18:39:08.135565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.124 [2024-10-08 18:39:08.135582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:80232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.124 [2024-10-08 18:39:08.135597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.124 [2024-10-08 18:39:08.135612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:80240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.124 [2024-10-08 18:39:08.135626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.124 [2024-10-08 18:39:08.135641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:80248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.124 [2024-10-08 18:39:08.135661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.124 [2024-10-08 18:39:08.135677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:80256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.124 [2024-10-08 18:39:08.135691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.124 [2024-10-08 18:39:08.135716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:80264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.124 [2024-10-08 18:39:08.135730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.124 [2024-10-08 18:39:08.135745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:80272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.124 [2024-10-08 18:39:08.135759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.124 [2024-10-08 18:39:08.135774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:80280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.124 [2024-10-08 18:39:08.135788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.124 [2024-10-08 18:39:08.135803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.124 [2024-10-08 
18:39:08.135817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.124 [2024-10-08 18:39:08.135832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:80296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.124 [2024-10-08 18:39:08.135845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.124 [2024-10-08 18:39:08.135860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:80304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.124 [2024-10-08 18:39:08.135874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.124 [2024-10-08 18:39:08.135893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:80312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.124 [2024-10-08 18:39:08.135917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.124 [2024-10-08 18:39:08.135931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.124 [2024-10-08 18:39:08.135945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.124 [2024-10-08 18:39:08.135960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:80328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.124 [2024-10-08 18:39:08.135974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.124 [2024-10-08 18:39:08.135990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:80336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.124 [2024-10-08 18:39:08.136003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.124 [2024-10-08 18:39:08.136018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:80344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.124 [2024-10-08 18:39:08.136032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.124 [2024-10-08 18:39:08.136047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:80352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.124 [2024-10-08 18:39:08.136061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.124 [2024-10-08 18:39:08.136076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:80360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.124 [2024-10-08 18:39:08.136090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.124 [2024-10-08 18:39:08.136106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:80368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.124 [2024-10-08 18:39:08.136120] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.124 [2024-10-08 18:39:08.136136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:80376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.124 [2024-10-08 18:39:08.136150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.124 [2024-10-08 18:39:08.136166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:80384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.124 [2024-10-08 18:39:08.136179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.124 [2024-10-08 18:39:08.136194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:80392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.124 [2024-10-08 18:39:08.136208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.124 [2024-10-08 18:39:08.136224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:79808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.124 [2024-10-08 18:39:08.136238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.124 [2024-10-08 18:39:08.136253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:79816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.124 [2024-10-08 18:39:08.136267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.124 [2024-10-08 18:39:08.136286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:79824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.124 [2024-10-08 18:39:08.136301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.124 [2024-10-08 18:39:08.136316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:79832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.124 [2024-10-08 18:39:08.136330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.124 [2024-10-08 18:39:08.136345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:79840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.124 [2024-10-08 18:39:08.136359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.124 [2024-10-08 18:39:08.136374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:79848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.124 [2024-10-08 18:39:08.136388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.124 [2024-10-08 18:39:08.136403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:80400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.124 [2024-10-08 18:39:08.136417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.124 [2024-10-08 18:39:08.136432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:80408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.124 [2024-10-08 18:39:08.136446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.124 [2024-10-08 18:39:08.136461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:80416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.124 [2024-10-08 18:39:08.136474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.124 [2024-10-08 18:39:08.136489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:80424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.124 [2024-10-08 18:39:08.136503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.124 [2024-10-08 18:39:08.136518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:80432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.124 [2024-10-08 18:39:08.136532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.124 [2024-10-08 18:39:08.136547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:80440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.124 [2024-10-08 18:39:08.136561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.124 [2024-10-08 18:39:08.136576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:80448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.124 [2024-10-08 18:39:08.136590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.124 [2024-10-08 18:39:08.136605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:80456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.124 [2024-10-08 18:39:08.136619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.124 [2024-10-08 18:39:08.136634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:80464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.124 [2024-10-08 18:39:08.136659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.124 [2024-10-08 18:39:08.136677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:80472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.124 [2024-10-08 18:39:08.136695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.124 [2024-10-08 18:39:08.136710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:80480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.124 [2024-10-08 18:39:08.136723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:45.124 [2024-10-08 18:39:08.136738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:80488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.124 [2024-10-08 18:39:08.136752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.124 [2024-10-08 18:39:08.136767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:80496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.124 [2024-10-08 18:39:08.136781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.124 [2024-10-08 18:39:08.136795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:80504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.124 [2024-10-08 18:39:08.136809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.125 [2024-10-08 18:39:08.136824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:80512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.125 [2024-10-08 18:39:08.136837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.125 [2024-10-08 18:39:08.136853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:80520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.125 [2024-10-08 18:39:08.136867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.125 [2024-10-08 18:39:08.136882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:80528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.125 [2024-10-08 18:39:08.136895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.125 [2024-10-08 18:39:08.136910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:80536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.125 [2024-10-08 18:39:08.136923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.125 [2024-10-08 18:39:08.136938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:80544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.125 [2024-10-08 18:39:08.136951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.125 [2024-10-08 18:39:08.136966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:80552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.125 [2024-10-08 18:39:08.136979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.125 [2024-10-08 18:39:08.136995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:80560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.125 [2024-10-08 18:39:08.137008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.125 [2024-10-08 
18:39:08.137027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.125 [2024-10-08 18:39:08.137043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.125 [2024-10-08 18:39:08.137058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:80576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.125 [2024-10-08 18:39:08.137072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.125 [2024-10-08 18:39:08.137087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:80584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.125 [2024-10-08 18:39:08.137101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.125 [2024-10-08 18:39:08.137116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:80592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.125 [2024-10-08 18:39:08.137130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.125 [2024-10-08 18:39:08.137146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:80600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.125 [2024-10-08 18:39:08.137160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.125 [2024-10-08 18:39:08.137175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.125 [2024-10-08 18:39:08.137188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.125 [2024-10-08 18:39:08.137204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:80616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.125 [2024-10-08 18:39:08.137218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.125 [2024-10-08 18:39:08.137233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:80624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.125 [2024-10-08 18:39:08.137247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.125 [2024-10-08 18:39:08.137262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:80632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.125 [2024-10-08 18:39:08.137276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.125 [2024-10-08 18:39:08.137291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:80640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.125 [2024-10-08 18:39:08.137305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.125 [2024-10-08 18:39:08.137320] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:80648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.125 [2024-10-08 18:39:08.137333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.125 [2024-10-08 18:39:08.137365] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.125 [2024-10-08 18:39:08.137381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80656 len:8 PRP1 0x0 PRP2 0x0 00:28:45.125 [2024-10-08 18:39:08.137395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.125 [2024-10-08 18:39:08.137413] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.125 [2024-10-08 18:39:08.137429] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.125 [2024-10-08 18:39:08.137442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80664 len:8 PRP1 0x0 PRP2 0x0 00:28:45.125 [2024-10-08 18:39:08.137455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.125 [2024-10-08 18:39:08.137469] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.125 [2024-10-08 18:39:08.137480] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.125 [2024-10-08 18:39:08.137491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80672 len:8 PRP1 0x0 PRP2 0x0 00:28:45.125 [2024-10-08 18:39:08.137504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.125 [2024-10-08 18:39:08.137518] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.125 [2024-10-08 18:39:08.137529] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.125 [2024-10-08 18:39:08.137540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80680 len:8 PRP1 0x0 PRP2 0x0 00:28:45.125 [2024-10-08 18:39:08.137553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.125 [2024-10-08 18:39:08.137566] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.125 [2024-10-08 18:39:08.137578] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.125 [2024-10-08 18:39:08.137589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80688 len:8 PRP1 0x0 PRP2 0x0 00:28:45.125 [2024-10-08 18:39:08.137602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.125 [2024-10-08 18:39:08.137615] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.125 [2024-10-08 18:39:08.137626] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.125 [2024-10-08 18:39:08.137637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80696 len:8 PRP1 0x0 PRP2 0x0 00:28:45.125 [2024-10-08 18:39:08.137657] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.125 [2024-10-08 18:39:08.137672] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.125 [2024-10-08 18:39:08.137683] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.125 [2024-10-08 18:39:08.137701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80704 len:8 PRP1 0x0 PRP2 0x0 00:28:45.125 [2024-10-08 18:39:08.137714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.125 [2024-10-08 18:39:08.137727] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.125 [2024-10-08 18:39:08.137738] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.125 [2024-10-08 18:39:08.137749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80712 len:8 PRP1 0x0 PRP2 0x0 00:28:45.125 [2024-10-08 18:39:08.137762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.125 [2024-10-08 18:39:08.137774] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.125 [2024-10-08 18:39:08.137785] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.125 [2024-10-08 18:39:08.137796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80720 len:8 PRP1 0x0 PRP2 0x0 00:28:45.125 [2024-10-08 18:39:08.137809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.125 [2024-10-08 18:39:08.137825] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.125 [2024-10-08 18:39:08.137837] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.125 [2024-10-08 18:39:08.137848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80728 len:8 PRP1 0x0 PRP2 0x0 00:28:45.125 [2024-10-08 18:39:08.137860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.125 [2024-10-08 18:39:08.137873] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.125 [2024-10-08 18:39:08.137884] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.125 [2024-10-08 18:39:08.137895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80736 len:8 PRP1 0x0 PRP2 0x0 00:28:45.125 [2024-10-08 18:39:08.137908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.125 [2024-10-08 18:39:08.137920] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.125 [2024-10-08 18:39:08.137931] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.125 [2024-10-08 18:39:08.137942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80744 len:8 PRP1 0x0 PRP2 0x0 00:28:45.125 [2024-10-08 18:39:08.137954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.125 [2024-10-08 18:39:08.137966] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.125 [2024-10-08 18:39:08.137977] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.125 [2024-10-08 18:39:08.137988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80752 len:8 PRP1 0x0 PRP2 0x0 00:28:45.125 [2024-10-08 18:39:08.138000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.125 [2024-10-08 18:39:08.138013] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.125 [2024-10-08 18:39:08.138024] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.125 [2024-10-08 18:39:08.138035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80760 len:8 PRP1 0x0 PRP2 0x0 00:28:45.125 [2024-10-08 18:39:08.138047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.125 [2024-10-08 18:39:08.138059] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.125 [2024-10-08 18:39:08.138070] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.126 [2024-10-08 18:39:08.138081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80768 len:8 PRP1 0x0 PRP2 0x0 00:28:45.126 [2024-10-08 18:39:08.138094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.126 [2024-10-08 18:39:08.138106] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.126 [2024-10-08 18:39:08.138117] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.126 [2024-10-08 18:39:08.138128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80776 len:8 PRP1 0x0 PRP2 0x0 00:28:45.126 [2024-10-08 18:39:08.138140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.126 [2024-10-08 18:39:08.138153] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.126 [2024-10-08 18:39:08.138163] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.126 [2024-10-08 18:39:08.138174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80784 len:8 PRP1 0x0 PRP2 0x0 00:28:45.126 [2024-10-08 18:39:08.138190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.126 [2024-10-08 18:39:08.138203] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.126 [2024-10-08 18:39:08.138213] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.126 [2024-10-08 18:39:08.138224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80792 len:8 PRP1 0x0 PRP2 0x0 00:28:45.126 [2024-10-08 18:39:08.138236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:45.126 [2024-10-08 18:39:08.138249] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.126 [2024-10-08 18:39:08.138260] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.126 [2024-10-08 18:39:08.138270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80800 len:8 PRP1 0x0 PRP2 0x0 00:28:45.126 [2024-10-08 18:39:08.138282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.126 [2024-10-08 18:39:08.138302] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.126 [2024-10-08 18:39:08.138320] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.126 [2024-10-08 18:39:08.138332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80808 len:8 PRP1 0x0 PRP2 0x0 00:28:45.126 [2024-10-08 18:39:08.138345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.126 [2024-10-08 18:39:08.138358] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.126 [2024-10-08 18:39:08.138369] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.126 [2024-10-08 18:39:08.138380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80816 len:8 PRP1 0x0 PRP2 0x0 00:28:45.126 [2024-10-08 18:39:08.138393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.126 [2024-10-08 18:39:08.138453] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x14f6810 was disconnected and freed. reset controller. 00:28:45.126 [2024-10-08 18:39:08.138471] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:28:45.126 [2024-10-08 18:39:08.138487] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.126 [2024-10-08 18:39:08.141722] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.126 [2024-10-08 18:39:08.141759] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139acc0 (9): Bad file descriptor 00:28:45.126 [2024-10-08 18:39:08.263507] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
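At this point the log has recorded one full failover cycle: every queued I/O on qid:1 is completed with ABORTED - SQ DELETION, qpair 0x14f6810 is disconnected and freed, bdev_nvme fails over from 10.0.0.2:4422 to 10.0.0.2:4420, and the controller reset completes. The trace just below counts these cycles; a minimal stand-alone version of that check (assuming the bdevperf output has been captured to try.txt, as this job does) could look like:

# Each completed failover leaves one "Resetting controller successful" line in the
# captured bdevperf log; the 15-second run above produced three, so the script
# expects count == 3 (see the grep -c at host/failover.sh line 65 below).
log=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
count=$(grep -c 'Resetting controller successful' "$log")
if (( count != 3 )); then
    echo "expected 3 successful controller resets, got $count" >&2
    exit 1
fi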
00:28:45.126 8365.55 IOPS, 32.68 MiB/s [2024-10-08T16:39:13.663Z] 8379.25 IOPS, 32.73 MiB/s [2024-10-08T16:39:13.663Z] 8370.54 IOPS, 32.70 MiB/s [2024-10-08T16:39:13.663Z] 8392.21 IOPS, 32.78 MiB/s 00:28:45.126 Latency(us) 00:28:45.126 [2024-10-08T16:39:13.663Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:45.126 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:45.126 Verification LBA range: start 0x0 length 0x4000 00:28:45.126 NVMe0n1 : 15.01 8402.14 32.82 702.49 0.00 14029.15 427.80 19223.89 00:28:45.126 [2024-10-08T16:39:13.663Z] =================================================================================================================== 00:28:45.126 [2024-10-08T16:39:13.663Z] Total : 8402.14 32.82 702.49 0.00 14029.15 427.80 19223.89 00:28:45.126 Received shutdown signal, test time was about 15.000000 seconds 00:28:45.126 00:28:45.126 Latency(us) 00:28:45.126 [2024-10-08T16:39:13.663Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:45.126 [2024-10-08T16:39:13.663Z] =================================================================================================================== 00:28:45.126 [2024-10-08T16:39:13.663Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:45.126 18:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:28:45.126 18:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:28:45.126 18:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:28:45.126 18:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1294186 00:28:45.126 18:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:28:45.126 18:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1294186 /var/tmp/bdevperf.sock 00:28:45.126 18:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1294186 ']' 00:28:45.126 18:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:45.126 18:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:45.126 18:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:45.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
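The failover.sh@72/@75 lines above start a second bdevperf instance with no bdevs configured (-z) on the RPC socket /var/tmp/bdevperf.sock and then wait for it to come up. A rough, self-contained equivalent of that launch-and-wait step is sketched below; the polling loop only approximates what waitforlisten in autotest_common.sh does (the real helper allows up to max_retries=100 attempts and has more error handling), and rpc_get_methods is used here merely as a cheap liveness probe.

# Launch bdevperf in RPC-driven mode and block until its UNIX socket answers RPCs.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
sock=/var/tmp/bdevperf.sock
"$SPDK/build/examples/bdevperf" -z -r "$sock" -q 128 -o 4096 -w verify -t 1 -f &
bdevperf_pid=$!
echo "Waiting for bdevperf to listen on $sock..."
for _ in $(seq 1 100); do
    "$SPDK/scripts/rpc.py" -t 1 -s "$sock" rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
done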
00:28:45.126 18:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:45.126 18:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:28:45.126 18:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:45.126 18:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:28:45.126 18:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:45.692 [2024-10-08 18:39:13.972528] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:45.692 18:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:28:46.258 [2024-10-08 18:39:14.666757] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:28:46.258 18:39:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:28:46.823 NVMe0n1 00:28:46.823 18:39:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:28:47.389 00:28:47.389 18:39:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:28:47.954 00:28:47.954 18:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:47.954 18:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:28:48.212 18:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:48.777 18:39:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:28:52.058 18:39:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:52.058 18:39:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:28:52.058 18:39:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1294985 00:28:52.058 18:39:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:52.058 18:39:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1294985 00:28:53.431 { 00:28:53.431 "results": [ 00:28:53.431 { 00:28:53.431 "job": "NVMe0n1", 00:28:53.431 "core_mask": "0x1", 
00:28:53.431 "workload": "verify", 00:28:53.431 "status": "finished", 00:28:53.431 "verify_range": { 00:28:53.431 "start": 0, 00:28:53.431 "length": 16384 00:28:53.431 }, 00:28:53.431 "queue_depth": 128, 00:28:53.431 "io_size": 4096, 00:28:53.431 "runtime": 1.010519, 00:28:53.431 "iops": 8829.126419196473, 00:28:53.431 "mibps": 34.48877507498622, 00:28:53.431 "io_failed": 0, 00:28:53.431 "io_timeout": 0, 00:28:53.431 "avg_latency_us": 14437.05656695476, 00:28:53.431 "min_latency_us": 825.2681481481482, 00:28:53.431 "max_latency_us": 12913.01925925926 00:28:53.431 } 00:28:53.431 ], 00:28:53.431 "core_count": 1 00:28:53.431 } 00:28:53.431 18:39:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:53.431 [2024-10-08 18:39:13.331006] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:28:53.431 [2024-10-08 18:39:13.331125] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1294186 ] 00:28:53.431 [2024-10-08 18:39:13.396894] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:53.431 [2024-10-08 18:39:13.504660] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:28:53.431 [2024-10-08 18:39:16.990481] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:28:53.431 [2024-10-08 18:39:16.990561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:53.431 [2024-10-08 18:39:16.990584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.431 [2024-10-08 18:39:16.990600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:53.431 [2024-10-08 18:39:16.990613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.431 [2024-10-08 18:39:16.990627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:53.431 [2024-10-08 18:39:16.990663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.431 [2024-10-08 18:39:16.990678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:53.431 [2024-10-08 18:39:16.990692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:53.431 [2024-10-08 18:39:16.990705] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:53.431 [2024-10-08 18:39:16.990753] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:53.431 [2024-10-08 18:39:16.990784] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x863cc0 (9): Bad file descriptor 00:28:53.431 [2024-10-08 18:39:17.043835] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
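Lines @76 through @89 of failover.sh, traced above, are the core of the second test: two extra listeners are added to the subsystem, the same subsystem is attached three times under the single bdev name NVMe0 with -x failover so bdev_nvme holds alternate trids, the active 4420 path is detached to force a failover, and perform_tests drives one second of verify I/O whose results appear in the JSON block. Condensed into plain shell (same rpc.py calls and arguments as in the trace; only the loop over ports is an editorial shorthand):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"
NQN=nqn.2016-06.io.spdk:cnode1
# Extra target listeners for the failover paths.
"$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421
"$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4422
# Attach all three trids under one controller name so bdev_nvme can fail over between them.
for port in 4420 4421 4422; do
    "$RPC" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n "$NQN" -x failover
done
# Drop the active path, give the reset a moment, then run the timed workload.
"$RPC" -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$NQN"
sleep 3
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests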
00:28:53.431 Running I/O for 1 seconds... 00:28:53.431 8793.00 IOPS, 34.35 MiB/s 00:28:53.431 Latency(us) 00:28:53.431 [2024-10-08T16:39:21.968Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:53.431 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:53.431 Verification LBA range: start 0x0 length 0x4000 00:28:53.431 NVMe0n1 : 1.01 8829.13 34.49 0.00 0.00 14437.06 825.27 12913.02 00:28:53.431 [2024-10-08T16:39:21.968Z] =================================================================================================================== 00:28:53.431 [2024-10-08T16:39:21.968Z] Total : 8829.13 34.49 0.00 0.00 14437.06 825.27 12913.02 00:28:53.431 18:39:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:53.431 18:39:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:28:53.689 18:39:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:54.622 18:39:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:54.622 18:39:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:28:54.879 18:39:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:55.812 18:39:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:28:59.094 18:39:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:59.094 18:39:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:28:59.094 18:39:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1294186 00:28:59.094 18:39:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1294186 ']' 00:28:59.094 18:39:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1294186 00:28:59.094 18:39:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:28:59.094 18:39:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:59.094 18:39:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1294186 00:28:59.094 18:39:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:59.094 18:39:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:59.094 18:39:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1294186' 00:28:59.094 killing process with pid 1294186 00:28:59.094 18:39:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1294186 00:28:59.094 18:39:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1294186 00:28:59.352 18:39:27 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:28:59.352 18:39:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:59.610 18:39:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:28:59.610 18:39:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:59.610 18:39:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:28:59.610 18:39:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # nvmfcleanup 00:28:59.610 18:39:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:28:59.610 18:39:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:59.610 18:39:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:28:59.610 18:39:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:59.610 18:39:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:59.610 rmmod nvme_tcp 00:28:59.610 rmmod nvme_fabrics 00:28:59.610 rmmod nvme_keyring 00:28:59.868 18:39:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:59.868 18:39:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:28:59.868 18:39:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:28:59.868 18:39:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@515 -- # '[' -n 1291162 ']' 00:28:59.868 18:39:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # killprocess 1291162 00:28:59.868 18:39:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1291162 ']' 00:28:59.868 18:39:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1291162 00:28:59.868 18:39:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:28:59.868 18:39:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:59.868 18:39:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1291162 00:28:59.868 18:39:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:59.868 18:39:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:59.868 18:39:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1291162' 00:28:59.868 killing process with pid 1291162 00:28:59.868 18:39:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1291162 00:28:59.868 18:39:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1291162 00:29:00.125 18:39:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:00.126 18:39:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:00.126 18:39:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:00.126 18:39:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:29:00.126 18:39:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-save 00:29:00.126 18:39:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # grep -v 
SPDK_NVMF 00:29:00.126 18:39:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-restore 00:29:00.126 18:39:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:00.126 18:39:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:00.126 18:39:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:00.126 18:39:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:00.126 18:39:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:02.706 18:39:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:02.706 00:29:02.706 real 0m40.920s 00:29:02.706 user 2m26.848s 00:29:02.706 sys 0m7.463s 00:29:02.706 18:39:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:02.706 18:39:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:02.706 ************************************ 00:29:02.706 END TEST nvmf_failover 00:29:02.706 ************************************ 00:29:02.706 18:39:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:29:02.706 18:39:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:02.706 18:39:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:02.706 18:39:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.706 ************************************ 00:29:02.706 START TEST nvmf_host_discovery 00:29:02.706 ************************************ 00:29:02.706 18:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:29:02.706 * Looking for test storage... 
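[Editor's aside: before the next test begins, the failover run above tears its environment down by unloading the kernel initiator modules and dropping only the iptables rules the suite tagged with SPDK_NVMF. A standalone sketch of that same cleanup pattern — an illustration of the idea, not the harness's own nvmftestfini:]

    #!/usr/bin/env bash
    # Illustration of the teardown pattern visible in the log above.
    # Unload the NVMe/TCP initiator stack; tolerate modules that are absent or still busy.
    for mod in nvme_tcp nvme_fabrics nvme_keyring; do
        modprobe -v -r "$mod" || true
    done
    # Keep every firewall rule except the ones tagged with the SPDK_NVMF comment.
    iptables-save | grep -v SPDK_NVMF | iptables-restore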
00:29:02.706 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:02.706 18:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:02.706 18:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:29:02.706 18:39:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:02.706 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:02.706 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:02.706 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:02.706 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:02.706 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:29:02.706 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:29:02.706 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:29:02.706 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:29:02.706 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:29:02.706 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:29:02.706 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:29:02.706 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:02.706 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:29:02.706 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:29:02.706 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:02.706 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:02.706 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:29:02.706 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:29:02.706 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:02.706 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:29:02.706 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:29:02.706 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:29:02.706 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:29:02.706 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:02.706 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:29:02.706 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:29:02.706 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:02.706 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:02.706 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:29:02.706 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:02.706 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:02.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:02.706 --rc genhtml_branch_coverage=1 00:29:02.706 --rc genhtml_function_coverage=1 00:29:02.706 --rc genhtml_legend=1 00:29:02.706 --rc geninfo_all_blocks=1 00:29:02.706 --rc geninfo_unexecuted_blocks=1 00:29:02.706 00:29:02.706 ' 00:29:02.706 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:02.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:02.706 --rc genhtml_branch_coverage=1 00:29:02.706 --rc genhtml_function_coverage=1 00:29:02.706 --rc genhtml_legend=1 00:29:02.706 --rc geninfo_all_blocks=1 00:29:02.706 --rc geninfo_unexecuted_blocks=1 00:29:02.706 00:29:02.706 ' 00:29:02.706 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:02.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:02.706 --rc genhtml_branch_coverage=1 00:29:02.706 --rc genhtml_function_coverage=1 00:29:02.706 --rc genhtml_legend=1 00:29:02.706 --rc geninfo_all_blocks=1 00:29:02.706 --rc geninfo_unexecuted_blocks=1 00:29:02.706 00:29:02.706 ' 00:29:02.706 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:02.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:02.706 --rc genhtml_branch_coverage=1 00:29:02.706 --rc genhtml_function_coverage=1 00:29:02.706 --rc genhtml_legend=1 00:29:02.706 --rc geninfo_all_blocks=1 00:29:02.706 --rc geninfo_unexecuted_blocks=1 00:29:02.706 00:29:02.706 ' 00:29:02.706 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:02.706 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:29:02.706 18:39:31 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:02.706 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:02.706 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:02.706 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:02.706 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:02.706 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:02.706 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:02.706 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:02.706 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:02.706 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:02.706 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:29:02.706 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:29:02.706 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:02.706 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:02.706 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:02.706 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:02.706 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:02.706 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:29:02.706 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:02.706 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:02.706 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:02.706 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.706 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.706 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.706 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:29:02.707 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.707 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:29:02.707 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:02.707 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:02.707 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:02.707 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:02.707 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:02.707 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:02.707 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:02.707 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:02.707 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:02.707 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:02.707 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:29:02.707 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:29:02.707 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:29:02.707 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:29:02.707 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:29:02.707 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:29:02.707 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:29:02.707 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:02.707 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:02.707 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:02.707 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:02.707 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:02.707 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:02.707 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:02.707 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:02.707 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:02.707 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:02.707 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:29:02.707 18:39:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:05.992 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:05.992 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:29:05.992 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:05.992 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:05.992 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:05.992 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:05.992 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:05.992 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:29:05.992 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:05.992 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:29:05.992 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:29:05.992 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:29:05.992 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:29:05.992 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:29:05.992 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:29:05.992 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:05.992 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:05.992 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:05.992 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:05.992 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:05.992 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:05.992 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:05.992 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:05.992 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:05.992 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:05.992 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:05.992 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:05.992 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:05.992 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:05.992 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:05.993 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:05.993 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:05.993 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:05.993 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:05.993 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:29:05.993 Found 0000:84:00.0 (0x8086 - 0x159b) 00:29:05.993 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:05.993 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:05.993 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:05.993 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:05.993 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:05.993 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:05.993 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:29:05.993 Found 0000:84:00.1 (0x8086 - 0x159b) 00:29:05.993 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:05.993 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:05.993 18:39:33 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:05.993 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:05.993 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:05.993 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:05.993 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:05.993 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:05.993 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:05.993 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:05.993 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:05.993 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:05.993 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:05.993 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:05.993 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:05.993 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:29:05.993 Found net devices under 0000:84:00.0: cvl_0_0 00:29:05.993 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:05.993 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:05.993 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:05.993 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:05.993 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:05.993 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:05.993 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:05.993 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:05.993 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:29:05.993 Found net devices under 0000:84:00.1: cvl_0_1 00:29:05.993 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:05.993 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:05.993 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # is_hw=yes 00:29:05.993 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:05.993 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:05.993 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:05.993 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:05.993 
18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:05.993 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:05.993 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:05.993 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:05.993 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:05.993 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:05.993 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:05.993 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:05.993 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:05.993 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:05.993 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:05.993 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:05.993 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:05.993 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:05.993 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:05.993 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:05.993 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:05.993 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:05.993 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:05.993 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:05.993 18:39:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:05.993 18:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:05.993 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:05.993 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.317 ms 00:29:05.993 00:29:05.993 --- 10.0.0.2 ping statistics --- 00:29:05.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:05.993 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:29:05.993 18:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:05.993 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:05.993 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:29:05.993 00:29:05.993 --- 10.0.0.1 ping statistics --- 00:29:05.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:05.993 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:29:05.993 18:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:05.993 18:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # return 0 00:29:05.993 18:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:05.993 18:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:05.993 18:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:05.993 18:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:05.993 18:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:05.993 18:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:05.993 18:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:05.993 18:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:29:05.994 18:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:05.994 18:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:05.994 18:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:05.994 18:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # nvmfpid=1297995 00:29:05.994 18:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:29:05.994 18:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # waitforlisten 1297995 00:29:05.994 18:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 1297995 ']' 00:29:05.994 18:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:05.994 18:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:05.994 18:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:05.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:05.994 18:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:05.994 18:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:05.994 [2024-10-08 18:39:34.112396] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
00:29:05.994 [2024-10-08 18:39:34.112490] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:05.994 [2024-10-08 18:39:34.224703] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:05.994 [2024-10-08 18:39:34.443106] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:05.994 [2024-10-08 18:39:34.443222] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:05.994 [2024-10-08 18:39:34.443259] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:05.994 [2024-10-08 18:39:34.443289] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:05.994 [2024-10-08 18:39:34.443315] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:05.994 [2024-10-08 18:39:34.444546] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:29:06.253 18:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:06.253 18:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:29:06.253 18:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:06.253 18:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:06.253 18:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:06.253 18:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:06.253 18:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:06.253 18:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.253 18:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:06.253 [2024-10-08 18:39:34.772224] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:06.253 18:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.253 18:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:29:06.253 18:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.253 18:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:06.253 [2024-10-08 18:39:34.780930] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:29:06.253 18:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.253 18:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:29:06.253 18:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.253 18:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:06.511 null0 00:29:06.511 18:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.511 18:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:29:06.511 18:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.511 18:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:06.511 null1 00:29:06.511 18:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.511 18:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:29:06.511 18:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.511 18:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:06.511 18:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.511 18:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1298067 00:29:06.511 18:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:29:06.511 18:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1298067 /tmp/host.sock 00:29:06.511 18:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 1298067 ']' 00:29:06.511 18:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:29:06.511 18:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:06.511 18:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:29:06.511 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:29:06.511 18:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:06.511 18:39:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:06.511 [2024-10-08 18:39:34.910785] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
00:29:06.511 [2024-10-08 18:39:34.910896] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1298067 ] 00:29:06.771 [2024-10-08 18:39:35.053676] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:06.771 [2024-10-08 18:39:35.269428] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:29:07.030 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:07.030 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:29:07.030 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:07.030 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:29:07.030 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.030 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:07.030 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.030 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:29:07.030 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.030 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:07.030 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.030 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:29:07.030 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:29:07.030 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:07.030 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:07.030 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.030 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:07.030 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:07.030 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:07.030 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.030 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:29:07.030 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:29:07.030 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:07.030 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:07.030 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.030 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:29:07.030 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:07.030 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:07.030 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.288 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:29:07.288 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:29:07.288 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.288 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:07.288 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.288 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:29:07.288 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:07.288 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:07.288 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.288 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:07.288 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:07.288 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:07.288 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.288 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:29:07.288 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:29:07.288 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:07.288 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:07.288 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.288 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:07.288 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:07.288 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:07.288 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.288 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:29:07.288 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:29:07.288 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.288 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:07.288 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.288 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:29:07.288 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:07.288 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:07.288 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.288 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:07.288 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:07.288 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:07.288 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.547 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:29:07.547 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:29:07.547 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:07.547 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:07.547 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.547 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:07.547 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:07.547 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:07.547 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.547 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:29:07.547 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:07.547 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.547 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:07.547 [2024-10-08 18:39:35.884214] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:07.547 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.547 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:29:07.547 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:07.547 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:07.547 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.547 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:07.547 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:07.547 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:07.547 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.547 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:29:07.547 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:29:07.547 18:39:35 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:07.547 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:07.547 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.547 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:07.547 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:07.547 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:07.547 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.547 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:29:07.547 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:29:07.547 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:29:07.547 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:07.547 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:07.547 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:29:07.547 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:29:07.547 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:07.547 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:29:07.547 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:29:07.547 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:29:07.547 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.547 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:07.547 18:39:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.547 18:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:29:07.547 18:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:29:07.547 18:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:29:07.547 18:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:29:07.547 18:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:29:07.547 18:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.547 18:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:07.547 18:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.547 18:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:07.547 18:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:07.547 18:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:29:07.547 18:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:29:07.547 18:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:29:07.547 18:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:29:07.547 18:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:07.547 18:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:07.547 18:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.547 18:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:07.547 18:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:07.547 18:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:07.547 18:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.807 18:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:29:07.807 18:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:29:08.066 [2024-10-08 18:39:36.523370] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:08.066 [2024-10-08 18:39:36.523443] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:08.066 [2024-10-08 18:39:36.523502] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:08.326 
[2024-10-08 18:39:36.612883] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:29:08.326 [2024-10-08 18:39:36.676092] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:08.326 [2024-10-08 18:39:36.676153] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:08.584 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:29:08.584 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:29:08.584 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:29:08.584 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:08.584 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.584 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:08.584 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:08.584 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:08.584 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:08.584 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.844 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:08.844 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:29:08.844 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:29:08.844 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:29:08.844 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:29:08.844 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:29:08.844 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:29:08.844 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:29:08.844 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:08.844 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:08.844 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.844 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:08.844 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:08.844 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:08.844 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.844 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 
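The get_subsystem_names / get_bdev_list matches above are just the discovery.sh helpers (host/discovery.sh@55 and @59 in the traces) polling the host application's RPC socket and normalizing the output. A minimal stand-alone sketch of the same two queries, with SPDK's scripts/rpc.py standing in for the test's rpc_cmd wrapper, would be:

# Sketch only: reproduce the host-side checks seen above against /tmp/host.sock.
# Output is normalized the same way as in discovery.sh (name extraction, sort, xargs)
# so the "nvme0" and "nvme0n1" string comparisons behave the same way.
get_subsystem_names() {
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}
get_bdev_list() {
    ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}
[[ "$(get_subsystem_names)" == "nvme0" && "$(get_bdev_list)" == "nvme0n1" ]]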
00:29:08.844 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:29:08.844 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:29:08.844 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:29:08.844 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:29:08.844 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:29:08.844 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:29:08.844 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:29:08.844 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:08.844 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:08.844 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.844 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:08.844 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:29:08.844 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:29:08.844 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.844 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:29:08.844 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:29:08.844 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:29:08.844 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:29:08.844 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:08.844 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:08.844 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:29:08.844 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:29:08.844 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:08.844 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:29:08.844 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:29:08.844 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:29:08.844 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.844 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:08.844 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.844 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:29:08.844 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:29:08.844 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:29:08.844 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:29:08.844 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:29:08.844 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.844 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:08.844 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.844 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:08.844 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:08.844 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:29:08.844 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:29:08.844 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:29:08.844 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:29:08.844 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:08.844 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:08.844 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.844 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:08.844 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:08.844 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:09.103 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.103 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:09.103 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:29:09.103 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:29:09.103 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:29:09.103 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:09.103 18:39:37 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:09.103 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:29:09.103 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:29:09.103 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:09.103 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:29:09.103 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:29:09.103 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:29:09.103 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.103 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:09.103 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.103 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:29:09.103 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:29:09.103 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:29:09.103 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:29:09.103 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:29:09.103 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.103 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:09.103 [2024-10-08 18:39:37.590369] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:09.103 [2024-10-08 18:39:37.591760] bdev_nvme.c:7135:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:29:09.103 [2024-10-08 18:39:37.591854] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:09.103 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.103 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:09.103 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:09.103 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:29:09.103 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:29:09.103 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:29:09.103 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:29:09.103 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_controllers 00:29:09.103 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.103 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:09.103 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:09.103 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:09.103 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:09.103 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.363 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:09.363 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:29:09.363 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:09.363 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:09.363 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:29:09.363 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:29:09.363 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:29:09.363 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:29:09.363 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:09.363 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.363 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:09.363 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:09.363 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:09.363 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:09.363 [2024-10-08 18:39:37.678818] bdev_nvme.c:7077:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:29:09.363 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.363 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:09.363 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:29:09.363 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:29:09.363 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:29:09.363 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:29:09.363 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:29:09.363 18:39:37 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:29:09.363 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:29:09.363 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:09.363 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.363 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:09.363 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:29:09.363 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:09.363 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:29:09.363 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.363 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:29:09.363 18:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:29:09.622 [2024-10-08 18:39:37.939831] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:09.622 [2024-10-08 18:39:37.939891] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:09.622 [2024-10-08 18:39:37.939915] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:10.561 18:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:29:10.561 18:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:29:10.561 18:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:29:10.561 18:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:10.561 18:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:10.561 18:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.561 18:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:10.561 18:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:29:10.561 18:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:29:10.561 18:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.561 18:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:29:10.561 18:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:29:10.561 18:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:29:10.561 18:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:29:10.561 18:39:38 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:10.561 18:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:10.561 18:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:29:10.561 18:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:29:10.561 18:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:10.561 18:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:29:10.561 18:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:29:10.561 18:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:29:10.561 18:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.561 18:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:10.561 18:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.561 18:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:29:10.561 18:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:29:10.561 18:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:29:10.561 18:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:29:10.561 18:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:10.561 18:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.561 18:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:10.561 [2024-10-08 18:39:38.923887] bdev_nvme.c:7135:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:29:10.561 [2024-10-08 18:39:38.923962] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:10.561 [2024-10-08 18:39:38.926107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.561 [2024-10-08 18:39:38.926186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.561 [2024-10-08 18:39:38.926226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.561 [2024-10-08 18:39:38.926261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.561 [2024-10-08 18:39:38.926309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.561 [2024-10-08 18:39:38.926352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.561 [2024-10-08 18:39:38.926385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.561 [2024-10-08 18:39:38.926418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.561 [2024-10-08 18:39:38.926450] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c5cd0 is same with the state(6) to be set 00:29:10.561 18:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.561 18:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:10.561 18:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:10.561 18:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:29:10.561 18:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:29:10.561 18:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:29:10.561 18:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:29:10.561 18:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:10.561 18:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:10.561 18:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.561 18:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:10.561 18:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:10.561 18:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:10.561 [2024-10-08 18:39:38.936420] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c5cd0 (9): Bad file descriptor 00:29:10.561 18:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.561 [2024-10-08 18:39:38.946486] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:10.561 [2024-10-08 18:39:38.946829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.561 [2024-10-08 18:39:38.946863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c5cd0 with addr=10.0.0.2, port=4420 00:29:10.561 [2024-10-08 18:39:38.946881] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c5cd0 is same with the state(6) to be set 00:29:10.561 [2024-10-08 18:39:38.946906] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c5cd0 (9): Bad file descriptor 00:29:10.561 [2024-10-08 18:39:38.946977] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:10.561 [2024-10-08 18:39:38.947021] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:10.561 [2024-10-08 18:39:38.947059] 
nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:10.561 [2024-10-08 18:39:38.947110] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.561 [2024-10-08 18:39:38.956637] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:10.561 [2024-10-08 18:39:38.956857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.561 [2024-10-08 18:39:38.956889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c5cd0 with addr=10.0.0.2, port=4420 00:29:10.561 [2024-10-08 18:39:38.956913] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c5cd0 is same with the state(6) to be set 00:29:10.561 [2024-10-08 18:39:38.956966] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c5cd0 (9): Bad file descriptor 00:29:10.561 [2024-10-08 18:39:38.957018] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:10.561 [2024-10-08 18:39:38.957054] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:10.561 [2024-10-08 18:39:38.957087] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:10.561 [2024-10-08 18:39:38.957166] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.561 [2024-10-08 18:39:38.966799] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:10.561 [2024-10-08 18:39:38.967126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.561 [2024-10-08 18:39:38.967196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c5cd0 with addr=10.0.0.2, port=4420 00:29:10.561 [2024-10-08 18:39:38.967237] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c5cd0 is same with the state(6) to be set 00:29:10.561 [2024-10-08 18:39:38.967292] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c5cd0 (9): Bad file descriptor 00:29:10.561 [2024-10-08 18:39:38.967408] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:10.561 [2024-10-08 18:39:38.967458] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:10.561 [2024-10-08 18:39:38.967493] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:10.561 [2024-10-08 18:39:38.967543] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
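The repeated connect() errno 111 and "Resetting controller failed" records above are the expected fallout of the nvmf_subsystem_remove_listener call issued a few lines earlier: with the 4420 listener gone, the host keeps retrying the stale path until the next discovery log page prunes it (the ":4420 not found" record further down). In isolation, that target-side step looks roughly like the following sketch, again with scripts/rpc.py standing in for the test's rpc_cmd wrapper:

# Sketch only: remove the first TCP listener so the host's 10.0.0.2:4420 path goes away.
# The discovery log page then advertises only port 4421 and bdev_nvme drops the stale path.
./scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420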
00:29:10.561 [2024-10-08 18:39:38.976947] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:10.561 [2024-10-08 18:39:38.977281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.561 [2024-10-08 18:39:38.977351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c5cd0 with addr=10.0.0.2, port=4420 00:29:10.561 [2024-10-08 18:39:38.977393] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c5cd0 is same with the state(6) to be set 00:29:10.561 [2024-10-08 18:39:38.977448] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c5cd0 (9): Bad file descriptor 00:29:10.561 [2024-10-08 18:39:38.977528] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:10.561 [2024-10-08 18:39:38.977571] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:10.561 [2024-10-08 18:39:38.977606] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:10.561 [2024-10-08 18:39:38.977675] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.561 18:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:10.561 18:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:29:10.561 18:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:10.561 18:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:10.561 18:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:29:10.561 18:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:29:10.561 18:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:29:10.561 18:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:29:10.561 18:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:10.561 18:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:10.561 18:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.561 18:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:10.561 18:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:10.561 18:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:10.561 [2024-10-08 18:39:38.987093] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:10.561 [2024-10-08 18:39:38.987374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.561 [2024-10-08 18:39:38.987443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c5cd0 with addr=10.0.0.2, port=4420 00:29:10.561 [2024-10-08 18:39:38.987484] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x13c5cd0 is same with the state(6) to be set 00:29:10.561 [2024-10-08 18:39:38.987539] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c5cd0 (9): Bad file descriptor 00:29:10.561 [2024-10-08 18:39:38.987623] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:10.561 [2024-10-08 18:39:38.987690] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:10.561 [2024-10-08 18:39:38.987726] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:10.561 [2024-10-08 18:39:38.987774] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.561 [2024-10-08 18:39:38.997258] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:10.561 [2024-10-08 18:39:38.997576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.561 [2024-10-08 18:39:38.997646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c5cd0 with addr=10.0.0.2, port=4420 00:29:10.561 [2024-10-08 18:39:38.997711] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c5cd0 is same with the state(6) to be set 00:29:10.561 [2024-10-08 18:39:38.997767] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c5cd0 (9): Bad file descriptor 00:29:10.561 [2024-10-08 18:39:38.997817] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:10.561 [2024-10-08 18:39:38.997852] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:10.561 [2024-10-08 18:39:38.997885] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:10.561 [2024-10-08 18:39:38.997934] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.561 [2024-10-08 18:39:39.007403] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:10.561 [2024-10-08 18:39:39.007696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.561 [2024-10-08 18:39:39.007767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c5cd0 with addr=10.0.0.2, port=4420 00:29:10.561 [2024-10-08 18:39:39.007807] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c5cd0 is same with the state(6) to be set 00:29:10.561 [2024-10-08 18:39:39.007861] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c5cd0 (9): Bad file descriptor 00:29:10.561 [2024-10-08 18:39:39.007912] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:10.561 [2024-10-08 18:39:39.007946] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:10.562 [2024-10-08 18:39:39.007993] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:10.562 [2024-10-08 18:39:39.008042] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
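The is_notification_count_eq checks that recur through this section all reduce to one query: get_notification_count (host/discovery.sh@74-75 in the traces) fetches the notify events starting from the last recorded notify_id and counts them, then compares the count to the expected value. A stand-alone sketch of the same check at this point in the log, where notify_id has advanced to 2 and no further events are expected:

# Sketch only: mirror the get_notification_count / is_notification_count_eq idiom seen above,
# with scripts/rpc.py standing in for the test's rpc_cmd wrapper.
notify_id=2
expected_count=0
notification_count=$(./scripts/rpc.py -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
(( notification_count == expected_count )) && echo "notification count OK"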
00:29:10.562 [2024-10-08 18:39:39.010314] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:29:10.562 [2024-10-08 18:39:39.010380] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:10.562 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.562 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:10.562 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:29:10.562 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:29:10.562 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:29:10.562 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:29:10.562 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:29:10.562 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:29:10.562 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:29:10.562 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:10.562 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.562 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:10.562 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:29:10.562 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:10.562 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:29:10.562 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.822 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:29:10.822 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:29:10.822 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:29:10.822 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:29:10.822 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:10.822 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:10.822 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:29:10.822 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:29:10.822 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count 
'&&' '((notification_count' == 'expected_count))' 00:29:10.822 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:29:10.822 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:29:10.822 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.822 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:10.822 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:29:10.822 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.822 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:29:10.822 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:29:10.822 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:29:10.822 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:29:10.822 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:29:10.822 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.822 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:10.822 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.822 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:29:10.822 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:29:10.822 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:29:10.822 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:29:10.822 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:29:10.822 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:29:10.822 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:10.822 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:10.822 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.822 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:10.822 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:10.823 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:10.823 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.823 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:29:10.823 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:29:10.823 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:29:10.823 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:29:10.823 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:29:10.823 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:29:10.823 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:29:10.823 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:29:10.823 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:10.823 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:10.823 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.823 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:10.823 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:10.823 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:10.823 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.823 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:29:10.823 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:29:10.823 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:29:10.823 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:29:10.823 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:10.823 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:10.823 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:29:10.823 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:29:10.823 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:10.823 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:29:10.823 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:29:10.823 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:29:10.823 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.823 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:10.823 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.083 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:29:11.083 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:29:11.083 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:29:11.083 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:29:11.083 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:11.083 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.083 18:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:12.023 [2024-10-08 18:39:40.476120] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:12.023 [2024-10-08 18:39:40.476184] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:12.023 [2024-10-08 18:39:40.476239] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:12.283 [2024-10-08 18:39:40.604809] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:29:12.283 [2024-10-08 18:39:40.712525] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:12.283 [2024-10-08 18:39:40.712623] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:12.283 18:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.283 18:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:12.283 18:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:29:12.283 18:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:12.283 18:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:12.283 18:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:12.283 18:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:12.283 18:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:12.283 18:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:29:12.283 18:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.283 18:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:12.283 request: 00:29:12.283 { 00:29:12.283 "name": "nvme", 00:29:12.283 "trtype": "tcp", 00:29:12.283 "traddr": "10.0.0.2", 00:29:12.283 "adrfam": "ipv4", 00:29:12.283 "trsvcid": "8009", 00:29:12.283 "hostnqn": "nqn.2021-12.io.spdk:test", 00:29:12.283 "wait_for_attach": true, 00:29:12.283 "method": "bdev_nvme_start_discovery", 00:29:12.283 "req_id": 1 00:29:12.283 } 00:29:12.283 Got JSON-RPC error response 00:29:12.283 response: 00:29:12.283 { 00:29:12.283 "code": -17, 00:29:12.283 "message": "File exists" 00:29:12.283 } 00:29:12.283 18:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:12.283 18:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:29:12.283 18:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:12.283 18:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:12.283 18:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:12.283 18:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:29:12.283 18:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:12.283 18:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:29:12.283 18:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.283 18:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:12.283 18:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:29:12.283 18:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:29:12.283 18:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.283 18:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:29:12.543 18:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:29:12.543 18:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:12.543 18:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.543 18:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:12.543 18:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:12.543 18:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:12.543 18:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:12.543 18:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.543 18:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:12.543 18:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:29:12.543 18:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:29:12.543 18:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:12.543 18:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:12.543 18:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:12.543 18:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:12.543 18:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:12.543 18:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:12.543 18:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.543 18:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:12.543 request: 00:29:12.543 { 00:29:12.543 "name": "nvme_second", 00:29:12.543 "trtype": "tcp", 00:29:12.543 "traddr": "10.0.0.2", 00:29:12.543 "adrfam": "ipv4", 00:29:12.543 "trsvcid": "8009", 00:29:12.543 "hostnqn": "nqn.2021-12.io.spdk:test", 00:29:12.543 "wait_for_attach": true, 00:29:12.543 "method": "bdev_nvme_start_discovery", 00:29:12.543 "req_id": 1 00:29:12.543 } 00:29:12.543 Got JSON-RPC error response 00:29:12.543 response: 00:29:12.543 { 00:29:12.543 "code": -17, 00:29:12.543 "message": "File exists" 00:29:12.543 } 00:29:12.543 18:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:12.543 18:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:29:12.543 18:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:12.543 18:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:12.543 18:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:12.543 18:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:29:12.543 18:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:12.543 18:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.543 18:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:12.543 18:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:29:12.543 18:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:29:12.543 18:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:29:12.543 18:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.543 18:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:29:12.544 18:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:29:12.544 18:39:40 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:12.544 18:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.544 18:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:12.544 18:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:12.544 18:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:12.544 18:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:12.544 18:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.544 18:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:12.544 18:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:29:12.544 18:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:29:12.544 18:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:29:12.544 18:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:12.544 18:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:12.544 18:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:12.544 18:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:12.544 18:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:29:12.544 18:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.544 18:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:13.563 [2024-10-08 18:39:42.076672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.563 [2024-10-08 18:39:42.076747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13ede70 with addr=10.0.0.2, port=8010 00:29:13.563 [2024-10-08 18:39:42.076780] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:13.563 [2024-10-08 18:39:42.076797] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:13.563 [2024-10-08 18:39:42.076812] bdev_nvme.c:7221:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:29:14.942 [2024-10-08 18:39:43.079161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.942 [2024-10-08 18:39:43.079275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13ede70 with addr=10.0.0.2, port=8010 00:29:14.942 [2024-10-08 18:39:43.079342] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:14.942 [2024-10-08 18:39:43.079377] nvme.c: 831:nvme_probe_internal: 
*ERROR*: NVMe ctrlr scan failed 00:29:14.942 [2024-10-08 18:39:43.079408] bdev_nvme.c:7221:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:29:15.880 [2024-10-08 18:39:44.081215] bdev_nvme.c:7196:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:29:15.880 request: 00:29:15.880 { 00:29:15.880 "name": "nvme_second", 00:29:15.880 "trtype": "tcp", 00:29:15.880 "traddr": "10.0.0.2", 00:29:15.880 "adrfam": "ipv4", 00:29:15.880 "trsvcid": "8010", 00:29:15.880 "hostnqn": "nqn.2021-12.io.spdk:test", 00:29:15.880 "wait_for_attach": false, 00:29:15.880 "attach_timeout_ms": 3000, 00:29:15.880 "method": "bdev_nvme_start_discovery", 00:29:15.880 "req_id": 1 00:29:15.880 } 00:29:15.880 Got JSON-RPC error response 00:29:15.880 response: 00:29:15.880 { 00:29:15.880 "code": -110, 00:29:15.880 "message": "Connection timed out" 00:29:15.880 } 00:29:15.880 18:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:15.880 18:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:29:15.880 18:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:15.880 18:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:15.880 18:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:15.880 18:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:29:15.880 18:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:15.880 18:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:29:15.880 18:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.880 18:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:29:15.880 18:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:15.880 18:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:29:15.880 18:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.880 18:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:29:15.880 18:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:29:15.880 18:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1298067 00:29:15.880 18:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:29:15.880 18:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:15.880 18:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:29:15.880 18:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:15.880 18:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:29:15.880 18:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:15.880 18:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:15.880 rmmod nvme_tcp 00:29:15.880 rmmod nvme_fabrics 00:29:15.880 rmmod nvme_keyring 00:29:15.880 18:39:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:15.880 18:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:29:15.880 18:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:29:15.880 18:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@515 -- # '[' -n 1297995 ']' 00:29:15.880 18:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # killprocess 1297995 00:29:15.880 18:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 1297995 ']' 00:29:15.880 18:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 1297995 00:29:15.880 18:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:29:15.880 18:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:15.880 18:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1297995 00:29:15.880 18:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:15.880 18:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:15.880 18:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1297995' 00:29:15.880 killing process with pid 1297995 00:29:15.880 18:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 1297995 00:29:15.880 18:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 1297995 00:29:16.141 18:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:16.141 18:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:16.141 18:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:16.141 18:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:29:16.141 18:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:16.141 18:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-save 00:29:16.141 18:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:29:16.141 18:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:16.141 18:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:16.141 18:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:16.141 18:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:16.141 18:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:18.681 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:18.681 00:29:18.681 real 0m15.843s 00:29:18.681 user 0m23.104s 00:29:18.681 sys 0m4.181s 00:29:18.681 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:18.681 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:18.681 
************************************ 00:29:18.681 END TEST nvmf_host_discovery 00:29:18.681 ************************************ 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.682 ************************************ 00:29:18.682 START TEST nvmf_host_multipath_status 00:29:18.682 ************************************ 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:29:18.682 * Looking for test storage... 00:29:18.682 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lcov --version 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:18.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.682 --rc genhtml_branch_coverage=1 00:29:18.682 --rc genhtml_function_coverage=1 00:29:18.682 --rc genhtml_legend=1 00:29:18.682 --rc geninfo_all_blocks=1 00:29:18.682 --rc geninfo_unexecuted_blocks=1 00:29:18.682 00:29:18.682 ' 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:18.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.682 --rc genhtml_branch_coverage=1 00:29:18.682 --rc genhtml_function_coverage=1 00:29:18.682 --rc genhtml_legend=1 00:29:18.682 --rc geninfo_all_blocks=1 00:29:18.682 --rc geninfo_unexecuted_blocks=1 00:29:18.682 00:29:18.682 ' 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:18.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.682 --rc genhtml_branch_coverage=1 00:29:18.682 --rc genhtml_function_coverage=1 00:29:18.682 --rc genhtml_legend=1 00:29:18.682 --rc geninfo_all_blocks=1 00:29:18.682 --rc geninfo_unexecuted_blocks=1 00:29:18.682 00:29:18.682 ' 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:18.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.682 --rc genhtml_branch_coverage=1 00:29:18.682 --rc genhtml_function_coverage=1 00:29:18.682 --rc genhtml_legend=1 00:29:18.682 --rc geninfo_all_blocks=1 00:29:18.682 --rc geninfo_unexecuted_blocks=1 00:29:18.682 00:29:18.682 ' 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
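The cmp_versions trace above is how the harness decides whether the installed lcov predates 2.0 and therefore needs the extra --rc branch/function coverage flags it exports next. A condensed sketch of that comparison, not the verbatim common.sh helper (variable names mirror the trace; the function name and body are abbreviated):

  # version_lt A B -> exit 0 when A < B, comparing version fields split on ".-:"
  version_lt() {
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<<"$1"
    IFS='.-:' read -ra ver2 <<<"$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
      (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # e.g. 1 < 2 for "1.15" vs "2"
      (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1   # equal is not "less than"
  }
  version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "pre-2.0 lcov: extra coverage flags needed"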
00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:18.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:18.682 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:18.683 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:18.683 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:18.683 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:18.683 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:18.683 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:18.683 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:29:18.683 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:18.683 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:29:18.683 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:29:18.683 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:18.683 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:18.683 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:18.683 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:18.683 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:18.683 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:18.683 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:18.683 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:18.683 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:18.683 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:18.683 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:29:18.683 18:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:29:21.214 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:21.214 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:29:21.214 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:21.214 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:21.214 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:21.214 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:21.214 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:21.214 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:29:21.214 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:21.214 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:29:21.214 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:29:21.214 18:39:49 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:29:21.214 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:29:21.214 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:29:21.214 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:29:21.214 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:21.214 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:21.214 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:21.214 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:21.214 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:21.214 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:21.214 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:21.214 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:21.214 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:21.214 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:21.214 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:21.214 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:21.214 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:21.214 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:29:21.215 Found 0000:84:00.0 (0x8086 - 0x159b) 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
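At this point the first of the two E810 ports (vendor:device 8086:159b at 0000:84:00.0) has been matched against the supported-device tables; the same match for 0000:84:00.1 and the sysfs lookup that maps each PCI function to its kernel net device are traced below. A minimal sketch of that lookup, assuming the PCI addresses from this run (the names cvl_0_0 and cvl_0_1 are what this host reports):

  # Resolve the kernel net interfaces behind the two E810 functions found here.
  for pci in 0000:84:00.0 0000:84:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one entry per netdev directory
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep the names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
  done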
00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:29:21.215 Found 0000:84:00.1 (0x8086 - 0x159b) 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:29:21.215 Found net devices under 0000:84:00.0: cvl_0_0 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: 
cvl_0_1' 00:29:21.215 Found net devices under 0000:84:00.1: cvl_0_1 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # is_hw=yes 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:21.215 18:39:49 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:21.215 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:21.215 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:29:21.215 00:29:21.215 --- 10.0.0.2 ping statistics --- 00:29:21.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:21.215 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:21.215 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:21.215 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:29:21.215 00:29:21.215 --- 10.0.0.1 ping statistics --- 00:29:21.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:21.215 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # return 0 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # nvmfpid=1301324 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # waitforlisten 1301324 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 1301324 ']' 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:21.215 18:39:49 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:21.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:21.215 18:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:29:21.474 [2024-10-08 18:39:49.839764] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:29:21.474 [2024-10-08 18:39:49.839934] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:21.474 [2024-10-08 18:39:50.002594] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:21.732 [2024-10-08 18:39:50.211725] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:21.732 [2024-10-08 18:39:50.211845] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:21.732 [2024-10-08 18:39:50.211882] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:21.732 [2024-10-08 18:39:50.211911] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:21.732 [2024-10-08 18:39:50.211936] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:21.732 [2024-10-08 18:39:50.213754] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:29:21.732 [2024-10-08 18:39:50.213770] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:29:21.991 18:39:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:21.991 18:39:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:29:21.991 18:39:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:21.991 18:39:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:21.991 18:39:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:29:21.991 18:39:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:21.991 18:39:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1301324 00:29:21.991 18:39:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:22.559 [2024-10-08 18:39:50.810888] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:22.559 18:39:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:29:23.128 Malloc0 00:29:23.128 18:39:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:29:23.385 18:39:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:23.951 18:39:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:24.210 [2024-10-08 18:39:52.570880] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:24.210 18:39:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:24.469 [2024-10-08 18:39:52.912081] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:24.469 18:39:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1301736 00:29:24.470 18:39:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:29:24.470 18:39:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:24.470 18:39:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1301736 /var/tmp/bdevperf.sock 00:29:24.470 18:39:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 1301736 ']' 00:29:24.470 18:39:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:24.470 18:39:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:24.470 18:39:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:24.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
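The RPCs traced above build the two-listener target that the multipath test exercises, and the attach_controller calls traced below give bdevperf its two paths to the same namespace. Pulled together in order, with the long Jenkins workspace path shortened to scripts/rpc.py and the bdevperf RPC socket taken from the log:

  rpc_py=scripts/rpc.py                           # target-side RPC (path shortened)
  $rpc_py nvmf_create_transport -t tcp -o -u 8192
  $rpc_py bdev_malloc_create 64 512 -b Malloc0
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  # bdevperf (started with -z -r /var/tmp/bdevperf.sock) then attaches the same
  # subsystem over both listeners, which is what produces the multipath Nvme0n1 bdev:
  $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
  $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
      -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10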
00:29:24.470 18:39:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:24.470 18:39:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:29:25.409 18:39:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:25.409 18:39:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:29:25.409 18:39:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:29:25.977 18:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:29:26.917 Nvme0n1 00:29:26.917 18:39:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:29:27.488 Nvme0n1 00:29:27.748 18:39:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:29:27.748 18:39:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:29:29.653 18:39:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:29:29.653 18:39:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:29:30.221 18:39:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:29:30.480 18:39:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:29:31.858 18:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:29:31.858 18:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:29:31.858 18:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:31.858 18:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:29:31.858 18:40:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:31.858 18:40:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:29:31.858 18:40:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:31.858 18:40:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:29:32.425 18:40:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:32.425 18:40:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:29:32.425 18:40:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:32.425 18:40:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:29:32.991 18:40:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:32.991 18:40:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:29:32.991 18:40:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:32.991 18:40:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:29:33.250 18:40:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:33.250 18:40:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:29:33.250 18:40:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:33.250 18:40:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:29:33.507 18:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:33.507 18:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:29:33.507 18:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:33.507 18:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:29:34.073 18:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:34.073 18:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:29:34.073 18:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
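The set_ANA_state steps traced in this part of the run are two target-side RPCs, one per listener; the call just above covers port 4420 and the matching call for 4421 follows immediately below. A condensed sketch of the helper under that reading (helper name from the trace, rpc.py path shortened):

  # set_ANA_state <state-for-4420> <state-for-4421>, e.g. set_ANA_state non_optimized optimized
  set_ANA_state() {
    scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4421 -n "$2"
  }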
00:29:34.641 18:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:29:35.210 18:40:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:29:36.206 18:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:29:36.206 18:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:29:36.206 18:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:36.206 18:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:29:36.464 18:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:36.464 18:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:29:36.464 18:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:36.464 18:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:29:36.722 18:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:36.722 18:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:29:36.722 18:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:36.722 18:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:29:36.979 18:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:36.979 18:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:29:36.979 18:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:36.979 18:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:29:37.238 18:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:37.238 18:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:29:37.238 18:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
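Each port_status check in this trace pairs a bdev_nvme_get_io_paths call, like the one on the line above, with a jq filter keyed on the listener port and the field being asserted (current, connected, or accessible). A condensed sketch of that check (helper name from the trace; the comparison is simplified to a plain string test, and the rpc.py path is shortened):

  # port_status <trsvcid> <field> <expected>, e.g. port_status 4420 current true
  port_status() {
    local got
    got=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
          jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
    [[ $got == "$3" ]]
  }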
00:29:37.238 18:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:29:37.806 18:40:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:37.806 18:40:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:29:37.806 18:40:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:37.806 18:40:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:29:38.065 18:40:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:38.065 18:40:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:29:38.065 18:40:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:29:38.325 18:40:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:29:38.894 18:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:29:39.832 18:40:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:29:39.832 18:40:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:29:39.832 18:40:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:39.832 18:40:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:29:40.398 18:40:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:40.398 18:40:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:29:40.398 18:40:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:40.398 18:40:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:29:40.656 18:40:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:40.656 18:40:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:29:40.656 18:40:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:40.656 18:40:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:29:41.223 18:40:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:41.223 18:40:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:29:41.223 18:40:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:41.223 18:40:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:29:41.790 18:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:41.790 18:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:29:41.790 18:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:41.790 18:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:29:42.048 18:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:42.048 18:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:29:42.048 18:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:42.048 18:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:29:42.615 18:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:42.615 18:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:29:42.615 18:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:29:43.183 18:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:29:43.443 18:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:29:44.380 18:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:29:44.380 18:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:29:44.380 18:40:12 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:44.380 18:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:29:44.948 18:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:44.948 18:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:29:44.948 18:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:44.948 18:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:29:45.516 18:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:45.516 18:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:29:45.516 18:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:45.516 18:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:29:46.081 18:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:46.081 18:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:29:46.081 18:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:46.081 18:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:29:46.339 18:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:46.339 18:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:29:46.339 18:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:46.339 18:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:29:46.597 18:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:46.597 18:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:29:46.597 18:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:46.597 18:40:15 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:29:46.855 18:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:46.855 18:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:29:47.113 18:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:29:47.371 18:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:29:47.630 18:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:29:49.011 18:40:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:29:49.012 18:40:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:29:49.012 18:40:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:49.012 18:40:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:29:49.012 18:40:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:49.012 18:40:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:29:49.012 18:40:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:49.012 18:40:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:29:49.579 18:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:49.579 18:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:29:49.579 18:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:49.579 18:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:29:50.519 18:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:50.519 18:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:29:50.519 18:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:50.519 18:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:29:50.519 18:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:50.519 18:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:29:50.778 18:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:50.778 18:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:29:51.347 18:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:51.347 18:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:29:51.347 18:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:51.347 18:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:29:51.607 18:40:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:51.607 18:40:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:29:51.607 18:40:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:29:52.176 18:40:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:29:53.115 18:40:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:29:54.052 18:40:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:29:54.052 18:40:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:29:54.052 18:40:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:54.052 18:40:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:29:54.309 18:40:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:54.309 18:40:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:29:54.309 18:40:22 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:54.309 18:40:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:29:54.567 18:40:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:54.567 18:40:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:29:54.567 18:40:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:54.567 18:40:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:29:54.826 18:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:54.826 18:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:29:54.826 18:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:54.826 18:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:29:55.395 18:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:55.395 18:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:29:55.395 18:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:55.395 18:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:29:55.962 18:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:55.962 18:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:29:55.962 18:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:55.962 18:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:29:56.220 18:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:56.220 18:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:29:56.478 18:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:29:56.478 18:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:29:57.044 18:40:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:29:57.610 18:40:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:29:58.544 18:40:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:29:58.544 18:40:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:29:58.544 18:40:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:58.544 18:40:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:29:59.111 18:40:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:59.111 18:40:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:29:59.111 18:40:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:59.111 18:40:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:29:59.369 18:40:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:59.369 18:40:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:29:59.369 18:40:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:59.369 18:40:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:29:59.627 18:40:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:59.627 18:40:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:29:59.627 18:40:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:59.627 18:40:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:29:59.899 18:40:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:59.899 18:40:28 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:29:59.899 18:40:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:59.899 18:40:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:00.470 18:40:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:00.470 18:40:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:00.470 18:40:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:00.470 18:40:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:00.728 18:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:00.728 18:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:30:00.729 18:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:01.297 18:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:02.232 18:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:30:03.168 18:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:30:03.168 18:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:03.168 18:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:03.168 18:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:03.733 18:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:03.733 18:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:03.733 18:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:03.734 18:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:04.299 18:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:04.299 18:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:04.299 18:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:04.299 18:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:04.864 18:40:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:04.865 18:40:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:04.865 18:40:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:04.865 18:40:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:05.433 18:40:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:05.433 18:40:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:05.433 18:40:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:05.433 18:40:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:06.005 18:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:06.005 18:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:06.005 18:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:06.005 18:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:06.625 18:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:06.625 18:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:30:06.625 18:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:06.883 18:40:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:30:07.141 18:40:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
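For reference, the set_ANA_state and check_status steps that repeat through this run can be sketched the same way, reusing rpc_py and port_status from the note above. The argument order is inferred from the traced calls at host/multipath_status.sh@59-@73, so treat this as an illustration rather than the actual test source:

  set_ANA_state() {
      # $1 = ANA state for the 4420 listener, $2 = ANA state for the 4421 listener
      # (optimized, non_optimized or inaccessible, as cycled through above).
      "$rpc_py" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.2 -s 4420 -n "$1"
      "$rpc_py" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.2 -s 4421 -n "$2"
  }
  check_status() {
      # Six expected flags, in the order the trace checks them:
      # current(4420) current(4421) connected(4420) connected(4421) accessible(4420) accessible(4421)
      port_status 4420 current "$1" && port_status 4421 current "$2" &&
      port_status 4420 connected "$3" && port_status 4421 connected "$4" &&
      port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
  }

The bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active call seen at sh@116 switches the multipath policy before the later checks, which is why the subsequent check_status at sh@121 expects current to be true on both 4420 and 4421.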
00:30:08.077 18:40:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:30:08.077 18:40:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:08.077 18:40:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:08.077 18:40:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:08.644 18:40:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:08.644 18:40:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:08.644 18:40:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:08.644 18:40:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:08.902 18:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:08.902 18:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:08.902 18:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:08.902 18:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:09.160 18:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:09.160 18:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:09.160 18:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:09.160 18:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:09.726 18:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:09.726 18:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:09.726 18:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:09.726 18:40:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:09.985 18:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:09.985 18:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:09.985 18:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:09.985 18:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:10.552 18:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:10.552 18:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:30:10.552 18:40:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:11.118 18:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:30:11.376 18:40:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:30:12.310 18:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:30:12.310 18:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:12.310 18:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:12.310 18:40:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:12.567 18:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:12.567 18:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:12.567 18:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:12.567 18:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:13.132 18:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:13.132 18:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:13.132 18:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:13.132 18:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:13.390 18:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:30:13.390 18:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:13.390 18:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:13.390 18:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:13.648 18:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:13.648 18:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:13.648 18:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:13.648 18:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:13.906 18:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:13.906 18:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:30:13.906 18:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:13.906 18:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:14.472 18:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:14.472 18:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1301736 00:30:14.472 18:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 1301736 ']' 00:30:14.472 18:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 1301736 00:30:14.472 18:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:30:14.472 18:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:14.472 18:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1301736 00:30:14.472 18:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:30:14.472 18:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:30:14.472 18:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1301736' 00:30:14.472 killing process with pid 1301736 00:30:14.472 18:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 1301736 00:30:14.472 18:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 1301736 00:30:14.472 { 00:30:14.472 "results": [ 00:30:14.472 { 00:30:14.472 "job": "Nvme0n1", 
00:30:14.472 "core_mask": "0x4", 00:30:14.472 "workload": "verify", 00:30:14.472 "status": "terminated", 00:30:14.472 "verify_range": { 00:30:14.472 "start": 0, 00:30:14.472 "length": 16384 00:30:14.472 }, 00:30:14.472 "queue_depth": 128, 00:30:14.472 "io_size": 4096, 00:30:14.472 "runtime": 46.508123, 00:30:14.472 "iops": 4259.728994008208, 00:30:14.472 "mibps": 16.639566382844563, 00:30:14.472 "io_failed": 0, 00:30:14.472 "io_timeout": 0, 00:30:14.472 "avg_latency_us": 29998.96595799159, 00:30:14.472 "min_latency_us": 218.45333333333335, 00:30:14.472 "max_latency_us": 6089508.02962963 00:30:14.472 } 00:30:14.472 ], 00:30:14.472 "core_count": 1 00:30:14.472 } 00:30:14.742 18:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1301736 00:30:14.742 18:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:14.742 [2024-10-08 18:39:53.053007] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:30:14.742 [2024-10-08 18:39:53.053215] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1301736 ] 00:30:14.742 [2024-10-08 18:39:53.191052] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:14.742 [2024-10-08 18:39:53.388046] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:30:14.742 Running I/O for 90 seconds... 00:30:14.742 4250.00 IOPS, 16.60 MiB/s [2024-10-08T16:40:43.279Z] 4365.00 IOPS, 17.05 MiB/s [2024-10-08T16:40:43.279Z] 4348.33 IOPS, 16.99 MiB/s [2024-10-08T16:40:43.279Z] 4380.25 IOPS, 17.11 MiB/s [2024-10-08T16:40:43.279Z] 4412.40 IOPS, 17.24 MiB/s [2024-10-08T16:40:43.279Z] 4449.50 IOPS, 17.38 MiB/s [2024-10-08T16:40:43.279Z] 4471.57 IOPS, 17.47 MiB/s [2024-10-08T16:40:43.279Z] 4475.62 IOPS, 17.48 MiB/s [2024-10-08T16:40:43.279Z] 4494.56 IOPS, 17.56 MiB/s [2024-10-08T16:40:43.279Z] 4507.70 IOPS, 17.61 MiB/s [2024-10-08T16:40:43.279Z] 4505.36 IOPS, 17.60 MiB/s [2024-10-08T16:40:43.279Z] 4489.08 IOPS, 17.54 MiB/s [2024-10-08T16:40:43.279Z] 4499.54 IOPS, 17.58 MiB/s [2024-10-08T16:40:43.279Z] 4508.86 IOPS, 17.61 MiB/s [2024-10-08T16:40:43.279Z] 4500.67 IOPS, 17.58 MiB/s [2024-10-08T16:40:43.279Z] 4498.94 IOPS, 17.57 MiB/s [2024-10-08T16:40:43.279Z] 4491.71 IOPS, 17.55 MiB/s [2024-10-08T16:40:43.279Z] 4500.06 IOPS, 17.58 MiB/s [2024-10-08T16:40:43.279Z] 4508.79 IOPS, 17.61 MiB/s [2024-10-08T16:40:43.279Z] [2024-10-08 18:40:15.757863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:48536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.742 [2024-10-08 18:40:15.757943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:30:14.742 [2024-10-08 18:40:15.758039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:48552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.742 [2024-10-08 18:40:15.758086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:14.742 [2024-10-08 18:40:15.758145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:48560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:14.742 [2024-10-08 18:40:15.758186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:14.742 [2024-10-08 18:40:15.758241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:48568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.742 [2024-10-08 18:40:15.758282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:14.742 [2024-10-08 18:40:15.758337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:48576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.742 [2024-10-08 18:40:15.758377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:14.742 [2024-10-08 18:40:15.758432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:48584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.742 [2024-10-08 18:40:15.758473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:14.742 [2024-10-08 18:40:15.758529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:48592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.742 [2024-10-08 18:40:15.758569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:14.742 [2024-10-08 18:40:15.758624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:48600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.742 [2024-10-08 18:40:15.758682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:14.742 [2024-10-08 18:40:15.760846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:48608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.742 [2024-10-08 18:40:15.760873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:14.742 [2024-10-08 18:40:15.760912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:48616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.742 [2024-10-08 18:40:15.760955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:14.743 [2024-10-08 18:40:15.761014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:48624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.743 [2024-10-08 18:40:15.761055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:14.743 [2024-10-08 18:40:15.761110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:48632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.743 [2024-10-08 18:40:15.761151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:14.743 [2024-10-08 18:40:15.761207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 
lba:48640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.743 [2024-10-08 18:40:15.761246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:14.743 [2024-10-08 18:40:15.761302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:48648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.743 [2024-10-08 18:40:15.761342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:14.743 [2024-10-08 18:40:15.761397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:48656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.743 [2024-10-08 18:40:15.761437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:14.743 [2024-10-08 18:40:15.761492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:48664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.743 [2024-10-08 18:40:15.761532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:14.743 [2024-10-08 18:40:15.763101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:48672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.743 [2024-10-08 18:40:15.763164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:14.743 [2024-10-08 18:40:15.763234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:48680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.743 [2024-10-08 18:40:15.763278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:14.743 [2024-10-08 18:40:15.763335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:48688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.743 [2024-10-08 18:40:15.763375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:14.743 [2024-10-08 18:40:15.763430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:48696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.743 [2024-10-08 18:40:15.763469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:14.743 [2024-10-08 18:40:15.763524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:48704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.743 [2024-10-08 18:40:15.763564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:14.743 [2024-10-08 18:40:15.763642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:48712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.743 [2024-10-08 18:40:15.763713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:14.743 [2024-10-08 18:40:15.763740] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.743 [2024-10-08 18:40:15.763758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:14.743 [2024-10-08 18:40:15.763783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:48728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.743 [2024-10-08 18:40:15.763800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:14.743 [2024-10-08 18:40:15.763824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:48736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.743 [2024-10-08 18:40:15.763842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:14.743 [2024-10-08 18:40:15.763866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:48744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.743 [2024-10-08 18:40:15.763884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:14.743 [2024-10-08 18:40:15.763917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:48752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.743 [2024-10-08 18:40:15.763935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:14.743 [2024-10-08 18:40:15.763959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:48760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.743 [2024-10-08 18:40:15.763977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:14.743 [2024-10-08 18:40:15.764001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:48768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.743 [2024-10-08 18:40:15.764019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:14.743 [2024-10-08 18:40:15.764089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:48776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.743 [2024-10-08 18:40:15.764128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:14.743 [2024-10-08 18:40:15.764182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:48784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.743 [2024-10-08 18:40:15.764221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:14.743 [2024-10-08 18:40:15.764277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:48792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.743 [2024-10-08 18:40:15.764316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007e p:0 m:0 dnr:0 
00:30:14.743 [2024-10-08 18:40:15.764370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:48800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.743 [2024-10-08 18:40:15.764408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:14.743 [2024-10-08 18:40:15.764462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:48808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.743 [2024-10-08 18:40:15.764513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:14.743 [2024-10-08 18:40:15.764571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:48816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.743 [2024-10-08 18:40:15.764611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:14.743 [2024-10-08 18:40:15.764684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:48824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.743 [2024-10-08 18:40:15.764722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:14.743 [2024-10-08 18:40:15.764748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:48832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.743 [2024-10-08 18:40:15.764766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:30:14.743 [2024-10-08 18:40:15.764790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:48840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.743 [2024-10-08 18:40:15.764808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:14.743 [2024-10-08 18:40:15.764832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:48848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.743 [2024-10-08 18:40:15.764850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:14.743 [2024-10-08 18:40:15.764874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:48856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.743 [2024-10-08 18:40:15.764892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:30:14.743 [2024-10-08 18:40:15.764938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:48864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.743 [2024-10-08 18:40:15.764979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:30:14.743 [2024-10-08 18:40:15.765035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:48872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.743 [2024-10-08 18:40:15.765075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:123 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:30:14.743 [2024-10-08 18:40:15.765130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:48880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.743 [2024-10-08 18:40:15.765168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:30:14.743 [2024-10-08 18:40:15.765222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:48888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.743 [2024-10-08 18:40:15.765261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:30:14.743 [2024-10-08 18:40:15.765317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:48896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.743 [2024-10-08 18:40:15.765356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:14.743 [2024-10-08 18:40:15.765410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:48904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.743 [2024-10-08 18:40:15.765460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:30:14.743 [2024-10-08 18:40:15.765518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:48912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.743 [2024-10-08 18:40:15.765559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:30:14.743 [2024-10-08 18:40:15.765614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:48920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.743 [2024-10-08 18:40:15.765668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:30:14.743 [2024-10-08 18:40:15.765720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:48928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.743 [2024-10-08 18:40:15.765738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:30:14.743 [2024-10-08 18:40:15.765762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:48936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.743 [2024-10-08 18:40:15.765780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:30:14.744 [2024-10-08 18:40:15.765805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:48944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.744 [2024-10-08 18:40:15.765822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:30:14.744 [2024-10-08 18:40:15.765847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:48952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.744 [2024-10-08 18:40:15.765864] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:14.744 [2024-10-08 18:40:15.765889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:48960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.744 [2024-10-08 18:40:15.765907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:30:14.744 [2024-10-08 18:40:15.765957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:48968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.744 [2024-10-08 18:40:15.765997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:30:14.744 [2024-10-08 18:40:15.766053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:48976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.744 [2024-10-08 18:40:15.766092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:30:14.744 [2024-10-08 18:40:15.766148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:48984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.744 [2024-10-08 18:40:15.766186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:30:14.744 [2024-10-08 18:40:15.766242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:48544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.744 [2024-10-08 18:40:15.766282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:30:14.744 [2024-10-08 18:40:15.767361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:48992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.744 [2024-10-08 18:40:15.767419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:30:14.744 [2024-10-08 18:40:15.767499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:49000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.744 [2024-10-08 18:40:15.767544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:30:14.744 [2024-10-08 18:40:15.767600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:49008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.744 [2024-10-08 18:40:15.767640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:30:14.744 [2024-10-08 18:40:15.767720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:49016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.744 [2024-10-08 18:40:15.767740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:14.744 [2024-10-08 18:40:15.767764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:49024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:14.744 [2024-10-08 18:40:15.767782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:30:14.744 [2024-10-08 18:40:15.767806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:49032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.744 [2024-10-08 18:40:15.767824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:30:14.744 [2024-10-08 18:40:15.767849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:49040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.744 [2024-10-08 18:40:15.767866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:14.744 [2024-10-08 18:40:15.767890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:49048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.744 [2024-10-08 18:40:15.767907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:30:14.744 [2024-10-08 18:40:15.767932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:49056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.744 [2024-10-08 18:40:15.767950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:14.744 [2024-10-08 18:40:15.767974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:49064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.744 [2024-10-08 18:40:15.768019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:14.744 [2024-10-08 18:40:15.768075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:49072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.744 [2024-10-08 18:40:15.768115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:14.744 [2024-10-08 18:40:15.768170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:49080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.744 [2024-10-08 18:40:15.768210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:14.744 [2024-10-08 18:40:15.768265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:49088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.744 [2024-10-08 18:40:15.768304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:14.744 [2024-10-08 18:40:15.768372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:49096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.744 [2024-10-08 18:40:15.768412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:14.744 [2024-10-08 18:40:15.768470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 
lba:49104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.744 [2024-10-08 18:40:15.768508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:14.744 [2024-10-08 18:40:15.768563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:49112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.744 [2024-10-08 18:40:15.768601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:14.744 [2024-10-08 18:40:15.768673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:49120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.744 [2024-10-08 18:40:15.768716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:14.744 [2024-10-08 18:40:15.768760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:49128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.744 [2024-10-08 18:40:15.768778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:14.744 [2024-10-08 18:40:15.768802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:49136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.744 [2024-10-08 18:40:15.768820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:30:14.744 [2024-10-08 18:40:15.768844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:49144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.744 [2024-10-08 18:40:15.768862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:14.744 [2024-10-08 18:40:15.768886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:49152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.744 [2024-10-08 18:40:15.768903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:14.744 [2024-10-08 18:40:15.768928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:49160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.744 [2024-10-08 18:40:15.768945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:14.744 [2024-10-08 18:40:15.768988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:49168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.744 [2024-10-08 18:40:15.769030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:14.744 [2024-10-08 18:40:15.769088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:49176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.744 [2024-10-08 18:40:15.769128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:14.744 [2024-10-08 18:40:15.769183] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:49184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.744 [2024-10-08 18:40:15.769221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:30:14.744 [2024-10-08 18:40:15.769275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:49192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.744 [2024-10-08 18:40:15.769324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:30:14.744 [2024-10-08 18:40:15.769381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:49200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.744 [2024-10-08 18:40:15.769423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:14.744 [2024-10-08 18:40:15.769479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:49208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.744 [2024-10-08 18:40:15.769518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:14.744 [2024-10-08 18:40:15.769572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:49216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.744 [2024-10-08 18:40:15.769612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:30:14.744 [2024-10-08 18:40:15.769685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:49224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.744 [2024-10-08 18:40:15.769730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:30:14.744 [2024-10-08 18:40:15.769756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:49232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.744 [2024-10-08 18:40:15.769773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:30:14.744 [2024-10-08 18:40:15.769798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:49240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.744 [2024-10-08 18:40:15.769816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:30:14.744 [2024-10-08 18:40:15.769842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:49248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.745 [2024-10-08 18:40:15.769860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:30:14.745 [2024-10-08 18:40:15.769885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:49256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.745 [2024-10-08 18:40:15.769910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 
00:30:14.745 [2024-10-08 18:40:15.769936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:49264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.745 [2024-10-08 18:40:15.769986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:14.745 [2024-10-08 18:40:15.770043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:49272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.745 [2024-10-08 18:40:15.770082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:14.745 [2024-10-08 18:40:15.770136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:49280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.745 [2024-10-08 18:40:15.770177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:14.745 [2024-10-08 18:40:15.770232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:49288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.745 [2024-10-08 18:40:15.770291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:14.745 [2024-10-08 18:40:15.770348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:49296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.745 [2024-10-08 18:40:15.770387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:14.745 [2024-10-08 18:40:15.770441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:49304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.745 [2024-10-08 18:40:15.770481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:14.745 [2024-10-08 18:40:15.770536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:49312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.745 [2024-10-08 18:40:15.770576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:14.745 [2024-10-08 18:40:15.770633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:49320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.745 [2024-10-08 18:40:15.770709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:14.745 [2024-10-08 18:40:15.770737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:49328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.745 [2024-10-08 18:40:15.770756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:14.745 [2024-10-08 18:40:15.770780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:49336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.745 [2024-10-08 18:40:15.770797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:87 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:14.745 [2024-10-08 18:40:15.770821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:49344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.745 [2024-10-08 18:40:15.770840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:14.745 [2024-10-08 18:40:15.770864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:49352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.745 [2024-10-08 18:40:15.770881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:14.745 [2024-10-08 18:40:15.770905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:49360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.745 [2024-10-08 18:40:15.770923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:14.745 [2024-10-08 18:40:15.770968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:49368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.745 [2024-10-08 18:40:15.771009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:14.745 [2024-10-08 18:40:15.771067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:49376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.745 [2024-10-08 18:40:15.771106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:14.745 [2024-10-08 18:40:15.771162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:49384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.745 [2024-10-08 18:40:15.771202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:14.745 [2024-10-08 18:40:15.771272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:49392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.745 [2024-10-08 18:40:15.771315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:30:14.745 [2024-10-08 18:40:15.771381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:49400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.745 [2024-10-08 18:40:15.771422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:14.745 [2024-10-08 18:40:15.771479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:49408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.745 [2024-10-08 18:40:15.771520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:14.745 [2024-10-08 18:40:15.771576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:49416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.745 [2024-10-08 18:40:15.771614] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:14.745 [2024-10-08 18:40:15.771683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:49424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.745 [2024-10-08 18:40:15.771733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:30:14.745 [2024-10-08 18:40:15.771759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:49432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.745 [2024-10-08 18:40:15.771777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:14.745 [2024-10-08 18:40:15.771802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:49440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.745 [2024-10-08 18:40:15.771819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:14.745 [2024-10-08 18:40:15.771843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:49448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.745 [2024-10-08 18:40:15.771861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:14.745 [2024-10-08 18:40:15.771886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:49456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.745 [2024-10-08 18:40:15.771903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:14.745 [2024-10-08 18:40:15.771928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:49464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.745 [2024-10-08 18:40:15.771963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:14.745 [2024-10-08 18:40:15.772022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:49472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.745 [2024-10-08 18:40:15.772062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:14.745 [2024-10-08 18:40:15.772117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:49480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.745 [2024-10-08 18:40:15.772156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:14.745 [2024-10-08 18:40:15.772229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:49488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.745 [2024-10-08 18:40:15.772270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:14.745 [2024-10-08 18:40:15.772326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:49496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.745 
[2024-10-08 18:40:15.772366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:14.745 [2024-10-08 18:40:15.772420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:49504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.745 [2024-10-08 18:40:15.772459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:14.745 [2024-10-08 18:40:15.772513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:49512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.745 [2024-10-08 18:40:15.772552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:14.745 [2024-10-08 18:40:15.772611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:49520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.745 [2024-10-08 18:40:15.772666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:14.745 [2024-10-08 18:40:15.772720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:49528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.745 [2024-10-08 18:40:15.772738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:14.745 [2024-10-08 18:40:15.772763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:49536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.745 [2024-10-08 18:40:15.772781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:14.745 [2024-10-08 18:40:15.772805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:49544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.745 [2024-10-08 18:40:15.772823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:14.745 [2024-10-08 18:40:15.772847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:49552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.745 [2024-10-08 18:40:15.772865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:30:14.745 [2024-10-08 18:40:15.772889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:48536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.745 [2024-10-08 18:40:15.772906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:30:14.745 [2024-10-08 18:40:15.772931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:48552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.746 [2024-10-08 18:40:15.772949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:14.746 [2024-10-08 18:40:15.774703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:48560 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.746 [2024-10-08 18:40:15.774730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:14.746 [2024-10-08 18:40:15.774761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:48568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.746 [2024-10-08 18:40:15.774786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:14.746 [2024-10-08 18:40:15.774812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:48576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.746 [2024-10-08 18:40:15.774830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:14.746 [2024-10-08 18:40:15.774855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:48584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.746 [2024-10-08 18:40:15.774874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:14.746 [2024-10-08 18:40:15.774899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:48592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.746 [2024-10-08 18:40:15.774917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:14.746 [2024-10-08 18:40:15.774980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:48600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.746 [2024-10-08 18:40:15.775021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:14.746 [2024-10-08 18:40:15.775077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:48608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.746 [2024-10-08 18:40:15.775116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:14.746 [2024-10-08 18:40:15.775171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:48616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.746 [2024-10-08 18:40:15.775212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:14.746 [2024-10-08 18:40:15.775269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:48624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.746 [2024-10-08 18:40:15.775308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:14.746 [2024-10-08 18:40:15.775365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:48632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.746 [2024-10-08 18:40:15.775405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:14.746 [2024-10-08 18:40:15.775460] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:48640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.746 [2024-10-08 18:40:15.775500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:14.746 [2024-10-08 18:40:15.775556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:48648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.746 [2024-10-08 18:40:15.775595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:14.746 [2024-10-08 18:40:15.775677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:48656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.746 [2024-10-08 18:40:15.775731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:14.746 [2024-10-08 18:40:15.775759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:48664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.746 [2024-10-08 18:40:15.775782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:14.746 [2024-10-08 18:40:15.775809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:48672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.746 [2024-10-08 18:40:15.775827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:14.746 [2024-10-08 18:40:15.775851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:48680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.746 [2024-10-08 18:40:15.775869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:14.746 [2024-10-08 18:40:15.775894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:48688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.746 [2024-10-08 18:40:15.775912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:14.746 [2024-10-08 18:40:15.775962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:48696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.746 [2024-10-08 18:40:15.776003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:14.746 [2024-10-08 18:40:15.776059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:48704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.746 [2024-10-08 18:40:15.776099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:14.746 [2024-10-08 18:40:15.776154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:48712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.746 [2024-10-08 18:40:15.776193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 
00:30:14.746 [2024-10-08 18:40:15.776248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:48720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.746 [2024-10-08 18:40:15.776286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:14.746 [2024-10-08 18:40:15.776341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:48728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.746 [2024-10-08 18:40:15.776380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:14.746 [2024-10-08 18:40:15.776435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:48736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.746 [2024-10-08 18:40:15.776474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:14.746 [2024-10-08 18:40:15.776531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:48744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.746 [2024-10-08 18:40:15.776570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:14.746 [2024-10-08 18:40:15.776625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:48752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.746 [2024-10-08 18:40:15.776681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:14.746 [2024-10-08 18:40:15.776742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:48760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.746 [2024-10-08 18:40:15.776760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:14.746 [2024-10-08 18:40:15.776790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:48768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.746 [2024-10-08 18:40:15.776808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:14.746 [2024-10-08 18:40:15.776833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:48776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.746 [2024-10-08 18:40:15.776850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:14.746 [2024-10-08 18:40:15.776875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:48784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.746 [2024-10-08 18:40:15.776892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:14.746 [2024-10-08 18:40:15.776916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:48792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.746 [2024-10-08 18:40:15.776934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:14.746 [2024-10-08 18:40:15.776983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:48800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.746 [2024-10-08 18:40:15.777023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:14.746 [2024-10-08 18:40:15.777079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:48808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.746 [2024-10-08 18:40:15.777120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:14.746 [2024-10-08 18:40:15.777175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:48816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.746 [2024-10-08 18:40:15.777214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:14.746 [2024-10-08 18:40:15.777268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:48824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.746 [2024-10-08 18:40:15.777308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:14.746 [2024-10-08 18:40:15.777364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:48832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.747 [2024-10-08 18:40:15.777403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:30:14.747 [2024-10-08 18:40:15.777457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:48840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.747 [2024-10-08 18:40:15.777496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:14.747 [2024-10-08 18:40:15.777550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:48848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.747 [2024-10-08 18:40:15.777589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:14.747 [2024-10-08 18:40:15.777645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:48856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.747 [2024-10-08 18:40:15.777708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:30:14.747 [2024-10-08 18:40:15.777740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:48864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.747 [2024-10-08 18:40:15.777758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:30:14.747 [2024-10-08 18:40:15.777783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:48872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.747 [2024-10-08 18:40:15.777801] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:30:14.747 [2024-10-08 18:40:15.777826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:48880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.747 [2024-10-08 18:40:15.777843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:30:14.747 [2024-10-08 18:40:15.777868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:48888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.747 [2024-10-08 18:40:15.777886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:30:14.747 [2024-10-08 18:40:15.777911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:48896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.747 [2024-10-08 18:40:15.777928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:14.747 [2024-10-08 18:40:15.777953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:48904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.747 [2024-10-08 18:40:15.778009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:30:14.747 [2024-10-08 18:40:15.778065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:48912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.747 [2024-10-08 18:40:15.778106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:30:14.747 [2024-10-08 18:40:15.778161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:48920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.747 [2024-10-08 18:40:15.778200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:30:14.747 [2024-10-08 18:40:15.778254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:48928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.747 [2024-10-08 18:40:15.778293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:30:14.747 [2024-10-08 18:40:15.778349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.747 [2024-10-08 18:40:15.778389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:30:14.747 [2024-10-08 18:40:15.778444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:48944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.747 [2024-10-08 18:40:15.778483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:30:14.747 [2024-10-08 18:40:15.778538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:48952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:14.747 [2024-10-08 18:40:15.778576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:14.747 [2024-10-08 18:40:15.778632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:48960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.747 [2024-10-08 18:40:15.778701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:30:14.747 [2024-10-08 18:40:15.778755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:48968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.747 [2024-10-08 18:40:15.778774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:30:14.747 [2024-10-08 18:40:15.778799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:48976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.747 [2024-10-08 18:40:15.778817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:30:14.747 [2024-10-08 18:40:15.778842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:48984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.747 [2024-10-08 18:40:15.778861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:30:14.747 [2024-10-08 18:40:15.780132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:48544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.747 [2024-10-08 18:40:15.780191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:30:14.747 [2024-10-08 18:40:15.780260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:48992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.747 [2024-10-08 18:40:15.780313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:30:14.747 [2024-10-08 18:40:15.780369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:49000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.747 [2024-10-08 18:40:15.780409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:30:14.747 [2024-10-08 18:40:15.780467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:49008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.747 [2024-10-08 18:40:15.780506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:30:14.747 [2024-10-08 18:40:15.780560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:49016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.747 [2024-10-08 18:40:15.780598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:14.747 [2024-10-08 18:40:15.780669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 
lba:49024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.747 [2024-10-08 18:40:15.780712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:30:14.747 [2024-10-08 18:40:15.780755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:49032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.747 [2024-10-08 18:40:15.780773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:30:14.747 [2024-10-08 18:40:15.780797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:49040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.747 [2024-10-08 18:40:15.780815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:14.747 [2024-10-08 18:40:15.780838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:49048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.747 [2024-10-08 18:40:15.780861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:30:14.747 [2024-10-08 18:40:15.780887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:49056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.747 [2024-10-08 18:40:15.780905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:14.747 [2024-10-08 18:40:15.780929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:49064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.747 [2024-10-08 18:40:15.780946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:14.747 [2024-10-08 18:40:15.780988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:49072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.747 [2024-10-08 18:40:15.781029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:14.747 [2024-10-08 18:40:15.781085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:49080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.747 [2024-10-08 18:40:15.781124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:14.747 [2024-10-08 18:40:15.781178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:49088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.747 [2024-10-08 18:40:15.781217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:14.747 [2024-10-08 18:40:15.781272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:49096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.747 [2024-10-08 18:40:15.781310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:14.747 [2024-10-08 18:40:15.781364] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:49104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:14.747 [2024-10-08 18:40:15.781403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:30:14.747 [2024-10-08 18:40:15.781457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:49112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:14.747 [2024-10-08 18:40:15.781496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
[... repeated nvme_io_qpair_print_command WRITE/READ notices (sqid:1, nsid:1, lba 48536-49552, len:8) each followed by a spdk_nvme_print_completion ASYMMETRIC ACCESS INACCESSIBLE (03/02) completion, timestamps 2024-10-08 18:40:15.781 through 18:40:15.800 ...]
00:30:14.752 [2024-10-08 18:40:15.800935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:48632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:14.752 [2024-10-08 18:40:15.800992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:30:14.752 [2024-10-08 18:40:15.801049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:48640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.752 [2024-10-08 18:40:15.801087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:14.752 [2024-10-08 18:40:15.801141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:48648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.752 [2024-10-08 18:40:15.801180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:14.752 [2024-10-08 18:40:15.801236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:48656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.752 [2024-10-08 18:40:15.801275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:14.752 [2024-10-08 18:40:15.801330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:48664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.753 [2024-10-08 18:40:15.801368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:14.753 [2024-10-08 18:40:15.801422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:48672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.753 [2024-10-08 18:40:15.801460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:14.753 [2024-10-08 18:40:15.801527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:48680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.753 [2024-10-08 18:40:15.801567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:14.753 [2024-10-08 18:40:15.801621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:48688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.753 [2024-10-08 18:40:15.801681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:14.753 [2024-10-08 18:40:15.801727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:48696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.753 [2024-10-08 18:40:15.801745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:14.753 [2024-10-08 18:40:15.801770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:48704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.753 [2024-10-08 18:40:15.801787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:14.753 [2024-10-08 18:40:15.801811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:48712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.753 [2024-10-08 18:40:15.801829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:11 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:14.753 [2024-10-08 18:40:15.801854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:48720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.753 [2024-10-08 18:40:15.801871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:14.753 [2024-10-08 18:40:15.801896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:48728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.753 [2024-10-08 18:40:15.801913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:14.753 [2024-10-08 18:40:15.801937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:48736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.753 [2024-10-08 18:40:15.801979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:14.753 [2024-10-08 18:40:15.802037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:48744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.753 [2024-10-08 18:40:15.802077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:14.753 [2024-10-08 18:40:15.802143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:48752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.753 [2024-10-08 18:40:15.802193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:14.753 [2024-10-08 18:40:15.802247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:48760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.753 [2024-10-08 18:40:15.802286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:14.753 [2024-10-08 18:40:15.802342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:48768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.753 [2024-10-08 18:40:15.802381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:14.753 [2024-10-08 18:40:15.802447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:48776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.753 [2024-10-08 18:40:15.802487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:14.753 [2024-10-08 18:40:15.802543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:48784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.753 [2024-10-08 18:40:15.802584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:14.753 [2024-10-08 18:40:15.802639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:48792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.753 [2024-10-08 18:40:15.802704] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:14.753 [2024-10-08 18:40:15.802750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:48800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.753 [2024-10-08 18:40:15.802769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:14.753 [2024-10-08 18:40:15.802793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:48808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.753 [2024-10-08 18:40:15.802811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:14.753 [2024-10-08 18:40:15.802836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:48816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.753 [2024-10-08 18:40:15.802853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:14.753 [2024-10-08 18:40:15.802878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:48824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.753 [2024-10-08 18:40:15.802896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:14.753 [2024-10-08 18:40:15.802920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:48832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.753 [2024-10-08 18:40:15.802938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:30:14.753 [2024-10-08 18:40:15.803000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:48840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.753 [2024-10-08 18:40:15.803040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:14.753 [2024-10-08 18:40:15.803095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:48848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.753 [2024-10-08 18:40:15.803135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:14.753 [2024-10-08 18:40:15.803191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:48856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.753 [2024-10-08 18:40:15.803230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:30:14.753 [2024-10-08 18:40:15.803286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:48864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.753 [2024-10-08 18:40:15.803325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:30:14.753 [2024-10-08 18:40:15.803381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:48872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:14.753 [2024-10-08 18:40:15.803431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:30:14.753 [2024-10-08 18:40:15.803487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:48880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.753 [2024-10-08 18:40:15.803526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:30:14.753 [2024-10-08 18:40:15.803580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:48888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.753 [2024-10-08 18:40:15.803619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:30:14.753 [2024-10-08 18:40:15.803691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:48896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.753 [2024-10-08 18:40:15.803734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:14.753 [2024-10-08 18:40:15.803760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:48904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.753 [2024-10-08 18:40:15.803778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:30:14.753 [2024-10-08 18:40:15.803802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:48912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.753 [2024-10-08 18:40:15.803819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:30:14.753 [2024-10-08 18:40:15.803844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.753 [2024-10-08 18:40:15.803862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:30:14.753 [2024-10-08 18:40:15.803886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:48928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.753 [2024-10-08 18:40:15.803903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:30:14.753 [2024-10-08 18:40:15.803949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:48936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.753 [2024-10-08 18:40:15.803990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:30:14.753 [2024-10-08 18:40:15.804045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:48944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.753 [2024-10-08 18:40:15.804084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:30:14.753 [2024-10-08 18:40:15.804138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 
lba:48952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.753 [2024-10-08 18:40:15.804177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:14.753 [2024-10-08 18:40:15.804232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:48960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.753 [2024-10-08 18:40:15.804271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:30:14.753 [2024-10-08 18:40:15.804327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:48968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.753 [2024-10-08 18:40:15.804377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:30:14.753 [2024-10-08 18:40:15.805708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:48976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.753 [2024-10-08 18:40:15.805755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:30:14.753 [2024-10-08 18:40:15.805785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:48984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.753 [2024-10-08 18:40:15.805805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:30:14.754 [2024-10-08 18:40:15.805830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:48544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.754 [2024-10-08 18:40:15.805847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:30:14.754 [2024-10-08 18:40:15.805872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:48992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.754 [2024-10-08 18:40:15.805889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:30:14.754 [2024-10-08 18:40:15.805913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:49000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.754 [2024-10-08 18:40:15.805964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:30:14.754 [2024-10-08 18:40:15.806033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:49008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.754 [2024-10-08 18:40:15.806081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:30:14.754 [2024-10-08 18:40:15.806138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:49016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.754 [2024-10-08 18:40:15.806178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:14.754 [2024-10-08 18:40:15.806245] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:49024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.754 [2024-10-08 18:40:15.806263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:30:14.754 [2024-10-08 18:40:15.806289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:49032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.754 [2024-10-08 18:40:15.806306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:30:14.754 [2024-10-08 18:40:15.806331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:49040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.754 [2024-10-08 18:40:15.806348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:14.754 [2024-10-08 18:40:15.806372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:49048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.754 [2024-10-08 18:40:15.806389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:30:14.754 [2024-10-08 18:40:15.806414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:49056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.754 [2024-10-08 18:40:15.806431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:14.754 [2024-10-08 18:40:15.806461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:49064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.754 [2024-10-08 18:40:15.806479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:14.754 [2024-10-08 18:40:15.806504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:49072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.754 [2024-10-08 18:40:15.806521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:14.754 [2024-10-08 18:40:15.806587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:49080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.754 [2024-10-08 18:40:15.806627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:14.754 [2024-10-08 18:40:15.806710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:49088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.754 [2024-10-08 18:40:15.806731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:14.754 [2024-10-08 18:40:15.806756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:49096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.754 [2024-10-08 18:40:15.806774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:14.754 
[2024-10-08 18:40:15.806798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:49104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.754 [2024-10-08 18:40:15.806815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:14.754 [2024-10-08 18:40:15.806839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:49112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.754 [2024-10-08 18:40:15.806856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:14.754 [2024-10-08 18:40:15.806880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:49120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.754 [2024-10-08 18:40:15.806898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:14.754 [2024-10-08 18:40:15.806923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:49128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.754 [2024-10-08 18:40:15.806940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:14.754 [2024-10-08 18:40:15.806966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:49136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.754 [2024-10-08 18:40:15.807004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:30:14.754 [2024-10-08 18:40:15.807035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:49144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.754 [2024-10-08 18:40:15.807091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:14.754 [2024-10-08 18:40:15.807148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:49152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.754 [2024-10-08 18:40:15.807188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:14.754 [2024-10-08 18:40:15.807253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:49160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.754 [2024-10-08 18:40:15.807294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:14.754 [2024-10-08 18:40:15.807348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:49168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.754 [2024-10-08 18:40:15.807387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:14.754 [2024-10-08 18:40:15.807442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:49176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.754 [2024-10-08 18:40:15.807481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:45 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:14.754 [2024-10-08 18:40:15.807536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:49184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.754 [2024-10-08 18:40:15.807575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:30:14.754 [2024-10-08 18:40:15.807629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:49192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.754 [2024-10-08 18:40:15.807702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:30:14.754 [2024-10-08 18:40:15.807729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:49200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.754 [2024-10-08 18:40:15.807747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:14.754 [2024-10-08 18:40:15.807771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:49208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.754 [2024-10-08 18:40:15.807788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:14.754 [2024-10-08 18:40:15.807813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:49216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.754 [2024-10-08 18:40:15.807831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:30:14.754 [2024-10-08 18:40:15.807855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:49224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.754 [2024-10-08 18:40:15.807873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:30:14.754 [2024-10-08 18:40:15.807897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:49232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.754 [2024-10-08 18:40:15.807914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:30:14.754 [2024-10-08 18:40:15.807962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:49240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.754 [2024-10-08 18:40:15.808002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:30:14.754 [2024-10-08 18:40:15.808058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:49248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.754 [2024-10-08 18:40:15.808096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:30:14.754 [2024-10-08 18:40:15.808150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:49256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.754 [2024-10-08 18:40:15.808199] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:14.754 [2024-10-08 18:40:15.808256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:49264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.754 [2024-10-08 18:40:15.808295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:14.754 [2024-10-08 18:40:15.808349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:49272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.754 [2024-10-08 18:40:15.808388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:14.754 [2024-10-08 18:40:15.808443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:49280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.754 [2024-10-08 18:40:15.808482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:14.754 [2024-10-08 18:40:15.808535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:49288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.754 [2024-10-08 18:40:15.808574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:14.754 [2024-10-08 18:40:15.808628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:49296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.754 [2024-10-08 18:40:15.808698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:14.754 [2024-10-08 18:40:15.808727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:49304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.754 [2024-10-08 18:40:15.808744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:14.755 [2024-10-08 18:40:15.808769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:49312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.755 [2024-10-08 18:40:15.808786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:14.755 [2024-10-08 18:40:15.808810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:49320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.755 [2024-10-08 18:40:15.808828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:14.755 [2024-10-08 18:40:15.808853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:49328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.755 [2024-10-08 18:40:15.808870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:14.755 [2024-10-08 18:40:15.808894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:49336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.755 [2024-10-08 
18:40:15.808910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:14.755 [2024-10-08 18:40:15.808935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:49344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.755 [2024-10-08 18:40:15.808952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:14.755 [2024-10-08 18:40:15.809019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:49352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.755 [2024-10-08 18:40:15.809074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:14.755 [2024-10-08 18:40:15.809132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:49360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.755 [2024-10-08 18:40:15.809173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:14.755 [2024-10-08 18:40:15.809228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:49368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.755 [2024-10-08 18:40:15.809268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:14.755 [2024-10-08 18:40:15.809322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:49376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.755 [2024-10-08 18:40:15.809360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:14.755 [2024-10-08 18:40:15.809416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:49384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.755 [2024-10-08 18:40:15.809455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:14.755 [2024-10-08 18:40:15.809510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:49392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.755 [2024-10-08 18:40:15.809548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:30:14.755 [2024-10-08 18:40:15.809603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:49400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.755 [2024-10-08 18:40:15.809641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:14.755 [2024-10-08 18:40:15.809720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:49408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.755 [2024-10-08 18:40:15.809762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:14.755 [2024-10-08 18:40:15.809817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:49416 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:30:14.755 [2024-10-08 18:40:15.809855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:14.755 [2024-10-08 18:40:15.809909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:49424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.755 [2024-10-08 18:40:15.809947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:30:14.755 [2024-10-08 18:40:15.810002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:49432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.755 [2024-10-08 18:40:15.810041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:14.755 [2024-10-08 18:40:15.810095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:49440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.755 [2024-10-08 18:40:15.810132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:14.755 [2024-10-08 18:40:15.810187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:49448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.755 [2024-10-08 18:40:15.810228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:14.755 [2024-10-08 18:40:15.810294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:49456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.755 [2024-10-08 18:40:15.810335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:14.755 [2024-10-08 18:40:15.810389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:49464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.755 [2024-10-08 18:40:15.810427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:14.755 [2024-10-08 18:40:15.810483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:49472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.755 [2024-10-08 18:40:15.810522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:14.755 [2024-10-08 18:40:15.810577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:49480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.755 [2024-10-08 18:40:15.810615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:14.755 [2024-10-08 18:40:15.810688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:49488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.755 [2024-10-08 18:40:15.810740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:14.755 [2024-10-08 18:40:15.810766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:8 nsid:1 lba:49496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.755 [2024-10-08 18:40:15.810783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:14.755 [2024-10-08 18:40:15.810808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:49504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.755 [2024-10-08 18:40:15.810825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:14.755 [2024-10-08 18:40:15.810848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:49512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.755 [2024-10-08 18:40:15.810866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:14.755 [2024-10-08 18:40:15.810904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:49520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.755 [2024-10-08 18:40:15.810922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:14.755 [2024-10-08 18:40:15.810985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:49528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.755 [2024-10-08 18:40:15.811024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:14.755 [2024-10-08 18:40:15.811083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:49536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.755 [2024-10-08 18:40:15.811124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:14.755 [2024-10-08 18:40:15.812807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:49544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.755 [2024-10-08 18:40:15.812833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:14.755 [2024-10-08 18:40:15.812868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:49552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.755 [2024-10-08 18:40:15.812889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:30:14.755 [2024-10-08 18:40:15.812914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:48536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.755 [2024-10-08 18:40:15.812931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:30:14.755 [2024-10-08 18:40:15.812971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:48552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.755 [2024-10-08 18:40:15.813014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:14.755 [2024-10-08 18:40:15.813073] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:48560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.755 [2024-10-08 18:40:15.813112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:14.755 [2024-10-08 18:40:15.813167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:48568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.755 [2024-10-08 18:40:15.813206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:14.755 [2024-10-08 18:40:15.813260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:48576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.755 [2024-10-08 18:40:15.813300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:14.756 [2024-10-08 18:40:15.813354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.756 [2024-10-08 18:40:15.813393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:14.756 [2024-10-08 18:40:15.813448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:48592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.756 [2024-10-08 18:40:15.813487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:14.756 [2024-10-08 18:40:15.813542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:48600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.756 [2024-10-08 18:40:15.813581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:14.756 [2024-10-08 18:40:15.813636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:48608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.756 [2024-10-08 18:40:15.813702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:14.756 [2024-10-08 18:40:15.813747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:48616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.756 [2024-10-08 18:40:15.813766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:14.756 [2024-10-08 18:40:15.813791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:48624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.756 [2024-10-08 18:40:15.813809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:14.756 [2024-10-08 18:40:15.813833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:48632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.756 [2024-10-08 18:40:15.813856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006a p:0 m:0 dnr:0 
00:30:14.756 [2024-10-08 18:40:15.813882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:48640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.756 [2024-10-08 18:40:15.813900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:14.756 [2024-10-08 18:40:15.813925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:48648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.756 [2024-10-08 18:40:15.813943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:14.756 [2024-10-08 18:40:15.814015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:48656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.756 [2024-10-08 18:40:15.814053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:14.756 [2024-10-08 18:40:15.814108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:48664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.756 [2024-10-08 18:40:15.814148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:14.756 [2024-10-08 18:40:15.814203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:48672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.756 [2024-10-08 18:40:15.814242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:14.756 [2024-10-08 18:40:15.814296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:48680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.756 [2024-10-08 18:40:15.814335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:14.756 [2024-10-08 18:40:15.814390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:48688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.756 [2024-10-08 18:40:15.814429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:14.756 [2024-10-08 18:40:15.814483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:48696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.756 [2024-10-08 18:40:15.814521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:14.756 [2024-10-08 18:40:15.814574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:48704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.756 [2024-10-08 18:40:15.814614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:14.756 [2024-10-08 18:40:15.814687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:48712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.756 [2024-10-08 18:40:15.814738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:14.756 [2024-10-08 18:40:15.814764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:48720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.756 [2024-10-08 18:40:15.814781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:14.756 [2024-10-08 18:40:15.814806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:48728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.756 [2024-10-08 18:40:15.814828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:14.756 [2024-10-08 18:40:15.814854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:48736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.756 [2024-10-08 18:40:15.814872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:14.756 [2024-10-08 18:40:15.814896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:48744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.756 [2024-10-08 18:40:15.814913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:14.756 [2024-10-08 18:40:15.814958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:48752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.756 [2024-10-08 18:40:15.814999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:14.756 [2024-10-08 18:40:15.815056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:48760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.756 [2024-10-08 18:40:15.815094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:14.756 [2024-10-08 18:40:15.815148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:48768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.756 [2024-10-08 18:40:15.815187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:14.756 [2024-10-08 18:40:15.815242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:48776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.756 [2024-10-08 18:40:15.815281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:14.756 [2024-10-08 18:40:15.815334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:48784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.756 [2024-10-08 18:40:15.815372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:14.756 [2024-10-08 18:40:15.815427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:48792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.756 [2024-10-08 18:40:15.815466] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:14.756 [2024-10-08 18:40:15.815520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:48800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.756 [2024-10-08 18:40:15.815558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:14.756 [2024-10-08 18:40:15.815612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:48808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.756 [2024-10-08 18:40:15.815667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:14.756 [2024-10-08 18:40:15.815720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:48816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.756 [2024-10-08 18:40:15.815738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:14.756 [2024-10-08 18:40:15.815763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:48824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.756 [2024-10-08 18:40:15.815780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:14.756 [2024-10-08 18:40:15.815810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:48832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.756 [2024-10-08 18:40:15.815828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:30:14.756 [2024-10-08 18:40:15.815852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:48840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.756 [2024-10-08 18:40:15.815869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:14.756 [2024-10-08 18:40:15.815893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:48848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.756 [2024-10-08 18:40:15.815910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:14.756 [2024-10-08 18:40:15.815934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:48856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.756 [2024-10-08 18:40:15.815952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:30:14.756 [2024-10-08 18:40:15.815976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:48864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.756 [2024-10-08 18:40:15.816020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:30:14.756 [2024-10-08 18:40:15.816076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:48872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:14.756 [2024-10-08 18:40:15.816115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:30:14.756 [2024-10-08 18:40:15.816170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:48880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.756 [2024-10-08 18:40:15.816210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:30:14.756 [2024-10-08 18:40:15.816265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:48888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.756 [2024-10-08 18:40:15.816303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:30:14.756 [2024-10-08 18:40:15.816358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:48896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.756 [2024-10-08 18:40:15.816397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:14.757 [2024-10-08 18:40:15.816451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:48904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.757 [2024-10-08 18:40:15.816490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:30:14.757 [2024-10-08 18:40:15.816545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:48912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.757 [2024-10-08 18:40:15.816584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:30:14.757 [2024-10-08 18:40:15.816638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:48920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.757 [2024-10-08 18:40:15.816705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:30:14.757 [2024-10-08 18:40:15.816736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:48928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.757 [2024-10-08 18:40:15.816754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:30:14.757 [2024-10-08 18:40:15.816779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:48936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.757 [2024-10-08 18:40:15.816796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:30:14.757 [2024-10-08 18:40:15.816820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:48944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.757 [2024-10-08 18:40:15.816838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:30:14.757 [2024-10-08 18:40:15.816862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 
lba:48952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.757 [2024-10-08 18:40:15.816879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:14.757 [2024-10-08 18:40:15.816904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:48960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.757 [2024-10-08 18:40:15.816922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:30:14.757 [2024-10-08 18:40:15.817480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:48968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.757 [2024-10-08 18:40:15.817538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:30:14.757 [2024-10-08 18:40:15.817647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:48976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.757 [2024-10-08 18:40:15.817716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:30:14.757 [2024-10-08 18:40:15.817747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:48984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.757 [2024-10-08 18:40:15.817764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:30:14.757 [2024-10-08 18:40:15.817793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:48544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.757 [2024-10-08 18:40:15.817810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:30:14.757 [2024-10-08 18:40:15.817838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:48992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.757 [2024-10-08 18:40:15.817856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:30:14.757 [2024-10-08 18:40:15.817883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:49000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.757 [2024-10-08 18:40:15.817900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:30:14.757 [2024-10-08 18:40:15.817929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:49008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.757 [2024-10-08 18:40:15.817981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:30:14.757 [2024-10-08 18:40:15.818046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:49016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.757 [2024-10-08 18:40:15.818097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:14.757 [2024-10-08 18:40:15.818163] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:49024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.757 [2024-10-08 18:40:15.818203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:30:14.757 [2024-10-08 18:40:15.818286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:49032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.757 [2024-10-08 18:40:15.818328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:30:14.757 [2024-10-08 18:40:15.818392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:49040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.757 [2024-10-08 18:40:15.818431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:14.757 [2024-10-08 18:40:15.818494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:49048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.757 [2024-10-08 18:40:15.818534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:30:14.757 [2024-10-08 18:40:15.818597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:49056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.757 [2024-10-08 18:40:15.818635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:14.757 [2024-10-08 18:40:15.818731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:49064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.757 [2024-10-08 18:40:15.818751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:14.757 [2024-10-08 18:40:15.818781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:49072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.757 [2024-10-08 18:40:15.818798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:14.757 [2024-10-08 18:40:15.818826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:49080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.757 [2024-10-08 18:40:15.818844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:14.757 [2024-10-08 18:40:15.818872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:49088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.757 [2024-10-08 18:40:15.818889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:14.757 [2024-10-08 18:40:15.818918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:49096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.757 [2024-10-08 18:40:15.818965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 
00:30:14.757 [2024-10-08 18:40:15.819030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:49104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.757 [2024-10-08 18:40:15.819069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:14.757 [2024-10-08 18:40:15.819132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:49112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.757 [2024-10-08 18:40:15.819182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:14.757 [2024-10-08 18:40:15.819248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:49120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.757 [2024-10-08 18:40:15.819287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:14.757 [2024-10-08 18:40:15.819349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:49128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.757 [2024-10-08 18:40:15.819388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:14.757 [2024-10-08 18:40:15.819454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:49136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.757 [2024-10-08 18:40:15.819493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:30:14.757 [2024-10-08 18:40:15.819557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:49144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.757 [2024-10-08 18:40:15.819595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:14.757 [2024-10-08 18:40:15.819675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:49152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.757 [2024-10-08 18:40:15.819714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:14.757 [2024-10-08 18:40:15.819744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:49160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.757 [2024-10-08 18:40:15.819762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:14.757 [2024-10-08 18:40:15.819789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:49168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.757 [2024-10-08 18:40:15.819806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:14.757 [2024-10-08 18:40:15.819834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:49176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.757 [2024-10-08 18:40:15.819851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:35 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:14.757 [2024-10-08 18:40:15.819879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:49184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.757 [2024-10-08 18:40:15.819896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:30:14.757 [2024-10-08 18:40:15.819924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:49192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.757 [2024-10-08 18:40:15.819942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:30:14.757 [2024-10-08 18:40:15.819989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:49200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.757 [2024-10-08 18:40:15.820030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:14.757 [2024-10-08 18:40:15.820095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:49208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.757 [2024-10-08 18:40:15.820134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:14.757 [2024-10-08 18:40:15.820208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:49216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.758 [2024-10-08 18:40:15.820248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:30:14.758 [2024-10-08 18:40:15.820312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:49224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.758 [2024-10-08 18:40:15.820351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:30:14.758 [2024-10-08 18:40:15.820414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:49232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.758 [2024-10-08 18:40:15.820453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:30:14.758 [2024-10-08 18:40:15.820517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:49240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.758 [2024-10-08 18:40:15.820555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:30:14.758 [2024-10-08 18:40:15.820618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:49248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.758 [2024-10-08 18:40:15.820675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:30:14.758 [2024-10-08 18:40:15.820740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:49256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.758 [2024-10-08 18:40:15.820758] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:14.758 [2024-10-08 18:40:15.820786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:49264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.758 [2024-10-08 18:40:15.820804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:14.758 [2024-10-08 18:40:15.820832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:49272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.758 [2024-10-08 18:40:15.820849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:14.758 [2024-10-08 18:40:15.820877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:49280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.758 [2024-10-08 18:40:15.820894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:14.758 [2024-10-08 18:40:15.820922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:49288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.758 [2024-10-08 18:40:15.820974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:14.758 [2024-10-08 18:40:15.821038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:49296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.758 [2024-10-08 18:40:15.821077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:14.758 [2024-10-08 18:40:15.821142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:49304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.758 [2024-10-08 18:40:15.821181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:14.758 [2024-10-08 18:40:15.821255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:49312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.758 [2024-10-08 18:40:15.821296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:14.758 [2024-10-08 18:40:15.821359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:49320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.758 [2024-10-08 18:40:15.821399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:14.758 [2024-10-08 18:40:15.821463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:49328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.758 [2024-10-08 18:40:15.821502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:14.758 [2024-10-08 18:40:15.821566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:49336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:14.758 [2024-10-08 18:40:15.821605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:14.758 [2024-10-08 18:40:15.821700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:49344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.758 [2024-10-08 18:40:15.821720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:14.758 [2024-10-08 18:40:15.821749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:49352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.758 [2024-10-08 18:40:15.821767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:14.758 [2024-10-08 18:40:15.821796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:49360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.758 [2024-10-08 18:40:15.821813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:14.758 [2024-10-08 18:40:15.821841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:49368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.758 [2024-10-08 18:40:15.821859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:14.758 [2024-10-08 18:40:15.821886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:49376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.758 [2024-10-08 18:40:15.821903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:14.758 [2024-10-08 18:40:15.821932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:49384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.758 [2024-10-08 18:40:15.821949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:14.758 [2024-10-08 18:40:15.822027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:49392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.758 [2024-10-08 18:40:15.822067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:30:14.758 [2024-10-08 18:40:15.822129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:49400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.758 [2024-10-08 18:40:15.822168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:14.758 [2024-10-08 18:40:15.822232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:49408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.758 [2024-10-08 18:40:15.822281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:14.758 [2024-10-08 18:40:15.822347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:49416 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.758 [2024-10-08 18:40:15.822386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:14.758 [2024-10-08 18:40:15.822449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:49424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.758 [2024-10-08 18:40:15.822489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:30:14.758 [2024-10-08 18:40:15.822551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:49432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.758 [2024-10-08 18:40:15.822590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:14.758 [2024-10-08 18:40:15.822672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:49440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.758 [2024-10-08 18:40:15.822729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:14.758 [2024-10-08 18:40:15.822760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:49448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.758 [2024-10-08 18:40:15.822778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:14.758 [2024-10-08 18:40:15.822805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:49456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.758 [2024-10-08 18:40:15.822822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:14.758 [2024-10-08 18:40:15.822850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:49464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.758 [2024-10-08 18:40:15.822868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:14.758 [2024-10-08 18:40:15.823264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:49472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.758 [2024-10-08 18:40:15.823319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:14.758 [2024-10-08 18:40:15.823403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:49480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.758 [2024-10-08 18:40:15.823446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:14.758 [2024-10-08 18:40:15.823521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:49488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.758 [2024-10-08 18:40:15.823561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:14.758 [2024-10-08 18:40:15.823635] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:49496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.758 [2024-10-08 18:40:15.823697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:14.758 [2024-10-08 18:40:15.823757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:49504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.758 [2024-10-08 18:40:15.823782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:14.758 [2024-10-08 18:40:15.823816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:49512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.758 [2024-10-08 18:40:15.823834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:14.758 [2024-10-08 18:40:15.823867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:49520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.758 [2024-10-08 18:40:15.823884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:14.758 [2024-10-08 18:40:15.823917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:49528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.758 [2024-10-08 18:40:15.823967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:14.758 [2024-10-08 18:40:15.824044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:49536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.758 [2024-10-08 18:40:15.824084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:14.759 4399.35 IOPS, 17.18 MiB/s [2024-10-08T16:40:43.296Z] 4189.86 IOPS, 16.37 MiB/s [2024-10-08T16:40:43.296Z] 3999.41 IOPS, 15.62 MiB/s [2024-10-08T16:40:43.296Z] 3825.52 IOPS, 14.94 MiB/s [2024-10-08T16:40:43.296Z] 3666.12 IOPS, 14.32 MiB/s [2024-10-08T16:40:43.296Z] 3519.48 IOPS, 13.75 MiB/s [2024-10-08T16:40:43.296Z] 3460.54 IOPS, 13.52 MiB/s [2024-10-08T16:40:43.296Z] 3497.26 IOPS, 13.66 MiB/s [2024-10-08T16:40:43.296Z] 3533.54 IOPS, 13.80 MiB/s [2024-10-08T16:40:43.296Z] 3570.72 IOPS, 13.95 MiB/s [2024-10-08T16:40:43.296Z] 3645.77 IOPS, 14.24 MiB/s [2024-10-08T16:40:43.296Z] 3711.58 IOPS, 14.50 MiB/s [2024-10-08T16:40:43.296Z] 3795.09 IOPS, 14.82 MiB/s [2024-10-08T16:40:43.296Z] 3881.64 IOPS, 15.16 MiB/s [2024-10-08T16:40:43.296Z] 3928.50 IOPS, 15.35 MiB/s [2024-10-08T16:40:43.296Z] 3939.94 IOPS, 15.39 MiB/s [2024-10-08T16:40:43.296Z] 3956.92 IOPS, 15.46 MiB/s [2024-10-08T16:40:43.296Z] 3972.14 IOPS, 15.52 MiB/s [2024-10-08T16:40:43.296Z] 3988.11 IOPS, 15.58 MiB/s [2024-10-08T16:40:43.296Z] 4007.33 IOPS, 15.65 MiB/s [2024-10-08T16:40:43.296Z] 4044.93 IOPS, 15.80 MiB/s [2024-10-08T16:40:43.296Z] 4096.85 IOPS, 16.00 MiB/s [2024-10-08T16:40:43.296Z] 4169.10 IOPS, 16.29 MiB/s [2024-10-08T16:40:43.296Z] 4214.65 IOPS, 16.46 MiB/s [2024-10-08T16:40:43.296Z] [2024-10-08 18:40:39.691836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:26288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.759 [2024-10-08 
18:40:39.691912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:30:14.759 [2024-10-08 18:40:39.692038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:26304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.759 [2024-10-08 18:40:39.692103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:14.759 [2024-10-08 18:40:39.692166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:25872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.759 [2024-10-08 18:40:39.692207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:14.759 [2024-10-08 18:40:39.692264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.759 [2024-10-08 18:40:39.692304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:30:14.759 [2024-10-08 18:40:39.692361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:25880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.759 [2024-10-08 18:40:39.692401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:30:14.759 [2024-10-08 18:40:39.692458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:25912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.759 [2024-10-08 18:40:39.692517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:30:14.759 [2024-10-08 18:40:39.692576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:25944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.759 [2024-10-08 18:40:39.692615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:30:14.759 [2024-10-08 18:40:39.692699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:25936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.759 [2024-10-08 18:40:39.692720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:30:14.759 [2024-10-08 18:40:39.692745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:26328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.759 [2024-10-08 18:40:39.692763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:14.759 [2024-10-08 18:40:39.692788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:26344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.759 [2024-10-08 18:40:39.692806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:14.759 [2024-10-08 18:40:39.692831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:25952 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.759 [2024-10-08 18:40:39.692848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:14.759 [2024-10-08 18:40:39.692873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:25984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.759 [2024-10-08 18:40:39.692891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:14.759 [2024-10-08 18:40:39.692960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:26016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.759 [2024-10-08 18:40:39.693001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:14.759 [2024-10-08 18:40:39.693055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:26048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.759 [2024-10-08 18:40:39.693094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:14.759 [2024-10-08 18:40:39.693149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:25976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.759 [2024-10-08 18:40:39.693190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:14.759 [2024-10-08 18:40:39.693245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:26008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.759 [2024-10-08 18:40:39.693284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:14.759 [2024-10-08 18:40:39.693339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:26040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.759 [2024-10-08 18:40:39.693378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:14.759 [2024-10-08 18:40:39.693434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:26360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.759 [2024-10-08 18:40:39.693483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:14.759 [2024-10-08 18:40:39.693541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:26376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.759 [2024-10-08 18:40:39.693581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:14.759 [2024-10-08 18:40:39.693635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:26392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.759 [2024-10-08 18:40:39.693703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:14.759 [2024-10-08 18:40:39.693730] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:26408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.759 [2024-10-08 18:40:39.693748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:14.759 [2024-10-08 18:40:39.693773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:26424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.759 [2024-10-08 18:40:39.693790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:14.759 [2024-10-08 18:40:39.693815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:26440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.759 [2024-10-08 18:40:39.693832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:14.759 [2024-10-08 18:40:39.693856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:26456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.759 [2024-10-08 18:40:39.693874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:14.759 [2024-10-08 18:40:39.693916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:26472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.759 [2024-10-08 18:40:39.693956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:14.759 [2024-10-08 18:40:39.694013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:26488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.759 [2024-10-08 18:40:39.694052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:30:14.759 [2024-10-08 18:40:39.694109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:26504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.759 [2024-10-08 18:40:39.694149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:14.759 [2024-10-08 18:40:39.696903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:26088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.759 [2024-10-08 18:40:39.696932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:14.759 [2024-10-08 18:40:39.696982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:26120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.759 [2024-10-08 18:40:39.697007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:14.759 [2024-10-08 18:40:39.697038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:26152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.759 [2024-10-08 18:40:39.697061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:30:14.759 [2024-10-08 
18:40:39.697105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:26520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.759 [2024-10-08 18:40:39.697129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:14.760 [2024-10-08 18:40:39.697159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:26536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.760 [2024-10-08 18:40:39.697181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:14.760 [2024-10-08 18:40:39.697212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:26552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.760 [2024-10-08 18:40:39.697233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:14.760 [2024-10-08 18:40:39.697263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:26568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.760 [2024-10-08 18:40:39.697285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:14.760 [2024-10-08 18:40:39.697315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.760 [2024-10-08 18:40:39.697337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:14.760 [2024-10-08 18:40:39.697368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:26600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.760 [2024-10-08 18:40:39.697390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:14.760 [2024-10-08 18:40:39.697420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.760 [2024-10-08 18:40:39.697442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:14.760 [2024-10-08 18:40:39.697472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:26064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.760 [2024-10-08 18:40:39.697494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:14.760 [2024-10-08 18:40:39.697524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:26096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.760 [2024-10-08 18:40:39.697546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:14.760 [2024-10-08 18:40:39.697576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:26128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.760 [2024-10-08 18:40:39.697598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 
cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:14.760 [2024-10-08 18:40:39.697629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:26160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.760 [2024-10-08 18:40:39.697660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:14.760 [2024-10-08 18:40:39.699174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:26192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.760 [2024-10-08 18:40:39.699208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:14.760 [2024-10-08 18:40:39.699253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:26224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.760 [2024-10-08 18:40:39.699278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:14.760 [2024-10-08 18:40:39.699309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:26256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.760 [2024-10-08 18:40:39.699331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:14.760 [2024-10-08 18:40:39.699361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:26632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.760 [2024-10-08 18:40:39.699384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:14.760 [2024-10-08 18:40:39.699414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:26648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.760 [2024-10-08 18:40:39.699436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:30:14.760 [2024-10-08 18:40:39.699466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:26664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.760 [2024-10-08 18:40:39.699488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:30:14.760 [2024-10-08 18:40:39.699518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:26680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.760 [2024-10-08 18:40:39.699539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:14.760 [2024-10-08 18:40:39.699570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:26696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.760 [2024-10-08 18:40:39.699592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:14.760 [2024-10-08 18:40:39.699621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:26712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:14.760 [2024-10-08 18:40:39.699642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:30:14.760 [2024-10-08 18:40:39.699697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:26728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:14.760 [2024-10-08 18:40:39.699717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:30:14.760 [2024-10-08 18:40:39.699743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:26744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:14.760 [2024-10-08 18:40:39.699761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:30:14.760 4236.11 IOPS, 16.55 MiB/s [2024-10-08T16:40:43.297Z] 4241.76 IOPS, 16.57 MiB/s [2024-10-08T16:40:43.297Z] 4255.85 IOPS, 16.62 MiB/s [2024-10-08T16:40:43.297Z] Received shutdown signal, test time was about 46.509753 seconds
00:30:14.760
00:30:14.760 Latency(us)
00:30:14.760 [2024-10-08T16:40:43.297Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:14.760 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:30:14.760 Verification LBA range: start 0x0 length 0x4000
00:30:14.760 Nvme0n1 : 46.51 4259.73 16.64 0.00 0.00 29998.97 218.45 6089508.03
00:30:14.760 [2024-10-08T16:40:43.297Z] ===================================================================================================================
00:30:14.760 [2024-10-08T16:40:43.297Z] Total : 4259.73 16.64 0.00 0.00 29998.97 218.45 6089508.03
00:30:14.760 18:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:30:15.326 18:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:30:15.326 18:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:30:15.326 18:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:30:15.326 18:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@514 -- # nvmfcleanup
00:30:15.326 18:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:30:15.326 18:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:30:15.326 18:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:30:15.326 18:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:30:15.326 18:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:30:15.326 rmmod nvme_tcp
00:30:15.326 rmmod nvme_fabrics
00:30:15.326 rmmod nvme_keyring
00:30:15.326 18:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:30:15.326 18:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:30:15.326 18:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:30:15.326 18:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@515 -- # '[' -n 1301324 ']'
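The Nvme0n1 summary above reports 4259.73 IOPS at a 4096-byte I/O size, which works out to about 16.6 MiB/s (4259.73 x 4096 bytes is roughly 16.64 MiB/s) and matches the MiB/s column. The trace between the table and this point is the start of the multipath_status.sh teardown: the test subsystem is removed over RPC and the kernel initiator modules are unloaded; the killprocess and iptables/namespace steps that follow finish the cleanup. A minimal sketch of the subsystem-removal and module-unload part, assuming a running SPDK target with its RPC socket at the default location and the workspace path shown in the log, would be:

#!/usr/bin/env bash
# Hedged sketch of the teardown steps the trace shows; the NQN, module names
# and workspace path come from the log above, the retry/sleep details are assumed.
set -e

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk  # assumed checkout location (from the log)

# Remove the NVMe-oF subsystem the multipath test created.
"$SPDK_DIR"/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

# Unload the kernel initiator modules; keep retrying briefly because they
# stay busy until in-flight I/O has drained.
set +e
for i in {1..20}; do
    modprobe -v -r nvme-tcp && break
    sleep 1
done
modprobe -v -r nvme-fabrics
set -e

This has to run as root on the test node; the killing of the target process itself and the iptables/namespace cleanup are handled by nvmftestfini and killprocess in nvmf/common.sh and autotest_common.sh, as the rest of the trace below shows.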
00:30:15.326 18:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # killprocess 1301324
00:30:15.326 18:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 1301324 ']'
00:30:15.326 18:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 1301324
00:30:15.326 18:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname
00:30:15.326 18:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:30:15.326 18:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1301324
00:30:15.326 18:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:30:15.326 18:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:30:15.326 18:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1301324'
00:30:15.326 killing process with pid 1301324
00:30:15.327 18:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 1301324
00:30:15.896 18:40:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 1301324
00:30:15.896 18:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:30:15.896 18:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:30:15.896 18:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:30:15.896 18:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr
00:30:15.896 18:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-save
00:30:15.896 18:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:30:15.896 18:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-restore
00:30:15.896 18:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:30:15.896 18:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns
00:30:15.896 18:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:15.896 18:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:30:15.896 18:40:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:17.803 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:30:17.803
00:30:17.803 real 0m59.633s
00:30:17.803 user 3m7.405s
00:30:17.803 sys 0m15.260s
00:30:17.803 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable
00:30:17.803 18:40:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:30:17.803 ************************************
00:30:17.803 END TEST nvmf_host_multipath_status
00:30:17.803 ************************************
00:30:17.803 18:40:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:30:17.803 18:40:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:17.803 18:40:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:17.803 18:40:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:18.064 ************************************ 00:30:18.064 START TEST nvmf_discovery_remove_ifc 00:30:18.064 ************************************ 00:30:18.064 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:30:18.064 * Looking for test storage... 00:30:18.064 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:18.064 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:30:18.064 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lcov --version 00:30:18.064 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:30:18.064 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:30:18.064 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:18.064 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:18.064 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:18.064 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:30:18.064 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:30:18.064 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:30:18.064 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:30:18.064 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:30:18.064 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:30:18.064 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:30:18.064 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:18.064 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:30:18.064 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:30:18.064 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:18.064 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:18.064 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:30:18.064 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:30:18.064 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:18.064 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:30:18.064 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:30:18.064 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:30:18.064 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:30:18.064 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:18.064 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:30:18.064 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:30:18.064 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:18.064 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:18.064 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:30:18.064 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:18.064 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:30:18.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:18.064 --rc genhtml_branch_coverage=1 00:30:18.064 --rc genhtml_function_coverage=1 00:30:18.064 --rc genhtml_legend=1 00:30:18.064 --rc geninfo_all_blocks=1 00:30:18.064 --rc geninfo_unexecuted_blocks=1 00:30:18.064 00:30:18.064 ' 00:30:18.064 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:30:18.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:18.064 --rc genhtml_branch_coverage=1 00:30:18.064 --rc genhtml_function_coverage=1 00:30:18.064 --rc genhtml_legend=1 00:30:18.064 --rc geninfo_all_blocks=1 00:30:18.064 --rc geninfo_unexecuted_blocks=1 00:30:18.064 00:30:18.064 ' 00:30:18.064 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:30:18.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:18.064 --rc genhtml_branch_coverage=1 00:30:18.064 --rc genhtml_function_coverage=1 00:30:18.064 --rc genhtml_legend=1 00:30:18.064 --rc geninfo_all_blocks=1 00:30:18.064 --rc geninfo_unexecuted_blocks=1 00:30:18.064 00:30:18.064 ' 00:30:18.064 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:30:18.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:18.064 --rc genhtml_branch_coverage=1 00:30:18.064 --rc genhtml_function_coverage=1 00:30:18.064 --rc genhtml_legend=1 00:30:18.064 --rc geninfo_all_blocks=1 00:30:18.064 --rc geninfo_unexecuted_blocks=1 00:30:18.064 00:30:18.064 ' 00:30:18.064 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:18.064 
18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:30:18.064 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:18.064 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:18.064 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:18.064 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:18.064 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:18.064 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:18.064 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:18.064 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:18.064 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:18.064 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:18.064 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:30:18.064 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:30:18.064 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:18.064 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:18.064 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:18.064 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:18.065 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:18.065 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:30:18.065 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:18.065 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:18.065 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:18.065 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.065 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.065 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.065 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:30:18.065 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.065 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:30:18.065 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:18.065 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:18.065 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:18.065 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:18.065 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:18.065 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:18.065 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:18.065 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:18.065 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:18.065 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:18.065 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:30:18.065 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:30:18.065 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:30:18.065 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:30:18.065 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:30:18.065 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:30:18.065 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:30:18.065 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:18.065 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:18.065 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:18.065 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:18.065 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:18.065 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:18.065 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:18.065 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:18.065 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:18.065 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:18.065 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:30:18.065 18:40:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:21.359 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:21.359 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:30:21.359 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:21.359 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:21.359 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:21.359 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:21.359 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:21.359 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:30:21.359 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:21.359 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:30:21.359 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:30:21.359 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:30:21.359 18:40:49 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:30:21.359 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:30:21.360 Found 0000:84:00.0 (0x8086 - 0x159b) 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:21.360 18:40:49 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:30:21.360 Found 0000:84:00.1 (0x8086 - 0x159b) 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:30:21.360 Found net devices under 0000:84:00.0: cvl_0_0 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:30:21.360 Found net devices under 0000:84:00.1: cvl_0_1 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # 
net_devs+=("${pci_net_devs[@]}") 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # is_hw=yes 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:21.360 
18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:21.360 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:21.360 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.302 ms 00:30:21.360 00:30:21.360 --- 10.0.0.2 ping statistics --- 00:30:21.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:21.360 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:21.360 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:21.360 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:30:21.360 00:30:21.360 --- 10.0.0.1 ping statistics --- 00:30:21.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:21.360 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # return 0 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # nvmfpid=1309642 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # waitforlisten 1309642 00:30:21.360 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 1309642 ']' 00:30:21.361 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:21.361 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:21.361 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
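The nvmf_tcp_init trace above amounts to a small, fixed piece of wiring: the e810 port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and becomes the target at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, an iptables rule opens TCP/4420 on the initiator side, and both directions are ping-checked before nvmf_tgt is started inside the namespace (the ip netns exec ... nvmf_tgt -i 0 -e 0xFFFF -m 0x2 line just above). A minimal sketch of those steps, using only the interface names, namespace name and addresses that appear in the trace; it must run as root, and the physical cabling between the two ports is assumed:

    TARGET_NS=cvl_0_0_ns_spdk
    ip netns add "$TARGET_NS"
    ip link set cvl_0_0 netns "$TARGET_NS"            # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side stays in the root ns
    ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
    ip netns exec "$TARGET_NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
    ping -c 1 10.0.0.2                                # initiator -> target reachability
    ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1     # target -> initiator reachability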
00:30:21.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:21.361 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:21.361 18:40:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:21.361 [2024-10-08 18:40:49.693603] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:30:21.361 [2024-10-08 18:40:49.693771] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:21.361 [2024-10-08 18:40:49.832067] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:21.621 [2024-10-08 18:40:50.026299] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:21.621 [2024-10-08 18:40:50.026373] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:21.621 [2024-10-08 18:40:50.026390] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:21.621 [2024-10-08 18:40:50.026404] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:21.621 [2024-10-08 18:40:50.026416] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:21.621 [2024-10-08 18:40:50.027142] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:30:21.880 18:40:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:21.880 18:40:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:30:21.880 18:40:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:21.880 18:40:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:21.880 18:40:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:21.880 18:40:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:21.880 18:40:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:30:21.880 18:40:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.880 18:40:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:21.880 [2024-10-08 18:40:50.289990] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:21.880 [2024-10-08 18:40:50.298905] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:30:21.880 null0 00:30:21.880 [2024-10-08 18:40:50.331575] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:21.880 18:40:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:21.880 18:40:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1309781 00:30:21.880 18:40:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 
--wait-for-rpc -L bdev_nvme 00:30:21.880 18:40:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1309781 /tmp/host.sock 00:30:21.880 18:40:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 1309781 ']' 00:30:21.880 18:40:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:30:21.880 18:40:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:21.880 18:40:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:30:21.880 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:30:21.880 18:40:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:21.880 18:40:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:21.880 [2024-10-08 18:40:50.415052] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:30:21.880 [2024-10-08 18:40:50.415193] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1309781 ] 00:30:22.139 [2024-10-08 18:40:50.522849] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:22.398 [2024-10-08 18:40:50.753428] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:30:22.657 18:40:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:22.657 18:40:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:30:22.657 18:40:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:22.657 18:40:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:30:22.657 18:40:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:22.657 18:40:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:22.657 18:40:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:22.657 18:40:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:30:22.657 18:40:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:22.657 18:40:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:22.917 18:40:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:22.917 18:40:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:30:22.917 18:40:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:30:22.917 18:40:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:23.856 [2024-10-08 18:40:52.343262] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:23.856 [2024-10-08 18:40:52.343337] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:23.856 [2024-10-08 18:40:52.343398] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:24.116 [2024-10-08 18:40:52.472023] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:30:24.116 [2024-10-08 18:40:52.535160] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:30:24.116 [2024-10-08 18:40:52.535304] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:30:24.116 [2024-10-08 18:40:52.535395] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:30:24.116 [2024-10-08 18:40:52.535452] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:24.116 [2024-10-08 18:40:52.535515] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:24.116 18:40:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:24.116 18:40:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:30:24.116 18:40:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:24.116 [2024-10-08 18:40:52.540163] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x7c76a0 was disconnected and freed. delete nvme_qpair. 
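The wait_for_bdev / get_bdev_list pattern that the rest of this test keeps repeating is visible in the xtrace above: read the bdev list over the host application's RPC socket, normalize it with jq, sort and xargs, and compare it against the expected name once per second. A reconstruction of that pattern as a sketch, not the literal helpers from discovery_remove_ifc.sh; the rpc.py path and /tmp/host.sock are the ones used elsewhere in this log, and jq is assumed to be installed:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    HOST_SOCK=/tmp/host.sock

    get_bdev_list() {
        # bdev names reported by the host app, sorted and joined on one line
        "$RPC" -s "$HOST_SOCK" bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # spin until the list equals the expected value; '' waits for removal
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }

    wait_for_bdev nvme0n1   # discovery attached nvme0, so nvme0n1 should show up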
00:30:24.116 18:40:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:24.116 18:40:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:24.116 18:40:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:24.116 18:40:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:24.116 18:40:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:24.116 18:40:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:24.116 18:40:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:24.116 18:40:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:30:24.116 18:40:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:30:24.116 18:40:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:30:24.116 18:40:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:30:24.116 18:40:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:24.116 18:40:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:24.116 18:40:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:24.116 18:40:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:24.116 18:40:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:24.116 18:40:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:24.116 18:40:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:24.376 18:40:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:24.376 18:40:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:24.376 18:40:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:25.313 18:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:25.313 18:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:25.313 18:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:25.313 18:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:25.313 18:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:25.313 18:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:25.313 18:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:25.313 18:40:53 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:25.314 18:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:25.314 18:40:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:26.695 18:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:26.695 18:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:26.695 18:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.695 18:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:26.695 18:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:26.695 18:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:26.695 18:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:26.695 18:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.695 18:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:26.695 18:40:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:27.634 18:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:27.634 18:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:27.634 18:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.634 18:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:27.634 18:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:27.634 18:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:27.634 18:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:27.634 18:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.634 18:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:27.634 18:40:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:28.572 18:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:28.572 18:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:28.572 18:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:28.572 18:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:28.572 18:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:28.572 18:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:28.572 18:40:56 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:28.572 18:40:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:28.572 18:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:28.572 18:40:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:29.509 [2024-10-08 18:40:57.974283] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:30:29.509 [2024-10-08 18:40:57.974424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:29.509 [2024-10-08 18:40:57.974476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.509 [2024-10-08 18:40:57.974519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:29.509 [2024-10-08 18:40:57.974555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.509 [2024-10-08 18:40:57.974592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:29.509 [2024-10-08 18:40:57.974627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.509 [2024-10-08 18:40:57.974685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:29.509 [2024-10-08 18:40:57.974725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.509 [2024-10-08 18:40:57.974761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:29.509 [2024-10-08 18:40:57.974796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.509 [2024-10-08 18:40:57.974829] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a3fd0 is same with the state(6) to be set 00:30:29.509 [2024-10-08 18:40:57.984298] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a3fd0 (9): Bad file descriptor 00:30:29.509 [2024-10-08 18:40:57.994370] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:29.768 18:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:29.768 18:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:29.768 18:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:29.768 18:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.768 18:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:29.768 18:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- host/discovery_remove_ifc.sh@29 -- # sort 00:30:29.768 18:40:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:30.705 [2024-10-08 18:40:59.046756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:30:30.705 [2024-10-08 18:40:59.046911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a3fd0 with addr=10.0.0.2, port=4420 00:30:30.705 [2024-10-08 18:40:59.046971] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a3fd0 is same with the state(6) to be set 00:30:30.705 [2024-10-08 18:40:59.047071] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a3fd0 (9): Bad file descriptor 00:30:30.705 [2024-10-08 18:40:59.048144] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:30.705 [2024-10-08 18:40:59.048249] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:30.705 [2024-10-08 18:40:59.048292] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:30.705 [2024-10-08 18:40:59.048332] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:30.705 [2024-10-08 18:40:59.048434] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:30.705 [2024-10-08 18:40:59.048486] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:30.705 18:40:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:30.705 18:40:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:30.705 18:40:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:31.639 [2024-10-08 18:41:00.051030] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:31.639 [2024-10-08 18:41:00.051092] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:31.639 [2024-10-08 18:41:00.051111] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:31.639 [2024-10-08 18:41:00.051129] nvme_ctrlr.c:1114:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:30:31.639 [2024-10-08 18:41:00.051164] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
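The connect() errno 110 and 'Resetting controller failed' messages above are the intended result of the fault-injection step recorded a little earlier in the trace (discovery_remove_ifc.sh@75-76): the target address is deleted and its link downed inside the namespace, so the host can no longer re-establish the queue pair to 10.0.0.2:4420. Roughly, with the same names and paths as above (a sketch, not the script itself; jq is assumed):

    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /tmp/host.sock"
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
    # wait_for_bdev '' in the script: poll until the host reports no bdevs at all
    while [[ -n "$($RPC bdev_get_bdevs | jq -r '.[].name')" ]]; do sleep 1; done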
00:30:31.639 [2024-10-08 18:41:00.051235] bdev_nvme.c:6904:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:30:31.639 [2024-10-08 18:41:00.051341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.639 [2024-10-08 18:41:00.051400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.639 [2024-10-08 18:41:00.051444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.639 [2024-10-08 18:41:00.051479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.639 [2024-10-08 18:41:00.051514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.639 [2024-10-08 18:41:00.051548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.639 [2024-10-08 18:41:00.051583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.639 [2024-10-08 18:41:00.051618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.639 [2024-10-08 18:41:00.051671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.639 [2024-10-08 18:41:00.051714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.639 [2024-10-08 18:41:00.051748] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
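The controller gives up quickly here instead of retrying indefinitely because the discovery connection was created with short failover limits, as shown at discovery_remove_ifc.sh@69 earlier in the trace. The equivalent direct rpc.py call would look roughly like this; the test actually issues it through its rpc_cmd wrapper on /tmp/host.sock, and treating rpc.py as a drop-in for that wrapper is an assumption:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /tmp/host.sock \
        bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 \
        --wait-for-attach

With a 2 second controller-loss timeout and a 1 second reconnect delay, the host stops retrying within a couple of seconds of losing the path, which is the failed state logged above.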
00:30:31.639 [2024-10-08 18:41:00.052012] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x793300 (9): Bad file descriptor 00:30:31.639 [2024-10-08 18:41:00.053046] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:30:31.639 [2024-10-08 18:41:00.053101] nvme_ctrlr.c:1233:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:30:31.639 18:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:31.639 18:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:31.639 18:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:31.639 18:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.639 18:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:31.639 18:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:31.639 18:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:31.639 18:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.639 18:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:30:31.639 18:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:31.639 18:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:31.897 18:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:30:31.897 18:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:31.897 18:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:31.897 18:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:31.897 18:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.897 18:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:31.897 18:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:31.897 18:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:31.897 18:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.897 18:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:30:31.897 18:41:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:32.831 18:41:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:32.831 18:41:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:32.831 18:41:01 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:32.831 18:41:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:32.831 18:41:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:32.831 18:41:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:32.831 18:41:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:32.831 18:41:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:32.831 18:41:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:30:32.831 18:41:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:33.765 [2024-10-08 18:41:02.111013] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:33.765 [2024-10-08 18:41:02.111076] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:33.765 [2024-10-08 18:41:02.111132] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:33.765 [2024-10-08 18:41:02.238697] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:30:34.023 18:41:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:34.023 18:41:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:34.023 18:41:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.023 18:41:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:34.023 18:41:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:34.023 18:41:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:34.023 18:41:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:34.023 18:41:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.023 [2024-10-08 18:41:02.341849] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:30:34.023 [2024-10-08 18:41:02.341905] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:30:34.023 [2024-10-08 18:41:02.341992] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:30:34.023 [2024-10-08 18:41:02.342049] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:30:34.023 [2024-10-08 18:41:02.342083] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:34.023 [2024-10-08 18:41:02.348193] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x7ae480 was disconnected and freed. delete nvme_qpair. 
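Annotation: the loop traced above polls the host app's bdev list once per second until discovery re-attaches the subsystem and nvme1n1 reappears. A condensed sketch of that wait; rpc_cmd in the trace is the autotest wrapper around scripts/rpc.py (an assumption about its resolution), the socket path /tmp/host.sock and the bdev name are taken from this run, and the substring match below simplifies the in-tree comparison against the full sorted list:

  # list every bdev known to the host-side SPDK app on one sorted line
  get_bdev_list() {
      scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

  # poll once per second until the named bdev shows up again
  wait_for_bdev() {
      local bdev=$1
      while [[ "$(get_bdev_list)" != *"$bdev"* ]]; do
          sleep 1
      done
  }

  wait_for_bdev nvme1n1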
00:30:34.023 18:41:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:30:34.023 18:41:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:34.958 18:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:34.958 18:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:34.958 18:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:34.958 18:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.958 18:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:34.958 18:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:34.958 18:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:34.958 18:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.958 18:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:30:34.958 18:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:30:34.958 18:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1309781 00:30:34.958 18:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 1309781 ']' 00:30:34.958 18:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 1309781 00:30:34.958 18:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:30:34.958 18:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:34.958 18:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1309781 00:30:35.216 18:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:35.216 18:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:35.216 18:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1309781' 00:30:35.216 killing process with pid 1309781 00:30:35.216 18:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 1309781 00:30:35.216 18:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 1309781 00:30:35.474 18:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:30:35.474 18:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:35.474 18:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:30:35.474 18:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:35.474 18:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:30:35.474 18:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # 
for i in {1..20} 00:30:35.474 18:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:35.474 rmmod nvme_tcp 00:30:35.474 rmmod nvme_fabrics 00:30:35.474 rmmod nvme_keyring 00:30:35.474 18:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:35.474 18:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:30:35.474 18:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:30:35.474 18:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@515 -- # '[' -n 1309642 ']' 00:30:35.474 18:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # killprocess 1309642 00:30:35.474 18:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 1309642 ']' 00:30:35.474 18:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 1309642 00:30:35.474 18:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:30:35.474 18:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:35.474 18:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1309642 00:30:35.474 18:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:35.474 18:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:35.474 18:41:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1309642' 00:30:35.474 killing process with pid 1309642 00:30:35.474 18:41:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 1309642 00:30:35.474 18:41:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 1309642 00:30:36.040 18:41:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:36.040 18:41:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:36.040 18:41:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:36.040 18:41:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:30:36.040 18:41:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-save 00:30:36.040 18:41:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:36.040 18:41:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-restore 00:30:36.040 18:41:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:36.040 18:41:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:36.040 18:41:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:36.040 18:41:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:36.040 18:41:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:38.581 18:41:06 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:38.581 00:30:38.581 real 0m20.117s 00:30:38.581 user 0m28.721s 00:30:38.581 sys 0m4.344s 00:30:38.581 18:41:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:38.581 18:41:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:38.581 ************************************ 00:30:38.581 END TEST nvmf_discovery_remove_ifc 00:30:38.581 ************************************ 00:30:38.581 18:41:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:30:38.581 18:41:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:38.581 18:41:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:38.581 18:41:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:38.581 ************************************ 00:30:38.581 START TEST nvmf_identify_kernel_target 00:30:38.581 ************************************ 00:30:38.581 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:30:38.581 * Looking for test storage... 00:30:38.581 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:38.581 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:30:38.581 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lcov --version 00:30:38.581 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:30:38.581 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:30:38.581 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:38.581 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:38.581 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:38.581 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:30:38.581 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:30:38.581 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:30:38.581 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:30:38.581 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:30:38.581 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:30:38.581 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:30:38.581 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:38.581 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:30:38.581 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:30:38.581 18:41:06 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:38.581 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:38.581 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:30:38.581 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:30:38.581 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:38.581 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:30:38.581 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:30:38.581 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:30:38.581 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:30:38.581 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:38.581 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:30:38.581 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:30:38.581 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:38.581 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:38.581 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:30:38.581 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:38.581 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:30:38.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:38.581 --rc genhtml_branch_coverage=1 00:30:38.581 --rc genhtml_function_coverage=1 00:30:38.581 --rc genhtml_legend=1 00:30:38.581 --rc geninfo_all_blocks=1 00:30:38.581 --rc geninfo_unexecuted_blocks=1 00:30:38.581 00:30:38.581 ' 00:30:38.581 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:30:38.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:38.581 --rc genhtml_branch_coverage=1 00:30:38.581 --rc genhtml_function_coverage=1 00:30:38.581 --rc genhtml_legend=1 00:30:38.581 --rc geninfo_all_blocks=1 00:30:38.581 --rc geninfo_unexecuted_blocks=1 00:30:38.581 00:30:38.581 ' 00:30:38.581 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:30:38.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:38.581 --rc genhtml_branch_coverage=1 00:30:38.581 --rc genhtml_function_coverage=1 00:30:38.581 --rc genhtml_legend=1 00:30:38.581 --rc geninfo_all_blocks=1 00:30:38.581 --rc geninfo_unexecuted_blocks=1 00:30:38.581 00:30:38.581 ' 00:30:38.581 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:30:38.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:38.581 --rc genhtml_branch_coverage=1 00:30:38.581 --rc genhtml_function_coverage=1 00:30:38.581 --rc genhtml_legend=1 00:30:38.581 --rc geninfo_all_blocks=1 00:30:38.581 --rc 
geninfo_unexecuted_blocks=1 00:30:38.581 00:30:38.581 ' 00:30:38.581 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:38.581 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:30:38.581 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:38.581 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:38.581 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:38.581 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:38.581 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:38.581 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:38.581 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:38.581 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:38.581 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:38.581 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:38.581 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:30:38.581 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:30:38.581 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:38.581 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:38.581 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:38.581 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:38.581 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:38.581 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:30:38.581 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:38.581 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:38.581 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:38.581 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.581 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.581 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.581 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:30:38.582 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.582 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:30:38.582 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:38.582 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:38.582 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:38.582 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:38.582 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:38.582 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:30:38.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:38.582 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:38.582 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:38.582 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:38.582 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:30:38.582 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:38.582 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:38.582 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:38.582 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:38.582 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:38.582 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:38.582 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:38.582 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:38.582 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:38.582 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:38.582 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:30:38.582 18:41:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:30:41.177 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:41.177 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:30:41.177 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:41.177 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:41.177 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:41.177 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:41.177 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:41.177 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:30:41.177 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:41.177 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:30:41.177 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:30:41.177 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:30:41.177 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:30:41.177 18:41:09 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:30:41.177 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:30:41.177 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:41.177 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:41.177 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:41.177 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:41.177 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:41.177 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:41.177 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:41.177 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:41.177 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:41.177 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:41.177 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:41.177 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:41.177 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:41.177 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:41.177 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:41.177 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:41.177 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:41.177 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:41.177 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:41.177 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:30:41.177 Found 0000:84:00.0 (0x8086 - 0x159b) 00:30:41.177 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:41.177 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:41.177 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:41.177 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:41.177 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:41.177 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:41.177 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:30:41.177 Found 0000:84:00.1 (0x8086 - 0x159b) 00:30:41.177 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:41.177 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:41.177 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:41.177 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:41.177 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:41.177 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:41.177 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:41.177 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:41.177 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:41.177 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:41.177 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:41.177 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:41.177 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:41.177 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:41.178 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:41.178 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:30:41.178 Found net devices under 0000:84:00.0: cvl_0_0 00:30:41.178 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:41.178 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:41.178 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:41.178 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:41.178 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:41.178 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:41.178 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:41.178 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:41.178 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:30:41.178 Found net devices under 0000:84:00.1: cvl_0_1 00:30:41.178 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 
-- # net_devs+=("${pci_net_devs[@]}") 00:30:41.178 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:41.178 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # is_hw=yes 00:30:41.178 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:41.178 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:41.178 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:41.178 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:41.178 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:41.178 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:41.178 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:41.178 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:41.178 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:41.178 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:41.178 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:41.178 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:41.178 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:41.178 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:41.178 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:41.178 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:41.178 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:41.178 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:41.178 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:41.178 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:41.178 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:41.178 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:41.178 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:41.437 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:41.437 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:41.437 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:41.437 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:41.437 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.275 ms 00:30:41.437 00:30:41.438 --- 10.0.0.2 ping statistics --- 00:30:41.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:41.438 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:30:41.438 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:41.438 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:41.438 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:30:41.438 00:30:41.438 --- 10.0.0.1 ping statistics --- 00:30:41.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:41.438 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:30:41.438 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:41.438 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # return 0 00:30:41.438 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:41.438 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:41.438 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:41.438 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:41.438 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:41.438 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:41.438 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:41.438 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:30:41.438 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:30:41.438 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@767 -- # local ip 00:30:41.438 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # ip_candidates=() 00:30:41.438 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # local -A ip_candidates 00:30:41.438 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:41.438 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:41.438 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:30:41.438 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:41.438 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:30:41.438 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:30:41.438 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:30:41.438 18:41:09 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:30:41.438 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:30:41.438 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:30:41.438 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:30:41.438 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:41.438 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:41.438 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:30:41.438 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # local block nvme 00:30:41.438 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # [[ ! -e /sys/module/nvmet ]] 00:30:41.438 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # modprobe nvmet 00:30:41.438 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:30:41.438 18:41:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:30:43.349 Waiting for block devices as requested 00:30:43.349 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:30:43.349 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:30:43.349 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:30:43.349 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:30:43.609 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:30:43.609 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:30:43.609 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:30:43.869 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:30:43.869 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:30:43.869 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:30:44.129 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:30:44.129 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:30:44.129 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:30:44.389 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:30:44.389 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:30:44.389 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:30:44.389 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:30:44.649 18:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:30:44.649 18:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:30:44.649 18:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:30:44.649 18:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:30:44.649 18:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:30:44.649 18:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 
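Annotation: configure_kernel_target builds a Linux kernel nvmet target through configfs; the per-command trace that follows (the mkdir/echo/ln -s calls under /sys/kernel/config/nvmet) amounts to the sequence sketched below. xtrace does not print the echo redirect targets, so the attribute file names here are the standard nvmet configfs ones and should be read as an assumption; the NQN, block device, address and port are the ones used in this run (the attr_model write is corroborated by "Model Number: SPDK-nqn.2016-06.io.spdk:testnqn" in the identify output further down).

  modprobe nvmet                        # the tcp transport module (nvmet_tcp) must also be present; its loading is not visible in this excerpt
  sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  ns=$sub/namespaces/1
  port=/sys/kernel/config/nvmet/ports/1

  mkdir -p "$ns" "$port"
  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$sub/attr_model"          # model string reported by identify
  echo 1                                > "$sub/attr_allow_any_host" # assumption: host ACL left open
  echo /dev/nvme0n1                     > "$ns/device_path"          # block device selected by the /sys/block/nvme* scan in this trace
  echo 1                                > "$ns/enable"
  echo 10.0.0.1                         > "$port/addr_traddr"
  echo tcp                              > "$port/addr_trtype"
  echo 4420                             > "$port/addr_trsvcid"
  echo ipv4                             > "$port/addr_adrfam"
  ln -s "$sub" "$port/subsystems/"      # expose the subsystem on the port

With that in place, the "nvme discover ... -a 10.0.0.1 -t tcp -s 4420" call below is expected to return two records, the discovery subsystem and nqn.2016-06.io.spdk:testnqn, exactly as captured in this log.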
00:30:44.649 18:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:30:44.649 18:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:30:44.649 18:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:30:44.649 No valid GPT data, bailing 00:30:44.649 18:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:30:44.649 18:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:30:44.649 18:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:30:44.649 18:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:30:44.649 18:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:30:44.649 18:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:44.649 18:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:44.649 18:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:30:44.910 18:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:30:44.910 18:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo 1 00:30:44.910 18:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:30:44.910 18:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:30:44.910 18:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:30:44.910 18:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # echo tcp 00:30:44.910 18:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 4420 00:30:44.910 18:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo ipv4 00:30:44.910 18:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:30:44.910 18:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420 00:30:44.910 00:30:44.910 Discovery Log Number of Records 2, Generation counter 2 00:30:44.910 =====Discovery Log Entry 0====== 00:30:44.910 trtype: tcp 00:30:44.910 adrfam: ipv4 00:30:44.910 subtype: current discovery subsystem 00:30:44.910 treq: not specified, sq flow control disable supported 00:30:44.910 portid: 1 00:30:44.910 trsvcid: 4420 00:30:44.910 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:30:44.910 traddr: 10.0.0.1 00:30:44.910 eflags: none 00:30:44.910 sectype: none 00:30:44.910 =====Discovery Log Entry 1====== 00:30:44.910 trtype: tcp 00:30:44.910 adrfam: ipv4 00:30:44.910 subtype: nvme subsystem 00:30:44.910 treq: not specified, sq flow control disable 
supported 00:30:44.910 portid: 1 00:30:44.910 trsvcid: 4420 00:30:44.910 subnqn: nqn.2016-06.io.spdk:testnqn 00:30:44.910 traddr: 10.0.0.1 00:30:44.910 eflags: none 00:30:44.910 sectype: none 00:30:44.910 18:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:30:44.910 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:30:44.910 ===================================================== 00:30:44.910 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:30:44.910 ===================================================== 00:30:44.910 Controller Capabilities/Features 00:30:44.910 ================================ 00:30:44.910 Vendor ID: 0000 00:30:44.910 Subsystem Vendor ID: 0000 00:30:44.910 Serial Number: 5f4a53c4fd1ad4f722c7 00:30:44.910 Model Number: Linux 00:30:44.910 Firmware Version: 6.8.9-20 00:30:44.910 Recommended Arb Burst: 0 00:30:44.910 IEEE OUI Identifier: 00 00 00 00:30:44.910 Multi-path I/O 00:30:44.910 May have multiple subsystem ports: No 00:30:44.910 May have multiple controllers: No 00:30:44.910 Associated with SR-IOV VF: No 00:30:44.910 Max Data Transfer Size: Unlimited 00:30:44.910 Max Number of Namespaces: 0 00:30:44.910 Max Number of I/O Queues: 1024 00:30:44.910 NVMe Specification Version (VS): 1.3 00:30:44.910 NVMe Specification Version (Identify): 1.3 00:30:44.910 Maximum Queue Entries: 1024 00:30:44.910 Contiguous Queues Required: No 00:30:44.910 Arbitration Mechanisms Supported 00:30:44.910 Weighted Round Robin: Not Supported 00:30:44.910 Vendor Specific: Not Supported 00:30:44.910 Reset Timeout: 7500 ms 00:30:44.910 Doorbell Stride: 4 bytes 00:30:44.910 NVM Subsystem Reset: Not Supported 00:30:44.910 Command Sets Supported 00:30:44.910 NVM Command Set: Supported 00:30:44.910 Boot Partition: Not Supported 00:30:44.910 Memory Page Size Minimum: 4096 bytes 00:30:44.910 Memory Page Size Maximum: 4096 bytes 00:30:44.910 Persistent Memory Region: Not Supported 00:30:44.910 Optional Asynchronous Events Supported 00:30:44.910 Namespace Attribute Notices: Not Supported 00:30:44.910 Firmware Activation Notices: Not Supported 00:30:44.910 ANA Change Notices: Not Supported 00:30:44.910 PLE Aggregate Log Change Notices: Not Supported 00:30:44.910 LBA Status Info Alert Notices: Not Supported 00:30:44.910 EGE Aggregate Log Change Notices: Not Supported 00:30:44.910 Normal NVM Subsystem Shutdown event: Not Supported 00:30:44.910 Zone Descriptor Change Notices: Not Supported 00:30:44.910 Discovery Log Change Notices: Supported 00:30:44.910 Controller Attributes 00:30:44.910 128-bit Host Identifier: Not Supported 00:30:44.910 Non-Operational Permissive Mode: Not Supported 00:30:44.910 NVM Sets: Not Supported 00:30:44.910 Read Recovery Levels: Not Supported 00:30:44.910 Endurance Groups: Not Supported 00:30:44.910 Predictable Latency Mode: Not Supported 00:30:44.910 Traffic Based Keep ALive: Not Supported 00:30:44.910 Namespace Granularity: Not Supported 00:30:44.910 SQ Associations: Not Supported 00:30:44.910 UUID List: Not Supported 00:30:44.911 Multi-Domain Subsystem: Not Supported 00:30:44.911 Fixed Capacity Management: Not Supported 00:30:44.911 Variable Capacity Management: Not Supported 00:30:44.911 Delete Endurance Group: Not Supported 00:30:44.911 Delete NVM Set: Not Supported 00:30:44.911 Extended LBA Formats Supported: Not Supported 00:30:44.911 Flexible Data Placement 
Supported: Not Supported 00:30:44.911 00:30:44.911 Controller Memory Buffer Support 00:30:44.911 ================================ 00:30:44.911 Supported: No 00:30:44.911 00:30:44.911 Persistent Memory Region Support 00:30:44.911 ================================ 00:30:44.911 Supported: No 00:30:44.911 00:30:44.911 Admin Command Set Attributes 00:30:44.911 ============================ 00:30:44.911 Security Send/Receive: Not Supported 00:30:44.911 Format NVM: Not Supported 00:30:44.911 Firmware Activate/Download: Not Supported 00:30:44.911 Namespace Management: Not Supported 00:30:44.911 Device Self-Test: Not Supported 00:30:44.911 Directives: Not Supported 00:30:44.911 NVMe-MI: Not Supported 00:30:44.911 Virtualization Management: Not Supported 00:30:44.911 Doorbell Buffer Config: Not Supported 00:30:44.911 Get LBA Status Capability: Not Supported 00:30:44.911 Command & Feature Lockdown Capability: Not Supported 00:30:44.911 Abort Command Limit: 1 00:30:44.911 Async Event Request Limit: 1 00:30:44.911 Number of Firmware Slots: N/A 00:30:44.911 Firmware Slot 1 Read-Only: N/A 00:30:44.911 Firmware Activation Without Reset: N/A 00:30:44.911 Multiple Update Detection Support: N/A 00:30:44.911 Firmware Update Granularity: No Information Provided 00:30:44.911 Per-Namespace SMART Log: No 00:30:44.911 Asymmetric Namespace Access Log Page: Not Supported 00:30:44.911 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:30:44.911 Command Effects Log Page: Not Supported 00:30:44.911 Get Log Page Extended Data: Supported 00:30:44.911 Telemetry Log Pages: Not Supported 00:30:44.911 Persistent Event Log Pages: Not Supported 00:30:44.911 Supported Log Pages Log Page: May Support 00:30:44.911 Commands Supported & Effects Log Page: Not Supported 00:30:44.911 Feature Identifiers & Effects Log Page:May Support 00:30:44.911 NVMe-MI Commands & Effects Log Page: May Support 00:30:44.911 Data Area 4 for Telemetry Log: Not Supported 00:30:44.911 Error Log Page Entries Supported: 1 00:30:44.911 Keep Alive: Not Supported 00:30:44.911 00:30:44.911 NVM Command Set Attributes 00:30:44.911 ========================== 00:30:44.911 Submission Queue Entry Size 00:30:44.911 Max: 1 00:30:44.911 Min: 1 00:30:44.911 Completion Queue Entry Size 00:30:44.911 Max: 1 00:30:44.911 Min: 1 00:30:44.911 Number of Namespaces: 0 00:30:44.911 Compare Command: Not Supported 00:30:44.911 Write Uncorrectable Command: Not Supported 00:30:44.911 Dataset Management Command: Not Supported 00:30:44.911 Write Zeroes Command: Not Supported 00:30:44.911 Set Features Save Field: Not Supported 00:30:44.911 Reservations: Not Supported 00:30:44.911 Timestamp: Not Supported 00:30:44.911 Copy: Not Supported 00:30:44.911 Volatile Write Cache: Not Present 00:30:44.911 Atomic Write Unit (Normal): 1 00:30:44.911 Atomic Write Unit (PFail): 1 00:30:44.911 Atomic Compare & Write Unit: 1 00:30:44.911 Fused Compare & Write: Not Supported 00:30:44.911 Scatter-Gather List 00:30:44.911 SGL Command Set: Supported 00:30:44.911 SGL Keyed: Not Supported 00:30:44.911 SGL Bit Bucket Descriptor: Not Supported 00:30:44.911 SGL Metadata Pointer: Not Supported 00:30:44.911 Oversized SGL: Not Supported 00:30:44.911 SGL Metadata Address: Not Supported 00:30:44.911 SGL Offset: Supported 00:30:44.911 Transport SGL Data Block: Not Supported 00:30:44.911 Replay Protected Memory Block: Not Supported 00:30:44.911 00:30:44.911 Firmware Slot Information 00:30:44.911 ========================= 00:30:44.911 Active slot: 0 00:30:44.911 00:30:44.911 00:30:44.911 Error Log 00:30:44.911 
========= 00:30:44.911 00:30:44.911 Active Namespaces 00:30:44.911 ================= 00:30:44.911 Discovery Log Page 00:30:44.911 ================== 00:30:44.911 Generation Counter: 2 00:30:44.911 Number of Records: 2 00:30:44.911 Record Format: 0 00:30:44.911 00:30:44.911 Discovery Log Entry 0 00:30:44.911 ---------------------- 00:30:44.911 Transport Type: 3 (TCP) 00:30:44.911 Address Family: 1 (IPv4) 00:30:44.911 Subsystem Type: 3 (Current Discovery Subsystem) 00:30:44.911 Entry Flags: 00:30:44.911 Duplicate Returned Information: 0 00:30:44.911 Explicit Persistent Connection Support for Discovery: 0 00:30:44.911 Transport Requirements: 00:30:44.911 Secure Channel: Not Specified 00:30:44.911 Port ID: 1 (0x0001) 00:30:44.911 Controller ID: 65535 (0xffff) 00:30:44.911 Admin Max SQ Size: 32 00:30:44.911 Transport Service Identifier: 4420 00:30:44.911 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:30:44.911 Transport Address: 10.0.0.1 00:30:44.911 Discovery Log Entry 1 00:30:44.911 ---------------------- 00:30:44.911 Transport Type: 3 (TCP) 00:30:44.911 Address Family: 1 (IPv4) 00:30:44.911 Subsystem Type: 2 (NVM Subsystem) 00:30:44.911 Entry Flags: 00:30:44.911 Duplicate Returned Information: 0 00:30:44.911 Explicit Persistent Connection Support for Discovery: 0 00:30:44.911 Transport Requirements: 00:30:44.911 Secure Channel: Not Specified 00:30:44.911 Port ID: 1 (0x0001) 00:30:44.911 Controller ID: 65535 (0xffff) 00:30:44.911 Admin Max SQ Size: 32 00:30:44.911 Transport Service Identifier: 4420 00:30:44.911 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:30:44.911 Transport Address: 10.0.0.1 00:30:45.171 18:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:45.171 get_feature(0x01) failed 00:30:45.171 get_feature(0x02) failed 00:30:45.171 get_feature(0x04) failed 00:30:45.171 ===================================================== 00:30:45.171 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:45.171 ===================================================== 00:30:45.171 Controller Capabilities/Features 00:30:45.171 ================================ 00:30:45.171 Vendor ID: 0000 00:30:45.171 Subsystem Vendor ID: 0000 00:30:45.171 Serial Number: 442357d4012faf1ef07b 00:30:45.171 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:30:45.171 Firmware Version: 6.8.9-20 00:30:45.171 Recommended Arb Burst: 6 00:30:45.171 IEEE OUI Identifier: 00 00 00 00:30:45.171 Multi-path I/O 00:30:45.171 May have multiple subsystem ports: Yes 00:30:45.171 May have multiple controllers: Yes 00:30:45.171 Associated with SR-IOV VF: No 00:30:45.171 Max Data Transfer Size: Unlimited 00:30:45.171 Max Number of Namespaces: 1024 00:30:45.171 Max Number of I/O Queues: 128 00:30:45.171 NVMe Specification Version (VS): 1.3 00:30:45.171 NVMe Specification Version (Identify): 1.3 00:30:45.171 Maximum Queue Entries: 1024 00:30:45.171 Contiguous Queues Required: No 00:30:45.171 Arbitration Mechanisms Supported 00:30:45.171 Weighted Round Robin: Not Supported 00:30:45.171 Vendor Specific: Not Supported 00:30:45.171 Reset Timeout: 7500 ms 00:30:45.171 Doorbell Stride: 4 bytes 00:30:45.171 NVM Subsystem Reset: Not Supported 00:30:45.171 Command Sets Supported 00:30:45.171 NVM Command Set: Supported 00:30:45.171 Boot Partition: Not Supported 00:30:45.171 
Memory Page Size Minimum: 4096 bytes 00:30:45.171 Memory Page Size Maximum: 4096 bytes 00:30:45.171 Persistent Memory Region: Not Supported 00:30:45.171 Optional Asynchronous Events Supported 00:30:45.171 Namespace Attribute Notices: Supported 00:30:45.171 Firmware Activation Notices: Not Supported 00:30:45.171 ANA Change Notices: Supported 00:30:45.171 PLE Aggregate Log Change Notices: Not Supported 00:30:45.171 LBA Status Info Alert Notices: Not Supported 00:30:45.171 EGE Aggregate Log Change Notices: Not Supported 00:30:45.171 Normal NVM Subsystem Shutdown event: Not Supported 00:30:45.171 Zone Descriptor Change Notices: Not Supported 00:30:45.171 Discovery Log Change Notices: Not Supported 00:30:45.171 Controller Attributes 00:30:45.171 128-bit Host Identifier: Supported 00:30:45.171 Non-Operational Permissive Mode: Not Supported 00:30:45.171 NVM Sets: Not Supported 00:30:45.171 Read Recovery Levels: Not Supported 00:30:45.171 Endurance Groups: Not Supported 00:30:45.171 Predictable Latency Mode: Not Supported 00:30:45.171 Traffic Based Keep ALive: Supported 00:30:45.171 Namespace Granularity: Not Supported 00:30:45.171 SQ Associations: Not Supported 00:30:45.171 UUID List: Not Supported 00:30:45.172 Multi-Domain Subsystem: Not Supported 00:30:45.172 Fixed Capacity Management: Not Supported 00:30:45.172 Variable Capacity Management: Not Supported 00:30:45.172 Delete Endurance Group: Not Supported 00:30:45.172 Delete NVM Set: Not Supported 00:30:45.172 Extended LBA Formats Supported: Not Supported 00:30:45.172 Flexible Data Placement Supported: Not Supported 00:30:45.172 00:30:45.172 Controller Memory Buffer Support 00:30:45.172 ================================ 00:30:45.172 Supported: No 00:30:45.172 00:30:45.172 Persistent Memory Region Support 00:30:45.172 ================================ 00:30:45.172 Supported: No 00:30:45.172 00:30:45.172 Admin Command Set Attributes 00:30:45.172 ============================ 00:30:45.172 Security Send/Receive: Not Supported 00:30:45.172 Format NVM: Not Supported 00:30:45.172 Firmware Activate/Download: Not Supported 00:30:45.172 Namespace Management: Not Supported 00:30:45.172 Device Self-Test: Not Supported 00:30:45.172 Directives: Not Supported 00:30:45.172 NVMe-MI: Not Supported 00:30:45.172 Virtualization Management: Not Supported 00:30:45.172 Doorbell Buffer Config: Not Supported 00:30:45.172 Get LBA Status Capability: Not Supported 00:30:45.172 Command & Feature Lockdown Capability: Not Supported 00:30:45.172 Abort Command Limit: 4 00:30:45.172 Async Event Request Limit: 4 00:30:45.172 Number of Firmware Slots: N/A 00:30:45.172 Firmware Slot 1 Read-Only: N/A 00:30:45.172 Firmware Activation Without Reset: N/A 00:30:45.172 Multiple Update Detection Support: N/A 00:30:45.172 Firmware Update Granularity: No Information Provided 00:30:45.172 Per-Namespace SMART Log: Yes 00:30:45.172 Asymmetric Namespace Access Log Page: Supported 00:30:45.172 ANA Transition Time : 10 sec 00:30:45.172 00:30:45.172 Asymmetric Namespace Access Capabilities 00:30:45.172 ANA Optimized State : Supported 00:30:45.172 ANA Non-Optimized State : Supported 00:30:45.172 ANA Inaccessible State : Supported 00:30:45.172 ANA Persistent Loss State : Supported 00:30:45.172 ANA Change State : Supported 00:30:45.172 ANAGRPID is not changed : No 00:30:45.172 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:30:45.172 00:30:45.172 ANA Group Identifier Maximum : 128 00:30:45.172 Number of ANA Group Identifiers : 128 00:30:45.172 Max Number of Allowed Namespaces : 1024 00:30:45.172 
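Note: the identify output above shows the kernel target advertising full Asymmetric Namespace Access support (all five ANA states, a 10 s transition time, up to 128 ANA groups). A quick way to gate multipath-oriented cases on that capability is to grep the captured identify text; this is a minimal sketch reusing the same spdk_nvme_identify invocation seen in this run, not a check the harness itself performs:

    # Sketch: re-run identify against the kernel target and test for ANA support.
    # The trid string matches the one used above; the grep pattern is copied from
    # the tool's own output format.
    ident=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify
    out=$("$ident" -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn')
    if grep -q 'Asymmetric Namespace Access Log Page: Supported' <<< "$out"; then
        echo 'target advertises ANA'
    fi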
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:30:45.172 Command Effects Log Page: Supported 00:30:45.172 Get Log Page Extended Data: Supported 00:30:45.172 Telemetry Log Pages: Not Supported 00:30:45.172 Persistent Event Log Pages: Not Supported 00:30:45.172 Supported Log Pages Log Page: May Support 00:30:45.172 Commands Supported & Effects Log Page: Not Supported 00:30:45.172 Feature Identifiers & Effects Log Page:May Support 00:30:45.172 NVMe-MI Commands & Effects Log Page: May Support 00:30:45.172 Data Area 4 for Telemetry Log: Not Supported 00:30:45.172 Error Log Page Entries Supported: 128 00:30:45.172 Keep Alive: Supported 00:30:45.172 Keep Alive Granularity: 1000 ms 00:30:45.172 00:30:45.172 NVM Command Set Attributes 00:30:45.172 ========================== 00:30:45.172 Submission Queue Entry Size 00:30:45.172 Max: 64 00:30:45.172 Min: 64 00:30:45.172 Completion Queue Entry Size 00:30:45.172 Max: 16 00:30:45.172 Min: 16 00:30:45.172 Number of Namespaces: 1024 00:30:45.172 Compare Command: Not Supported 00:30:45.172 Write Uncorrectable Command: Not Supported 00:30:45.172 Dataset Management Command: Supported 00:30:45.172 Write Zeroes Command: Supported 00:30:45.172 Set Features Save Field: Not Supported 00:30:45.172 Reservations: Not Supported 00:30:45.172 Timestamp: Not Supported 00:30:45.172 Copy: Not Supported 00:30:45.172 Volatile Write Cache: Present 00:30:45.172 Atomic Write Unit (Normal): 1 00:30:45.172 Atomic Write Unit (PFail): 1 00:30:45.172 Atomic Compare & Write Unit: 1 00:30:45.172 Fused Compare & Write: Not Supported 00:30:45.172 Scatter-Gather List 00:30:45.172 SGL Command Set: Supported 00:30:45.172 SGL Keyed: Not Supported 00:30:45.172 SGL Bit Bucket Descriptor: Not Supported 00:30:45.172 SGL Metadata Pointer: Not Supported 00:30:45.172 Oversized SGL: Not Supported 00:30:45.172 SGL Metadata Address: Not Supported 00:30:45.172 SGL Offset: Supported 00:30:45.172 Transport SGL Data Block: Not Supported 00:30:45.172 Replay Protected Memory Block: Not Supported 00:30:45.172 00:30:45.172 Firmware Slot Information 00:30:45.172 ========================= 00:30:45.172 Active slot: 0 00:30:45.172 00:30:45.172 Asymmetric Namespace Access 00:30:45.172 =========================== 00:30:45.172 Change Count : 0 00:30:45.172 Number of ANA Group Descriptors : 1 00:30:45.172 ANA Group Descriptor : 0 00:30:45.172 ANA Group ID : 1 00:30:45.172 Number of NSID Values : 1 00:30:45.172 Change Count : 0 00:30:45.172 ANA State : 1 00:30:45.172 Namespace Identifier : 1 00:30:45.172 00:30:45.172 Commands Supported and Effects 00:30:45.172 ============================== 00:30:45.172 Admin Commands 00:30:45.172 -------------- 00:30:45.172 Get Log Page (02h): Supported 00:30:45.172 Identify (06h): Supported 00:30:45.172 Abort (08h): Supported 00:30:45.172 Set Features (09h): Supported 00:30:45.172 Get Features (0Ah): Supported 00:30:45.172 Asynchronous Event Request (0Ch): Supported 00:30:45.172 Keep Alive (18h): Supported 00:30:45.172 I/O Commands 00:30:45.172 ------------ 00:30:45.172 Flush (00h): Supported 00:30:45.172 Write (01h): Supported LBA-Change 00:30:45.172 Read (02h): Supported 00:30:45.172 Write Zeroes (08h): Supported LBA-Change 00:30:45.172 Dataset Management (09h): Supported 00:30:45.172 00:30:45.172 Error Log 00:30:45.172 ========= 00:30:45.172 Entry: 0 00:30:45.172 Error Count: 0x3 00:30:45.172 Submission Queue Id: 0x0 00:30:45.172 Command Id: 0x5 00:30:45.172 Phase Bit: 0 00:30:45.172 Status Code: 0x2 00:30:45.172 Status Code Type: 0x0 00:30:45.172 Do Not Retry: 1 00:30:45.172 
Error Location: 0x28 00:30:45.172 LBA: 0x0 00:30:45.172 Namespace: 0x0 00:30:45.172 Vendor Log Page: 0x0 00:30:45.172 ----------- 00:30:45.172 Entry: 1 00:30:45.172 Error Count: 0x2 00:30:45.172 Submission Queue Id: 0x0 00:30:45.172 Command Id: 0x5 00:30:45.172 Phase Bit: 0 00:30:45.172 Status Code: 0x2 00:30:45.172 Status Code Type: 0x0 00:30:45.172 Do Not Retry: 1 00:30:45.172 Error Location: 0x28 00:30:45.172 LBA: 0x0 00:30:45.172 Namespace: 0x0 00:30:45.172 Vendor Log Page: 0x0 00:30:45.172 ----------- 00:30:45.172 Entry: 2 00:30:45.172 Error Count: 0x1 00:30:45.172 Submission Queue Id: 0x0 00:30:45.172 Command Id: 0x4 00:30:45.172 Phase Bit: 0 00:30:45.172 Status Code: 0x2 00:30:45.172 Status Code Type: 0x0 00:30:45.172 Do Not Retry: 1 00:30:45.172 Error Location: 0x28 00:30:45.172 LBA: 0x0 00:30:45.172 Namespace: 0x0 00:30:45.172 Vendor Log Page: 0x0 00:30:45.172 00:30:45.172 Number of Queues 00:30:45.172 ================ 00:30:45.172 Number of I/O Submission Queues: 128 00:30:45.173 Number of I/O Completion Queues: 128 00:30:45.173 00:30:45.173 ZNS Specific Controller Data 00:30:45.173 ============================ 00:30:45.173 Zone Append Size Limit: 0 00:30:45.173 00:30:45.173 00:30:45.173 Active Namespaces 00:30:45.173 ================= 00:30:45.173 get_feature(0x05) failed 00:30:45.173 Namespace ID:1 00:30:45.173 Command Set Identifier: NVM (00h) 00:30:45.173 Deallocate: Supported 00:30:45.173 Deallocated/Unwritten Error: Not Supported 00:30:45.173 Deallocated Read Value: Unknown 00:30:45.173 Deallocate in Write Zeroes: Not Supported 00:30:45.173 Deallocated Guard Field: 0xFFFF 00:30:45.173 Flush: Supported 00:30:45.173 Reservation: Not Supported 00:30:45.173 Namespace Sharing Capabilities: Multiple Controllers 00:30:45.173 Size (in LBAs): 1953525168 (931GiB) 00:30:45.173 Capacity (in LBAs): 1953525168 (931GiB) 00:30:45.173 Utilization (in LBAs): 1953525168 (931GiB) 00:30:45.173 UUID: 0da3d14c-2ba2-4ee6-9bdb-1f8e02c3b41c 00:30:45.173 Thin Provisioning: Not Supported 00:30:45.173 Per-NS Atomic Units: Yes 00:30:45.173 Atomic Boundary Size (Normal): 0 00:30:45.173 Atomic Boundary Size (PFail): 0 00:30:45.173 Atomic Boundary Offset: 0 00:30:45.173 NGUID/EUI64 Never Reused: No 00:30:45.173 ANA group ID: 1 00:30:45.173 Namespace Write Protected: No 00:30:45.173 Number of LBA Formats: 1 00:30:45.173 Current LBA Format: LBA Format #00 00:30:45.173 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:45.173 00:30:45.173 18:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:30:45.173 18:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:45.173 18:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:30:45.173 18:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:45.173 18:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:30:45.173 18:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:45.173 18:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:45.173 rmmod nvme_tcp 00:30:45.173 rmmod nvme_fabrics 00:30:45.173 18:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:45.173 18:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:30:45.173 18:41:13 
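Note: the discovery log page earlier in this output advertises two records at 10.0.0.1:4420 over TCP, the discovery subsystem itself and nqn.2016-06.io.spdk:testnqn. For reference, the same records can be walked with stock nvme-cli from any host that can reach that address; these commands are an illustration only and were not executed in this run:

    # Query the kernel target's discovery service, then attach/detach the data subsystem.
    nvme discover   -t tcp -a 10.0.0.1 -s 4420
    nvme connect    -t tcp -a 10.0.0.1 -s 4420 -n nqn.2016-06.io.spdk:testnqn
    nvme disconnect -n nqn.2016-06.io.spdk:testnqn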
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:30:45.173 18:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:30:45.173 18:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:45.173 18:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:45.173 18:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:45.173 18:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:30:45.173 18:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-save 00:30:45.173 18:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:45.173 18:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-restore 00:30:45.173 18:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:45.173 18:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:45.173 18:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:45.173 18:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:45.173 18:41:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:47.715 18:41:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:47.715 18:41:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:30:47.715 18:41:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:30:47.715 18:41:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # echo 0 00:30:47.715 18:41:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:47.715 18:41:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:47.715 18:41:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:30:47.715 18:41:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:47.715 18:41:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:30:47.715 18:41:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:30:47.715 18:41:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:49.095 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:30:49.095 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:30:49.095 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:30:49.095 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:30:49.095 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:30:49.095 0000:00:04.2 
(8086 0e22): ioatdma -> vfio-pci 00:30:49.095 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:30:49.095 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:30:49.095 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:30:49.095 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:30:49.095 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:30:49.095 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:30:49.095 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:30:49.095 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:30:49.095 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:30:49.095 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:30:50.035 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:30:50.295 00:30:50.295 real 0m12.080s 00:30:50.295 user 0m2.742s 00:30:50.295 sys 0m5.270s 00:30:50.295 18:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:50.295 18:41:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:30:50.295 ************************************ 00:30:50.295 END TEST nvmf_identify_kernel_target 00:30:50.295 ************************************ 00:30:50.295 18:41:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:30:50.295 18:41:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:50.295 18:41:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:50.295 18:41:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:50.295 ************************************ 00:30:50.295 START TEST nvmf_auth_host 00:30:50.295 ************************************ 00:30:50.295 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:30:50.295 * Looking for test storage... 
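Note: before nvmf_auth_host begins, the previous test tears down the configfs-based kernel target (the clean_kernel_target calls above). Condensed, the order is: quiesce the namespace, unlink the subsystem from the port, remove the namespace, port and subsystem directories, then unload the target modules. The redirect target of the `echo 0` step is not captured by xtrace, so the enable-attribute path below is an assumption:

    # Sketch of the teardown order mirrored from the trace above.
    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    echo 0 > "$subsys/namespaces/1/enable"          # assumed target of the 'echo 0' seen in the trace
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
    rmdir "$subsys/namespaces/1"
    rmdir /sys/kernel/config/nvmet/ports/1
    rmdir "$subsys"
    modprobe -r nvmet_tcp nvmet                     # unload the kernel target modules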
00:30:50.295 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:50.295 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:30:50.295 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lcov --version 00:30:50.295 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:30:50.556 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:30:50.556 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:50.556 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:50.556 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:50.556 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:30:50.556 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:30:50.556 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:30:50.556 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:30:50.556 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:30:50.556 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:30:50.556 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:30:50.556 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:50.556 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:30:50.556 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:30:50.556 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:50.556 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:50.556 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:30:50.556 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:30:50.556 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:50.556 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:30:50.556 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:30:50.557 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:30:50.557 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:30:50.557 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:50.557 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:30:50.557 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:30:50.557 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:50.557 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:50.557 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:30:50.557 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:50.557 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:30:50.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:50.557 --rc genhtml_branch_coverage=1 00:30:50.557 --rc genhtml_function_coverage=1 00:30:50.557 --rc genhtml_legend=1 00:30:50.557 --rc geninfo_all_blocks=1 00:30:50.557 --rc geninfo_unexecuted_blocks=1 00:30:50.557 00:30:50.557 ' 00:30:50.557 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:30:50.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:50.557 --rc genhtml_branch_coverage=1 00:30:50.557 --rc genhtml_function_coverage=1 00:30:50.557 --rc genhtml_legend=1 00:30:50.557 --rc geninfo_all_blocks=1 00:30:50.557 --rc geninfo_unexecuted_blocks=1 00:30:50.557 00:30:50.557 ' 00:30:50.557 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:30:50.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:50.557 --rc genhtml_branch_coverage=1 00:30:50.557 --rc genhtml_function_coverage=1 00:30:50.557 --rc genhtml_legend=1 00:30:50.557 --rc geninfo_all_blocks=1 00:30:50.557 --rc geninfo_unexecuted_blocks=1 00:30:50.557 00:30:50.557 ' 00:30:50.557 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:30:50.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:50.557 --rc genhtml_branch_coverage=1 00:30:50.557 --rc genhtml_function_coverage=1 00:30:50.557 --rc genhtml_legend=1 00:30:50.557 --rc geninfo_all_blocks=1 00:30:50.557 --rc geninfo_unexecuted_blocks=1 00:30:50.557 00:30:50.557 ' 00:30:50.557 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:50.557 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:30:50.557 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:50.557 18:41:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:50.557 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:50.557 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:50.557 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:50.557 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:50.557 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:50.557 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:50.557 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:50.557 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:50.557 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:30:50.557 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:30:50.557 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:50.557 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:50.557 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:50.557 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:50.557 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:50.557 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:30:50.557 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:50.557 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:50.557 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:50.557 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.557 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.557 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.557 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:30:50.557 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.557 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:30:50.557 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:50.557 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:50.557 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:50.557 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:50.557 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:50.557 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:50.557 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:50.557 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:50.557 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:50.557 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:50.557 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:30:50.557 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:30:50.557 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:30:50.557 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:30:50.557 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:30:50.557 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:30:50.557 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:30:50.557 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:30:50.557 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:30:50.557 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:50.557 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:50.557 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:50.557 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:50.557 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:50.557 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:50.557 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:50.557 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:50.557 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:50.557 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:50.557 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:30:50.557 18:41:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:30:53.853 18:41:21 
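Note: the digests and dhgroups arrays defined in auth.sh above (sha256/sha384/sha512 and the five ffdhe groups) form the matrix the auth tests iterate over. How the harness walks that matrix is not shown at this point in the log; a minimal enumeration sketch of the combinations would be:

    # 3 digests x 5 DH groups = 15 authentication variants to exercise.
    for digest in sha256 sha384 sha512; do
        for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
            echo "auth case: digest=$digest dhgroup=$dhgroup"
        done
    done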
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:30:53.853 Found 0000:84:00.0 (0x8086 - 0x159b) 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:30:53.853 Found 0000:84:00.1 (0x8086 - 0x159b) 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:53.853 
18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:30:53.853 Found net devices under 0000:84:00.0: cvl_0_0 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:30:53.853 Found net devices under 0000:84:00.1: cvl_0_1 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # is_hw=yes 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:53.853 18:41:21 
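Note: the device walk above resolves the two e810 functions (0000:84:00.0 and 0000:84:00.1) to their net devices, cvl_0_0 and cvl_0_1, by globbing each function's net/ directory in sysfs. A standalone equivalent of that lookup, with the up/down check approximated via operstate (the harness's exact check is not shown verbatim here), is:

    # List the netdevs behind each e810 PCI function and report their link state.
    for pci in 0000:84:00.0 0000:84:00.1; do
        for dev in /sys/bus/pci/devices/"$pci"/net/*; do
            printf '%s -> %s (%s)\n' "$pci" "${dev##*/}" "$(cat "$dev/operstate")"
        done
    done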
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:53.853 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:53.853 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:53.853 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:30:53.853 00:30:53.853 --- 10.0.0.2 ping statistics --- 00:30:53.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:53.854 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:30:53.854 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:53.854 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:53.854 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:30:53.854 00:30:53.854 --- 10.0.0.1 ping statistics --- 00:30:53.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:53.854 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:30:53.854 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:53.854 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # return 0 00:30:53.854 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:53.854 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:53.854 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:53.854 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:53.854 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:53.854 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:53.854 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:53.854 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:30:53.854 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:53.854 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:53.854 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.854 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # nvmfpid=1317291 00:30:53.854 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:30:53.854 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # waitforlisten 1317291 00:30:53.854 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 1317291 ']' 00:30:53.854 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:53.854 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:53.854 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
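Note: nvmftestinit above builds a two-sided topology on one box: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target interface (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), a firewall rule admits NVMe/TCP on port 4420, and the pings confirm reachability in both directions before nvmf_tgt is launched inside the namespace. Condensed from the trace (the iptables comment option is dropped for brevity):

    ip netns add cvl_0_0_ns_spdk                                       # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # move the target NIC into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP from the initiator side
    ping -c 1 10.0.0.2                                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator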
00:30:53.854 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:53.854 18:41:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.114 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:54.114 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:30:54.114 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:54.114 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:54.114 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.114 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:54.115 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:30:54.115 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:30:54.115 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:30:54.115 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:54.115 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:30:54.115 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:30:54.115 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:30:54.115 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:30:54.115 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=30b0de6c6c2e15ea644c83824db03dff 00:30:54.115 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:30:54.115 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.3J0 00:30:54.115 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 30b0de6c6c2e15ea644c83824db03dff 0 00:30:54.115 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 30b0de6c6c2e15ea644c83824db03dff 0 00:30:54.115 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:30:54.115 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:30:54.115 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=30b0de6c6c2e15ea644c83824db03dff 00:30:54.115 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:30:54.115 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:30:54.115 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.3J0 00:30:54.115 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.3J0 00:30:54.115 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.3J0 00:30:54.115 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:30:54.115 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:30:54.115 18:41:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:54.115 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:30:54.115 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:30:54.115 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:30:54.115 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:30:54.115 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=fa26f4e8f7bc19456ed7866798d948577edf5c16502932b03072897a7239857e 00:30:54.115 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:30:54.115 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.HXx 00:30:54.115 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key fa26f4e8f7bc19456ed7866798d948577edf5c16502932b03072897a7239857e 3 00:30:54.115 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 fa26f4e8f7bc19456ed7866798d948577edf5c16502932b03072897a7239857e 3 00:30:54.115 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:30:54.115 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:30:54.115 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=fa26f4e8f7bc19456ed7866798d948577edf5c16502932b03072897a7239857e 00:30:54.115 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:30:54.115 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:30:54.115 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.HXx 00:30:54.115 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.HXx 00:30:54.115 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.HXx 00:30:54.115 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:30:54.115 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:30:54.115 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:54.115 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:30:54.115 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:30:54.115 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:30:54.115 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:30:54.115 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=501d5baf3e4253cfe8f703812f3c9259cd48a00dc0de2f09 00:30:54.115 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:30:54.115 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.Cr4 00:30:54.115 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 501d5baf3e4253cfe8f703812f3c9259cd48a00dc0de2f09 0 00:30:54.115 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 501d5baf3e4253cfe8f703812f3c9259cd48a00dc0de2f09 0 
00:30:54.115 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:30:54.115 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:30:54.115 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=501d5baf3e4253cfe8f703812f3c9259cd48a00dc0de2f09 00:30:54.115 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:30:54.115 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:30:54.376 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.Cr4 00:30:54.376 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.Cr4 00:30:54.376 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Cr4 00:30:54.376 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:30:54.376 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:30:54.376 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:54.376 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:30:54.376 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:30:54.376 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:30:54.376 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:30:54.376 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=6dc6d4c873744b2e31f186930aa487a80037511ffc7dc048 00:30:54.376 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:30:54.376 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.jSu 00:30:54.376 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 6dc6d4c873744b2e31f186930aa487a80037511ffc7dc048 2 00:30:54.376 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 6dc6d4c873744b2e31f186930aa487a80037511ffc7dc048 2 00:30:54.376 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:30:54.377 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:30:54.377 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=6dc6d4c873744b2e31f186930aa487a80037511ffc7dc048 00:30:54.377 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:30:54.377 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:30:54.377 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.jSu 00:30:54.377 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.jSu 00:30:54.377 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.jSu 00:30:54.377 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:30:54.377 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:30:54.377 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:54.377 18:41:22 
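Note: each gen_dhchap_key call above follows the same recipe: read the requested number of random hex digits from /dev/urandom, stage a mode-0600 key file under /tmp, and wrap the hex in the DHHC-1 key format with the chosen digest index. Only the first and last steps are visible in the trace; the DHHC-1 encoding itself is produced by an inline python snippet that xtrace does not capture, so it is left out of this sketch:

    # Visible half of gen_dhchap_key (example: a 32-hex-digit key with the 'null' digest).
    len=32
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # 16 random bytes -> 32 hex digits
    file=$(mktemp -t spdk.key-null.XXX)              # e.g. /tmp/spdk.key-null.3J0 above
    # ...inline python writes the DHHC-1-formatted key into "$file" (encoding not captured by the trace)...
    chmod 0600 "$file"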
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:30:54.377 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:30:54.377 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:30:54.377 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:30:54.377 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=cf012bc3f91020ca7a3c6ff479b881a2 00:30:54.377 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:30:54.377 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.NlM 00:30:54.377 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key cf012bc3f91020ca7a3c6ff479b881a2 1 00:30:54.377 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 cf012bc3f91020ca7a3c6ff479b881a2 1 00:30:54.377 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:30:54.377 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:30:54.377 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=cf012bc3f91020ca7a3c6ff479b881a2 00:30:54.377 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:30:54.377 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:30:54.377 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.NlM 00:30:54.377 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.NlM 00:30:54.377 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.NlM 00:30:54.377 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:30:54.377 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:30:54.377 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:54.377 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:30:54.377 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:30:54.377 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:30:54.377 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:30:54.377 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=6182c21f90f19fba75e18eee6d976bac 00:30:54.377 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:30:54.377 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.j1b 00:30:54.377 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 6182c21f90f19fba75e18eee6d976bac 1 00:30:54.377 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 6182c21f90f19fba75e18eee6d976bac 1 00:30:54.377 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:30:54.377 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:30:54.377 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # 
key=6182c21f90f19fba75e18eee6d976bac 00:30:54.377 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:30:54.377 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:30:54.637 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.j1b 00:30:54.637 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.j1b 00:30:54.637 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.j1b 00:30:54.637 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:30:54.637 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:30:54.637 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:54.637 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:30:54.637 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:30:54.637 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:30:54.637 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:30:54.637 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=f1074968f086923fe0b2783f91b4b2d4610ae7bde80cd4f4 00:30:54.637 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:30:54.637 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.TL5 00:30:54.637 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key f1074968f086923fe0b2783f91b4b2d4610ae7bde80cd4f4 2 00:30:54.637 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 f1074968f086923fe0b2783f91b4b2d4610ae7bde80cd4f4 2 00:30:54.637 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:30:54.637 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:30:54.637 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=f1074968f086923fe0b2783f91b4b2d4610ae7bde80cd4f4 00:30:54.637 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:30:54.637 18:41:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:30:54.637 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.TL5 00:30:54.637 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.TL5 00:30:54.637 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.TL5 00:30:54.637 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:30:54.637 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:30:54.637 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:54.637 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:30:54.637 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:30:54.637 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:30:54.638 18:41:23 
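The gen_dhchap_key / format_dhchap_key traces above follow one fixed pattern: draw N random bytes with xxd -p from /dev/urandom, wrap the hex string in a DHHC-1:<digest-index>: envelope via an inline python step, write it to a mktemp file under /tmp and chmod it 0600. A minimal stand-alone sketch of that flow is below; the encoding step is an assumption (the trace only shows "python -"), based on the usual DHHC-1 secret convention of base64-encoding the secret plus its little-endian CRC-32, so treat it as illustrative rather than a copy of nvmf/common.sh.

  # sketch: produce one DHHC-1 secret (here 24 random bytes, digest index 2 = sha384)
  key_hex=$(xxd -p -c0 -l 24 /dev/urandom)
  key_file=$(mktemp -t spdk.key-sha384.XXX)
  # assumption: payload is base64(secret || CRC-32(secret), little-endian)
  python3 -c 'import sys,base64,zlib; s=bytes.fromhex(sys.argv[1]); print("DHHC-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(s+zlib.crc32(s).to_bytes(4,"little")).decode()))' "$key_hex" 2 > "$key_file"
  chmod 0600 "$key_file"
  echo "$key_file"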
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:30:54.638 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=1beb3108f7ef91935575cfe13125ad42 00:30:54.638 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:30:54.638 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.xiW 00:30:54.638 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 1beb3108f7ef91935575cfe13125ad42 0 00:30:54.638 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 1beb3108f7ef91935575cfe13125ad42 0 00:30:54.638 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:30:54.638 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:30:54.638 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=1beb3108f7ef91935575cfe13125ad42 00:30:54.638 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:30:54.638 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:30:54.896 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.xiW 00:30:54.896 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.xiW 00:30:54.896 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.xiW 00:30:54.896 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:30:54.896 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:30:54.896 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:54.896 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:30:54.896 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:30:54.896 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:30:54.896 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:30:54.896 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=4775ba62d832215a5356501be30becabab16463b9c2de4ddbf4c2d7572892fb2 00:30:54.896 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:30:54.896 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.PlX 00:30:54.896 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 4775ba62d832215a5356501be30becabab16463b9c2de4ddbf4c2d7572892fb2 3 00:30:54.896 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 4775ba62d832215a5356501be30becabab16463b9c2de4ddbf4c2d7572892fb2 3 00:30:54.896 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:30:54.896 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:30:54.896 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=4775ba62d832215a5356501be30becabab16463b9c2de4ddbf4c2d7572892fb2 00:30:54.896 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:30:54.896 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@731 -- # python - 00:30:54.896 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.PlX 00:30:54.896 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.PlX 00:30:54.896 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.PlX 00:30:54.896 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:30:54.896 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1317291 00:30:54.896 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 1317291 ']' 00:30:54.896 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:54.896 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:54.896 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:54.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:54.896 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:54.896 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.157 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:55.157 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:30:55.157 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:30:55.157 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.3J0 00:30:55.157 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.157 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.417 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.417 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.HXx ]] 00:30:55.417 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.HXx 00:30:55.417 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.417 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.417 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.417 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:30:55.417 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Cr4 00:30:55.417 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.417 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.417 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.417 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.jSu ]] 00:30:55.417 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.jSu 00:30:55.417 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.417 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.417 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.417 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:30:55.417 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.NlM 00:30:55.417 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.417 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.417 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.417 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.j1b ]] 00:30:55.417 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.j1b 00:30:55.417 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.417 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.417 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.417 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:30:55.417 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.TL5 00:30:55.417 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.417 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.417 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.417 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.xiW ]] 00:30:55.417 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.xiW 00:30:55.417 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.417 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.417 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.417 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:30:55.417 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.PlX 00:30:55.417 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.417 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.417 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.417 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:30:55.417 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:30:55.417 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:30:55.417 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:30:55.417 18:41:23 
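Once the temp files exist, the host/auth.sh@81-82 entries above register each of them with the running SPDK application (listening on /var/tmp/spdk.sock, per the waitforlisten lines) as key0..key4 and ckey0..ckey3. rpc_cmd is the autotest wrapper around scripts/rpc.py, so issued by hand the equivalent calls would look roughly like the sketch below (file names taken from the trace; key0/ckey0 point at files created earlier in the log):

  # sketch: load host and controller secrets into the SPDK keyring
  ./scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key key1  /tmp/spdk.key-null.Cr4
  ./scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.jSu
  ./scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key key2  /tmp/spdk.key-sha256.NlM
  ./scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.j1b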
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:30:55.417 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:30:55.417 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:55.417 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:55.417 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:30:55.417 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:55.417 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:30:55.417 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:30:55.417 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:30:55.417 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:30:55.417 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:30:55.417 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:30:55.417 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:30:55.417 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:30:55.417 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:30:55.417 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # local block nvme 00:30:55.417 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # [[ ! 
-e /sys/module/nvmet ]] 00:30:55.417 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # modprobe nvmet 00:30:55.417 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:30:55.417 18:41:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:30:56.799 Waiting for block devices as requested 00:30:57.060 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:30:57.060 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:30:57.319 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:30:57.319 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:30:57.319 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:30:57.580 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:30:57.580 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:30:57.580 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:30:57.840 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:30:57.840 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:30:58.100 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:30:58.100 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:30:58.100 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:30:58.100 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:30:58.360 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:30:58.360 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:30:58.360 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:30:58.929 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:30:58.929 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:30:58.929 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:30:58.929 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:30:58.929 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:30:58.929 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:30:58.929 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:30:58.929 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:30:58.929 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:30:58.929 No valid GPT data, bailing 00:30:58.929 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:30:58.929 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:30:58.929 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:30:58.929 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:30:58.929 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:30:58.929 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:30:58.929 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:30:58.929 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:30:58.929 18:41:27 
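configure_kernel_target, traced above, builds a kernel-mode NVMe-oF target purely through configfs: modprobe nvmet, the three mkdir calls at the end of this chunk, and then the echo / ln -s entries that follow, which wire /dev/nvme0n1 into namespace 1 and expose it on 10.0.0.1:4420 over TCP. Condensed, the sequence is roughly the sketch below; the attribute file names are the standard kernel nvmet ones and are an assumption here, since xtrace does not show the redirection targets (the SPDK-nqn… echo, most likely a serial/model string, is left out):

  # sketch of the configfs target setup seen in the trace
  modprobe nvmet
  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  mkdir -p "$subsys/namespaces/1" /sys/kernel/config/nvmet/ports/1
  echo 1            > "$subsys/attr_allow_any_host"          # assumed target of the "echo 1"
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > /sys/kernel/config/nvmet/ports/1/addr_traddr
  echo tcp          > /sys/kernel/config/nvmet/ports/1/addr_trtype
  echo 4420         > /sys/kernel/config/nvmet/ports/1/addr_trsvcid
  echo ipv4         > /sys/kernel/config/nvmet/ports/1/addr_adrfam
  ln -s "$subsys" /sys/kernel/config/nvmet/ports/1/subsystems/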
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:30:58.929 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo 1 00:30:58.929 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:30:58.929 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:30:58.929 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:30:58.929 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # echo tcp 00:30:58.929 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 4420 00:30:58.929 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo ipv4 00:30:58.929 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:30:59.189 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420 00:30:59.189 00:30:59.189 Discovery Log Number of Records 2, Generation counter 2 00:30:59.189 =====Discovery Log Entry 0====== 00:30:59.190 trtype: tcp 00:30:59.190 adrfam: ipv4 00:30:59.190 subtype: current discovery subsystem 00:30:59.190 treq: not specified, sq flow control disable supported 00:30:59.190 portid: 1 00:30:59.190 trsvcid: 4420 00:30:59.190 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:30:59.190 traddr: 10.0.0.1 00:30:59.190 eflags: none 00:30:59.190 sectype: none 00:30:59.190 =====Discovery Log Entry 1====== 00:30:59.190 trtype: tcp 00:30:59.190 adrfam: ipv4 00:30:59.190 subtype: nvme subsystem 00:30:59.190 treq: not specified, sq flow control disable supported 00:30:59.190 portid: 1 00:30:59.190 trsvcid: 4420 00:30:59.190 subnqn: nqn.2024-02.io.spdk:cnode0 00:30:59.190 traddr: 10.0.0.1 00:30:59.190 eflags: none 00:30:59.190 sectype: none 00:30:59.190 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:30:59.190 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:30:59.190 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:30:59.190 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:30:59.190 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:59.190 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:59.190 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:59.190 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:59.190 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTAxZDViYWYzZTQyNTNjZmU4ZjcwMzgxMmYzYzkyNTljZDQ4YTAwZGMwZGUyZjA5dM0fcQ==: 00:30:59.190 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmRjNmQ0Yzg3Mzc0NGIyZTMxZjE4NjkzMGFhNDg3YTgwMDM3NTExZmZjN2RjMDQ49rPzwQ==: 00:30:59.190 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:59.190 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:30:59.190 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTAxZDViYWYzZTQyNTNjZmU4ZjcwMzgxMmYzYzkyNTljZDQ4YTAwZGMwZGUyZjA5dM0fcQ==: 00:30:59.190 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmRjNmQ0Yzg3Mzc0NGIyZTMxZjE4NjkzMGFhNDg3YTgwMDM3NTExZmZjN2RjMDQ49rPzwQ==: ]] 00:30:59.190 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmRjNmQ0Yzg3Mzc0NGIyZTMxZjE4NjkzMGFhNDg3YTgwMDM3NTExZmZjN2RjMDQ49rPzwQ==: 00:30:59.190 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:30:59.190 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:30:59.190 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:30:59.190 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:30:59.190 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:30:59.190 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:59.190 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:30:59.190 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:30:59.190 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:59.190 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:59.190 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:30:59.190 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.190 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.190 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.190 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:59.190 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:30:59.190 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:30:59.190 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:30:59.190 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:59.190 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:59.190 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:30:59.190 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:59.190 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:30:59.190 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:30:59.191 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:30:59.191 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:59.191 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.191 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.452 nvme0n1 00:30:59.452 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.452 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:59.453 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:59.453 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.453 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.453 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.453 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:59.453 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:59.453 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.453 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.453 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.453 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:30:59.453 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:59.453 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:59.453 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:30:59.453 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:59.453 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:59.453 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:59.453 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:59.453 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzBiMGRlNmM2YzJlMTVlYTY0NGM4MzgyNGRiMDNkZmZ8l8r4: 00:30:59.453 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmEyNmY0ZThmN2JjMTk0NTZlZDc4NjY3OThkOTQ4NTc3ZWRmNWMxNjUwMjkzMmIwMzA3Mjg5N2E3MjM5ODU3ZQb9abA=: 00:30:59.453 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:59.453 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:59.453 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzBiMGRlNmM2YzJlMTVlYTY0NGM4MzgyNGRiMDNkZmZ8l8r4: 00:30:59.453 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmEyNmY0ZThmN2JjMTk0NTZlZDc4NjY3OThkOTQ4NTc3ZWRmNWMxNjUwMjkzMmIwMzA3Mjg5N2E3MjM5ODU3ZQb9abA=: ]] 00:30:59.453 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmEyNmY0ZThmN2JjMTk0NTZlZDc4NjY3OThkOTQ4NTc3ZWRmNWMxNjUwMjkzMmIwMzA3Mjg5N2E3MjM5ODU3ZQb9abA=: 00:30:59.453 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
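nvmet_auth_set_key (host/auth.sh@42-51, traced just above and repeatedly from here on) picks one digest/dhgroup/keyid combination and pushes 'hmac(sha256)', the DH group name and the DHHC-1 host/controller secrets into the kernel target for the host nqn.2024-02.io.spdk:host0 created earlier. The redirection targets are hidden by xtrace, but on a stock nvmet target those four echoes would land in the host's dhchap attributes, roughly as sketched below (attribute names assumed from the kernel nvmet in-band-auth configfs interface; the DHHC-1 values stand in for the full strings printed in the trace):

  # sketch of one nvmet_auth_set_key pass (sha256 / ffdhe2048 / keyid 0)
  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  key0='DHHC-1:00:...'     # the keys[0] secret echoed by host/auth.sh@50 above
  ckey0='DHHC-1:03:...'    # the matching ckey0 secret from host/auth.sh@51
  echo 'hmac(sha256)' > "$host/dhchap_hash"
  echo ffdhe2048      > "$host/dhchap_dhgroup"
  echo "$key0"        > "$host/dhchap_key"
  echo "$ckey0"       > "$host/dhchap_ctrlr_key"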
00:30:59.453 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:59.453 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:59.453 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:59.453 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:59.453 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:59.453 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:59.453 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.453 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.453 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.453 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:59.453 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:30:59.453 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:30:59.453 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:30:59.453 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:59.453 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:59.453 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:30:59.453 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:59.453 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:30:59.453 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:30:59.453 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:30:59.453 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:59.453 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.453 18:41:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.714 nvme0n1 00:30:59.714 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.714 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:59.714 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:59.714 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.714 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.714 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.714 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:59.714 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:59.714 18:41:28 
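The matching initiator-side half, connect_authenticate (host/auth.sh@55-65), is what produced the bdev_nvme_* RPCs just traced: restrict the allowed digests/dhgroups, attach a controller named nvme0 with the per-key DHCHAP options, confirm it shows up in bdev_nvme_get_controllers, then detach it. Issued by hand, one pass would look roughly like:

  # sketch of one connect_authenticate pass (sha256 / ffdhe2048 / keyid 0)
  ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
  ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'    # expect "nvme0"
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0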
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.714 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.714 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.714 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:59.714 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:30:59.714 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:59.714 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:59.714 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:59.714 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:59.714 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTAxZDViYWYzZTQyNTNjZmU4ZjcwMzgxMmYzYzkyNTljZDQ4YTAwZGMwZGUyZjA5dM0fcQ==: 00:30:59.714 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmRjNmQ0Yzg3Mzc0NGIyZTMxZjE4NjkzMGFhNDg3YTgwMDM3NTExZmZjN2RjMDQ49rPzwQ==: 00:30:59.714 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:59.714 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:59.714 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTAxZDViYWYzZTQyNTNjZmU4ZjcwMzgxMmYzYzkyNTljZDQ4YTAwZGMwZGUyZjA5dM0fcQ==: 00:30:59.714 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmRjNmQ0Yzg3Mzc0NGIyZTMxZjE4NjkzMGFhNDg3YTgwMDM3NTExZmZjN2RjMDQ49rPzwQ==: ]] 00:30:59.714 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmRjNmQ0Yzg3Mzc0NGIyZTMxZjE4NjkzMGFhNDg3YTgwMDM3NTExZmZjN2RjMDQ49rPzwQ==: 00:30:59.714 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:30:59.714 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:59.714 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:59.714 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:59.714 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:59.714 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:59.714 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:59.714 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.714 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.714 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.714 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:59.714 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:30:59.714 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:30:59.714 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:30:59.714 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:59.714 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:59.714 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:30:59.714 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:59.714 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:30:59.714 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:30:59.714 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:30:59.714 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:59.714 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.714 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.975 nvme0n1 00:30:59.975 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.975 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:59.975 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:59.975 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.975 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.975 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.975 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:59.975 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:59.975 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.975 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.975 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.975 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:59.975 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:30:59.975 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:59.975 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:59.975 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:59.975 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:59.975 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2YwMTJiYzNmOTEwMjBjYTdhM2M2ZmY0NzliODgxYTLVHRDB: 00:30:59.975 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjE4MmMyMWY5MGYxOWZiYTc1ZTE4ZWVlNmQ5NzZiYWOVTPQi: 00:30:59.975 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:59.975 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:59.975 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:Y2YwMTJiYzNmOTEwMjBjYTdhM2M2ZmY0NzliODgxYTLVHRDB: 00:30:59.975 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjE4MmMyMWY5MGYxOWZiYTc1ZTE4ZWVlNmQ5NzZiYWOVTPQi: ]] 00:30:59.975 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjE4MmMyMWY5MGYxOWZiYTc1ZTE4ZWVlNmQ5NzZiYWOVTPQi: 00:30:59.975 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:30:59.975 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:59.976 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:59.976 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:59.976 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:59.976 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:59.976 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:59.976 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.976 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:59.976 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.976 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:59.976 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:30:59.976 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:30:59.976 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:30:59.976 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:59.976 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:59.976 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:30:59.976 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:59.976 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:30:59.976 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:30:59.976 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:30:59.976 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:59.976 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.976 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:00.236 nvme0n1 00:31:00.236 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:00.236 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:00.236 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:00.236 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:31:00.236 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:00.236 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:00.236 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:00.236 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:00.236 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.236 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:00.236 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:00.236 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:00.236 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:31:00.236 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:00.236 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:00.236 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:00.236 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:00.236 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjEwNzQ5NjhmMDg2OTIzZmUwYjI3ODNmOTFiNGIyZDQ2MTBhZTdiZGU4MGNkNGY0jv9JLg==: 00:31:00.236 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWJlYjMxMDhmN2VmOTE5MzU1NzVjZmUxMzEyNWFkNDI5qb6Z: 00:31:00.236 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:00.236 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:00.236 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjEwNzQ5NjhmMDg2OTIzZmUwYjI3ODNmOTFiNGIyZDQ2MTBhZTdiZGU4MGNkNGY0jv9JLg==: 00:31:00.236 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWJlYjMxMDhmN2VmOTE5MzU1NzVjZmUxMzEyNWFkNDI5qb6Z: ]] 00:31:00.236 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWJlYjMxMDhmN2VmOTE5MzU1NzVjZmUxMzEyNWFkNDI5qb6Z: 00:31:00.236 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:31:00.236 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:00.236 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:00.236 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:00.236 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:00.236 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:00.236 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:00.236 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.236 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:00.236 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:00.236 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:31:00.236 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:00.236 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:00.236 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:00.236 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:00.236 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:00.236 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:00.236 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:00.236 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:00.236 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:00.237 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:00.237 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:00.237 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.237 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:00.497 nvme0n1 00:31:00.497 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:00.497 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:00.497 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:00.497 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.497 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:00.497 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:00.497 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:00.497 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:00.497 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.497 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:00.497 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:00.497 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:00.497 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:31:00.497 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:00.497 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:00.497 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:00.497 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:00.497 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NDc3NWJhNjJkODMyMjE1YTUzNTY1MDFiZTMwYmVjYWJhYjE2NDYzYjljMmRlNGRkYmY0YzJkNzU3Mjg5MmZiMuULAeA=: 00:31:00.497 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:00.497 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:00.497 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:00.497 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDc3NWJhNjJkODMyMjE1YTUzNTY1MDFiZTMwYmVjYWJhYjE2NDYzYjljMmRlNGRkYmY0YzJkNzU3Mjg5MmZiMuULAeA=: 00:31:00.497 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:00.497 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:31:00.497 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:00.497 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:00.497 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:00.497 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:00.497 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:00.497 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:00.497 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.497 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:00.497 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:00.497 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:00.497 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:00.497 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:00.497 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:00.497 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:00.497 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:00.497 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:00.497 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:00.497 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:00.497 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:00.497 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:00.497 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:00.497 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.497 18:41:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:00.757 nvme0n1 00:31:00.757 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:00.757 18:41:29 
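Note the keyid=4 pass just above: ckeys[4] was deliberately left empty when the keys were generated, so the [[ -z '' ]] guard skips the controller key and the attach is issued with --dhchap-key key4 only. That exercises unidirectional DH-HMAC-CHAP (the host proves itself to the target but does not challenge the controller back), whereas the keyid 0-3 passes always add --dhchap-ctrlr-key ckeyN for bidirectional authentication. By hand the unidirectional attach would be roughly:

  # unidirectional variant (keyid 4): no --dhchap-ctrlr-key
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4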
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:00.757 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:00.757 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.757 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:00.757 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:00.757 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:00.757 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:00.757 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.757 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:00.757 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:00.757 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:00.757 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:00.757 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:31:00.757 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:00.757 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:00.757 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:00.757 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:00.757 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzBiMGRlNmM2YzJlMTVlYTY0NGM4MzgyNGRiMDNkZmZ8l8r4: 00:31:00.757 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmEyNmY0ZThmN2JjMTk0NTZlZDc4NjY3OThkOTQ4NTc3ZWRmNWMxNjUwMjkzMmIwMzA3Mjg5N2E3MjM5ODU3ZQb9abA=: 00:31:00.757 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:00.757 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:00.757 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzBiMGRlNmM2YzJlMTVlYTY0NGM4MzgyNGRiMDNkZmZ8l8r4: 00:31:00.757 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmEyNmY0ZThmN2JjMTk0NTZlZDc4NjY3OThkOTQ4NTc3ZWRmNWMxNjUwMjkzMmIwMzA3Mjg5N2E3MjM5ODU3ZQb9abA=: ]] 00:31:00.757 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmEyNmY0ZThmN2JjMTk0NTZlZDc4NjY3OThkOTQ4NTc3ZWRmNWMxNjUwMjkzMmIwMzA3Mjg5N2E3MjM5ODU3ZQb9abA=: 00:31:00.757 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:31:00.757 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:00.757 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:00.757 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:00.757 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:00.757 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:00.757 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:00.757 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.757 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:00.757 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:00.757 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:00.757 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:00.757 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:00.757 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:00.757 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:00.757 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:00.757 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:00.757 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:00.757 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:00.757 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:00.757 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:00.757 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:00.757 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.757 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.017 nvme0n1 00:31:01.017 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.017 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:01.017 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:01.017 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.017 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.017 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.017 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:01.017 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:01.017 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.017 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.017 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.017 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:01.017 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:31:01.017 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:31:01.017 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:01.017 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:01.017 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:01.017 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTAxZDViYWYzZTQyNTNjZmU4ZjcwMzgxMmYzYzkyNTljZDQ4YTAwZGMwZGUyZjA5dM0fcQ==: 00:31:01.017 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmRjNmQ0Yzg3Mzc0NGIyZTMxZjE4NjkzMGFhNDg3YTgwMDM3NTExZmZjN2RjMDQ49rPzwQ==: 00:31:01.017 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:01.017 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:01.017 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTAxZDViYWYzZTQyNTNjZmU4ZjcwMzgxMmYzYzkyNTljZDQ4YTAwZGMwZGUyZjA5dM0fcQ==: 00:31:01.017 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmRjNmQ0Yzg3Mzc0NGIyZTMxZjE4NjkzMGFhNDg3YTgwMDM3NTExZmZjN2RjMDQ49rPzwQ==: ]] 00:31:01.017 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmRjNmQ0Yzg3Mzc0NGIyZTMxZjE4NjkzMGFhNDg3YTgwMDM3NTExZmZjN2RjMDQ49rPzwQ==: 00:31:01.017 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:31:01.017 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:01.017 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:01.017 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:01.017 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:01.017 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:01.017 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:01.017 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.017 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.017 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.017 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:01.017 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:01.017 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:01.017 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:01.017 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:01.017 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:01.017 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:01.017 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:01.017 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:01.017 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:01.017 
18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:01.017 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:01.017 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.017 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.276 nvme0n1 00:31:01.276 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.276 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:01.276 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:01.276 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.276 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.276 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.276 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:01.276 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:01.276 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.276 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.276 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.276 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:01.276 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:31:01.276 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:01.276 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:01.276 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:01.276 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:01.276 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2YwMTJiYzNmOTEwMjBjYTdhM2M2ZmY0NzliODgxYTLVHRDB: 00:31:01.276 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjE4MmMyMWY5MGYxOWZiYTc1ZTE4ZWVlNmQ5NzZiYWOVTPQi: 00:31:01.276 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:01.276 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:01.276 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2YwMTJiYzNmOTEwMjBjYTdhM2M2ZmY0NzliODgxYTLVHRDB: 00:31:01.276 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjE4MmMyMWY5MGYxOWZiYTc1ZTE4ZWVlNmQ5NzZiYWOVTPQi: ]] 00:31:01.276 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjE4MmMyMWY5MGYxOWZiYTc1ZTE4ZWVlNmQ5NzZiYWOVTPQi: 00:31:01.276 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:31:01.276 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:01.276 18:41:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:01.276 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:01.276 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:01.276 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:01.276 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:01.276 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.276 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.276 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.276 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:01.276 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:01.276 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:01.276 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:01.276 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:01.276 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:01.276 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:01.276 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:01.276 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:01.276 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:01.276 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:01.276 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:01.276 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.276 18:41:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.536 nvme0n1 00:31:01.536 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.536 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:01.536 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.536 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.536 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:01.536 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.795 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:01.795 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:01.795 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.795 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:01.795 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.795 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:01.795 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:31:01.795 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:01.795 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:01.795 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:01.795 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:01.795 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjEwNzQ5NjhmMDg2OTIzZmUwYjI3ODNmOTFiNGIyZDQ2MTBhZTdiZGU4MGNkNGY0jv9JLg==: 00:31:01.795 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWJlYjMxMDhmN2VmOTE5MzU1NzVjZmUxMzEyNWFkNDI5qb6Z: 00:31:01.795 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:01.795 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:01.795 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjEwNzQ5NjhmMDg2OTIzZmUwYjI3ODNmOTFiNGIyZDQ2MTBhZTdiZGU4MGNkNGY0jv9JLg==: 00:31:01.795 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWJlYjMxMDhmN2VmOTE5MzU1NzVjZmUxMzEyNWFkNDI5qb6Z: ]] 00:31:01.795 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWJlYjMxMDhmN2VmOTE5MzU1NzVjZmUxMzEyNWFkNDI5qb6Z: 00:31:01.795 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:31:01.795 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:01.795 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:01.795 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:01.795 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:01.795 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:01.795 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:01.795 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.795 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.795 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.795 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:01.795 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:01.795 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:01.795 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:01.795 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:01.795 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:01.795 18:41:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:01.795 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:01.795 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:01.795 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:01.795 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:01.795 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:01.795 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.795 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:02.056 nvme0n1 00:31:02.056 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.056 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:02.056 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.056 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:02.056 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:02.056 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.056 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:02.056 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:02.056 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.056 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:02.056 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.056 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:02.056 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:31:02.056 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:02.056 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:02.056 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:02.056 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:02.056 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDc3NWJhNjJkODMyMjE1YTUzNTY1MDFiZTMwYmVjYWJhYjE2NDYzYjljMmRlNGRkYmY0YzJkNzU3Mjg5MmZiMuULAeA=: 00:31:02.056 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:02.056 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:02.056 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:02.056 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDc3NWJhNjJkODMyMjE1YTUzNTY1MDFiZTMwYmVjYWJhYjE2NDYzYjljMmRlNGRkYmY0YzJkNzU3Mjg5MmZiMuULAeA=: 00:31:02.056 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:02.056 18:41:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:31:02.056 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:02.056 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:02.056 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:02.056 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:02.056 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:02.056 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:02.056 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.056 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:02.056 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.056 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:02.056 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:02.056 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:02.056 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:02.056 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:02.056 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:02.056 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:02.056 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:02.056 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:02.056 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:02.056 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:02.056 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:02.056 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.056 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:02.316 nvme0n1 00:31:02.316 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.316 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:02.316 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:02.316 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.316 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:02.316 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.316 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:02.316 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:31:02.316 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.316 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:02.316 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.316 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:02.316 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:02.316 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:31:02.316 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:02.316 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:02.316 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:02.316 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:02.316 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzBiMGRlNmM2YzJlMTVlYTY0NGM4MzgyNGRiMDNkZmZ8l8r4: 00:31:02.316 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmEyNmY0ZThmN2JjMTk0NTZlZDc4NjY3OThkOTQ4NTc3ZWRmNWMxNjUwMjkzMmIwMzA3Mjg5N2E3MjM5ODU3ZQb9abA=: 00:31:02.316 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:02.316 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:02.316 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzBiMGRlNmM2YzJlMTVlYTY0NGM4MzgyNGRiMDNkZmZ8l8r4: 00:31:02.316 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmEyNmY0ZThmN2JjMTk0NTZlZDc4NjY3OThkOTQ4NTc3ZWRmNWMxNjUwMjkzMmIwMzA3Mjg5N2E3MjM5ODU3ZQb9abA=: ]] 00:31:02.316 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmEyNmY0ZThmN2JjMTk0NTZlZDc4NjY3OThkOTQ4NTc3ZWRmNWMxNjUwMjkzMmIwMzA3Mjg5N2E3MjM5ODU3ZQb9abA=: 00:31:02.316 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:31:02.316 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:02.316 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:02.316 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:02.316 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:02.316 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:02.316 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:02.316 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.316 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:02.316 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.316 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:02.316 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:02.316 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates=() 00:31:02.316 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:02.316 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:02.316 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:02.316 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:02.316 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:02.316 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:02.316 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:02.316 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:02.316 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:02.316 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.316 18:41:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:02.885 nvme0n1 00:31:02.885 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.885 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:02.885 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:02.885 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.885 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:02.885 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.885 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:02.885 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:02.885 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.885 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:02.885 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.885 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:02.885 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:31:02.885 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:02.885 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:02.885 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:02.885 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:02.885 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTAxZDViYWYzZTQyNTNjZmU4ZjcwMzgxMmYzYzkyNTljZDQ4YTAwZGMwZGUyZjA5dM0fcQ==: 00:31:02.885 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmRjNmQ0Yzg3Mzc0NGIyZTMxZjE4NjkzMGFhNDg3YTgwMDM3NTExZmZjN2RjMDQ49rPzwQ==: 00:31:02.885 18:41:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:02.885 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:02.885 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTAxZDViYWYzZTQyNTNjZmU4ZjcwMzgxMmYzYzkyNTljZDQ4YTAwZGMwZGUyZjA5dM0fcQ==: 00:31:02.885 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmRjNmQ0Yzg3Mzc0NGIyZTMxZjE4NjkzMGFhNDg3YTgwMDM3NTExZmZjN2RjMDQ49rPzwQ==: ]] 00:31:02.885 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmRjNmQ0Yzg3Mzc0NGIyZTMxZjE4NjkzMGFhNDg3YTgwMDM3NTExZmZjN2RjMDQ49rPzwQ==: 00:31:02.885 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:31:02.885 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:02.885 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:02.885 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:02.885 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:02.885 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:02.885 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:02.885 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.885 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:02.885 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.885 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:02.885 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:02.885 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:02.886 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:02.886 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:02.886 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:02.886 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:02.886 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:02.886 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:02.886 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:02.886 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:02.886 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:02.886 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.886 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:03.453 nvme0n1 00:31:03.453 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:31:03.453 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:03.453 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:03.453 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.453 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:03.453 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.453 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:03.453 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:03.453 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.453 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:03.453 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.453 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:03.453 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:31:03.453 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:03.453 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:03.453 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:03.453 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:03.453 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2YwMTJiYzNmOTEwMjBjYTdhM2M2ZmY0NzliODgxYTLVHRDB: 00:31:03.453 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjE4MmMyMWY5MGYxOWZiYTc1ZTE4ZWVlNmQ5NzZiYWOVTPQi: 00:31:03.453 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:03.453 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:03.453 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2YwMTJiYzNmOTEwMjBjYTdhM2M2ZmY0NzliODgxYTLVHRDB: 00:31:03.453 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjE4MmMyMWY5MGYxOWZiYTc1ZTE4ZWVlNmQ5NzZiYWOVTPQi: ]] 00:31:03.453 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjE4MmMyMWY5MGYxOWZiYTc1ZTE4ZWVlNmQ5NzZiYWOVTPQi: 00:31:03.453 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:31:03.453 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:03.453 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:03.453 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:03.453 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:03.453 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:03.453 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:03.453 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
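The trace above repeats one host-side sequence for every digest/dhgroup/keyid combination: bdev_nvme_set_options restricts the initiator to the pair under test, bdev_nvme_attach_controller connects with the matching key, and bdev_nvme_get_controllers / bdev_nvme_detach_controller verify and tear down the session. A minimal sketch of one iteration follows, assuming rpc_cmd wraps scripts/rpc.py against the running SPDK application and that the secrets key0..key4 / ckey0..ckey3 were registered earlier in this run; it is a reconstruction from the xtrace, not the auth.sh source itself.

# Sketch of one iteration of the host-auth loop seen in the xtrace above.
digest=sha256
dhgroup=ffdhe4096
keyid=2

# Allow only the digest/dhgroup pair under test on the initiator.
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Attach with host key N; the controller key is added only when one exists,
# mirroring the ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"} expansion in the trace.
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

# The connection succeeded if the controller shows up; detach before the next case.
rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
rpc_cmd bdev_nvme_detach_controller nvme0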
00:31:03.453 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:03.453 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.453 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:03.453 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:03.453 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:03.453 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:03.453 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:03.453 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:03.453 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:03.453 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:03.453 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:03.453 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:03.453 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:03.453 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:03.453 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.453 18:41:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.019 nvme0n1 00:31:04.019 18:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.019 18:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:04.019 18:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.019 18:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.019 18:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:04.019 18:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.019 18:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:04.019 18:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:04.019 18:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.019 18:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.019 18:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.019 18:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:04.019 18:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:31:04.019 18:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:04.019 18:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:04.019 18:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:31:04.019 18:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:04.019 18:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjEwNzQ5NjhmMDg2OTIzZmUwYjI3ODNmOTFiNGIyZDQ2MTBhZTdiZGU4MGNkNGY0jv9JLg==: 00:31:04.019 18:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWJlYjMxMDhmN2VmOTE5MzU1NzVjZmUxMzEyNWFkNDI5qb6Z: 00:31:04.019 18:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:04.019 18:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:04.019 18:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjEwNzQ5NjhmMDg2OTIzZmUwYjI3ODNmOTFiNGIyZDQ2MTBhZTdiZGU4MGNkNGY0jv9JLg==: 00:31:04.019 18:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWJlYjMxMDhmN2VmOTE5MzU1NzVjZmUxMzEyNWFkNDI5qb6Z: ]] 00:31:04.019 18:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWJlYjMxMDhmN2VmOTE5MzU1NzVjZmUxMzEyNWFkNDI5qb6Z: 00:31:04.019 18:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:31:04.019 18:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:04.019 18:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:04.019 18:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:04.019 18:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:04.019 18:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:04.019 18:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:04.019 18:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.019 18:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.019 18:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.019 18:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:04.019 18:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:04.019 18:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:04.019 18:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:04.019 18:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:04.019 18:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:04.019 18:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:04.019 18:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:04.019 18:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:04.019 18:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:04.019 18:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:04.019 18:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:04.019 18:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.019 18:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.585 nvme0n1 00:31:04.585 18:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.585 18:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:04.585 18:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:04.585 18:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.585 18:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.585 18:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.585 18:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:04.585 18:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:04.585 18:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.585 18:41:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.585 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.585 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:04.585 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:31:04.585 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:04.585 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:04.585 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:04.585 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:04.585 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDc3NWJhNjJkODMyMjE1YTUzNTY1MDFiZTMwYmVjYWJhYjE2NDYzYjljMmRlNGRkYmY0YzJkNzU3Mjg5MmZiMuULAeA=: 00:31:04.585 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:04.585 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:04.585 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:04.585 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDc3NWJhNjJkODMyMjE1YTUzNTY1MDFiZTMwYmVjYWJhYjE2NDYzYjljMmRlNGRkYmY0YzJkNzU3Mjg5MmZiMuULAeA=: 00:31:04.585 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:04.585 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:31:04.585 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:04.585 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:04.585 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:04.585 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:04.585 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:04.585 18:41:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:04.585 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.585 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.585 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.585 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:04.585 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:04.585 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:04.585 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:04.585 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:04.585 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:04.585 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:04.585 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:04.585 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:04.585 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:04.585 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:04.585 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:04.585 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.585 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:05.151 nvme0n1 00:31:05.151 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:05.152 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:05.152 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:05.152 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:05.152 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:05.152 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:05.152 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:05.152 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:05.152 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:05.152 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:05.152 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:05.152 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:05.152 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:05.152 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:31:05.152 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:05.152 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:05.152 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:05.152 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:05.152 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzBiMGRlNmM2YzJlMTVlYTY0NGM4MzgyNGRiMDNkZmZ8l8r4: 00:31:05.152 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmEyNmY0ZThmN2JjMTk0NTZlZDc4NjY3OThkOTQ4NTc3ZWRmNWMxNjUwMjkzMmIwMzA3Mjg5N2E3MjM5ODU3ZQb9abA=: 00:31:05.152 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:05.152 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:05.152 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzBiMGRlNmM2YzJlMTVlYTY0NGM4MzgyNGRiMDNkZmZ8l8r4: 00:31:05.152 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmEyNmY0ZThmN2JjMTk0NTZlZDc4NjY3OThkOTQ4NTc3ZWRmNWMxNjUwMjkzMmIwMzA3Mjg5N2E3MjM5ODU3ZQb9abA=: ]] 00:31:05.152 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmEyNmY0ZThmN2JjMTk0NTZlZDc4NjY3OThkOTQ4NTc3ZWRmNWMxNjUwMjkzMmIwMzA3Mjg5N2E3MjM5ODU3ZQb9abA=: 00:31:05.152 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:31:05.152 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:05.152 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:05.152 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:05.152 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:05.152 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:05.152 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:05.152 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:05.152 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:05.152 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:05.152 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:05.152 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:05.152 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:05.152 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:05.152 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:05.152 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:05.152 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:05.152 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:05.152 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # 
ip=NVMF_INITIATOR_IP 00:31:05.152 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:05.152 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:05.152 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:05.152 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:05.152 18:41:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.086 nvme0n1 00:31:06.086 18:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.086 18:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:06.086 18:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:06.086 18:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.086 18:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.086 18:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.086 18:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:06.086 18:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:06.086 18:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.086 18:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.086 18:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.086 18:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:06.086 18:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:31:06.086 18:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:06.086 18:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:06.086 18:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:06.086 18:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:06.086 18:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTAxZDViYWYzZTQyNTNjZmU4ZjcwMzgxMmYzYzkyNTljZDQ4YTAwZGMwZGUyZjA5dM0fcQ==: 00:31:06.086 18:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmRjNmQ0Yzg3Mzc0NGIyZTMxZjE4NjkzMGFhNDg3YTgwMDM3NTExZmZjN2RjMDQ49rPzwQ==: 00:31:06.086 18:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:06.086 18:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:06.086 18:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTAxZDViYWYzZTQyNTNjZmU4ZjcwMzgxMmYzYzkyNTljZDQ4YTAwZGMwZGUyZjA5dM0fcQ==: 00:31:06.086 18:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmRjNmQ0Yzg3Mzc0NGIyZTMxZjE4NjkzMGFhNDg3YTgwMDM3NTExZmZjN2RjMDQ49rPzwQ==: ]] 00:31:06.086 18:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmRjNmQ0Yzg3Mzc0NGIyZTMxZjE4NjkzMGFhNDg3YTgwMDM3NTExZmZjN2RjMDQ49rPzwQ==: 
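On the target side, each nvmet_auth_set_key call pushes the corresponding secret into the kernel nvmet host entry before the connection attempt; the echo 'hmac(sha256)', echo ffdhe6144 and echo DHHC-1:... lines above are those writes. A rough equivalent is sketched below; the configfs attribute names (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) and the host path are assumptions based on the standard kernel nvmet layout rather than taken from this log, and the secrets are placeholders.

# Sketch of the target-side writes behind nvmet_auth_set_key (one keyid).
hostnqn=nqn.2024-02.io.spdk:host0
host_cfg=/sys/kernel/config/nvmet/hosts/$hostnqn        # assumed configfs path

echo 'hmac(sha256)'             > "$host_cfg/dhchap_hash"      # digest under test
echo 'ffdhe6144'                > "$host_cfg/dhchap_dhgroup"   # DH group under test
echo 'DHHC-1:00:<host secret>:' > "$host_cfg/dhchap_key"       # placeholder secret
echo 'DHHC-1:02:<ctrl secret>:' > "$host_cfg/dhchap_ctrl_key"  # only for bidirectional auth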
00:31:06.086 18:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:31:06.086 18:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:06.086 18:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:06.086 18:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:06.086 18:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:06.086 18:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:06.086 18:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:06.086 18:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.086 18:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.086 18:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.086 18:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:06.086 18:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:06.086 18:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:06.086 18:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:06.086 18:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:06.086 18:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:06.086 18:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:06.086 18:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:06.086 18:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:06.086 18:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:06.086 18:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:06.086 18:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:06.086 18:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.086 18:41:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.021 nvme0n1 00:31:07.021 18:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.021 18:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:07.021 18:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:07.021 18:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.021 18:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.021 18:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.021 18:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:07.021 18:41:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:07.021 18:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.021 18:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.021 18:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.021 18:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:07.021 18:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:31:07.021 18:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:07.021 18:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:07.021 18:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:07.021 18:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:07.021 18:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2YwMTJiYzNmOTEwMjBjYTdhM2M2ZmY0NzliODgxYTLVHRDB: 00:31:07.021 18:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjE4MmMyMWY5MGYxOWZiYTc1ZTE4ZWVlNmQ5NzZiYWOVTPQi: 00:31:07.021 18:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:07.021 18:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:07.021 18:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2YwMTJiYzNmOTEwMjBjYTdhM2M2ZmY0NzliODgxYTLVHRDB: 00:31:07.021 18:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjE4MmMyMWY5MGYxOWZiYTc1ZTE4ZWVlNmQ5NzZiYWOVTPQi: ]] 00:31:07.021 18:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjE4MmMyMWY5MGYxOWZiYTc1ZTE4ZWVlNmQ5NzZiYWOVTPQi: 00:31:07.021 18:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:31:07.021 18:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:07.021 18:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:07.021 18:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:07.021 18:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:07.021 18:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:07.021 18:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:07.021 18:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.021 18:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.021 18:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.021 18:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:07.021 18:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:07.021 18:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:07.021 18:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:07.021 18:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:07.021 18:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:07.021 18:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:07.021 18:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:07.021 18:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:07.021 18:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:07.021 18:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:07.021 18:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:07.021 18:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.021 18:41:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.954 nvme0n1 00:31:07.954 18:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.954 18:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:07.954 18:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:07.954 18:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.954 18:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.954 18:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.954 18:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:07.954 18:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:07.954 18:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.954 18:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.954 18:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.954 18:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:07.954 18:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:31:07.954 18:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:07.954 18:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:07.954 18:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:07.954 18:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:07.954 18:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjEwNzQ5NjhmMDg2OTIzZmUwYjI3ODNmOTFiNGIyZDQ2MTBhZTdiZGU4MGNkNGY0jv9JLg==: 00:31:07.954 18:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWJlYjMxMDhmN2VmOTE5MzU1NzVjZmUxMzEyNWFkNDI5qb6Z: 00:31:07.954 18:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:07.954 18:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:07.954 18:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:ZjEwNzQ5NjhmMDg2OTIzZmUwYjI3ODNmOTFiNGIyZDQ2MTBhZTdiZGU4MGNkNGY0jv9JLg==: 00:31:07.955 18:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWJlYjMxMDhmN2VmOTE5MzU1NzVjZmUxMzEyNWFkNDI5qb6Z: ]] 00:31:07.955 18:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWJlYjMxMDhmN2VmOTE5MzU1NzVjZmUxMzEyNWFkNDI5qb6Z: 00:31:07.955 18:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:31:07.955 18:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:07.955 18:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:07.955 18:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:07.955 18:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:07.955 18:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:07.955 18:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:07.955 18:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.955 18:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.955 18:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.955 18:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:07.955 18:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:07.955 18:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:07.955 18:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:07.955 18:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:07.955 18:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:07.955 18:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:07.955 18:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:07.955 18:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:07.955 18:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:07.955 18:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:07.955 18:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:07.955 18:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.955 18:41:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.329 nvme0n1 00:31:09.329 18:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:09.329 18:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:09.329 18:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:09.329 18:41:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.329 18:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:09.329 18:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:09.329 18:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:09.329 18:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:09.329 18:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:09.329 18:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.329 18:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:09.329 18:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:09.329 18:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:31:09.329 18:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:09.329 18:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:09.329 18:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:09.329 18:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:09.329 18:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDc3NWJhNjJkODMyMjE1YTUzNTY1MDFiZTMwYmVjYWJhYjE2NDYzYjljMmRlNGRkYmY0YzJkNzU3Mjg5MmZiMuULAeA=: 00:31:09.329 18:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:09.329 18:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:09.329 18:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:09.329 18:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDc3NWJhNjJkODMyMjE1YTUzNTY1MDFiZTMwYmVjYWJhYjE2NDYzYjljMmRlNGRkYmY0YzJkNzU3Mjg5MmZiMuULAeA=: 00:31:09.329 18:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:09.329 18:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:31:09.329 18:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:09.329 18:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:09.329 18:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:09.329 18:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:09.329 18:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:09.329 18:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:09.329 18:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:09.329 18:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.329 18:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:09.329 18:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:09.329 18:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:09.329 18:41:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:09.329 18:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:09.329 18:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:09.329 18:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:09.329 18:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:09.329 18:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:09.329 18:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:09.329 18:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:09.329 18:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:09.329 18:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:09.329 18:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:09.329 18:41:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.264 nvme0n1 00:31:10.264 18:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.264 18:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:10.264 18:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:10.264 18:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.264 18:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.264 18:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.264 18:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:10.264 18:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:10.264 18:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.264 18:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.264 18:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.264 18:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:10.264 18:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:10.264 18:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:31:10.264 18:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:10.264 18:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:10.264 18:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:10.264 18:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:10.264 18:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzBiMGRlNmM2YzJlMTVlYTY0NGM4MzgyNGRiMDNkZmZ8l8r4: 00:31:10.264 18:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZmEyNmY0ZThmN2JjMTk0NTZlZDc4NjY3OThkOTQ4NTc3ZWRmNWMxNjUwMjkzMmIwMzA3Mjg5N2E3MjM5ODU3ZQb9abA=: 00:31:10.264 18:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:10.264 18:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:10.264 18:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzBiMGRlNmM2YzJlMTVlYTY0NGM4MzgyNGRiMDNkZmZ8l8r4: 00:31:10.264 18:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmEyNmY0ZThmN2JjMTk0NTZlZDc4NjY3OThkOTQ4NTc3ZWRmNWMxNjUwMjkzMmIwMzA3Mjg5N2E3MjM5ODU3ZQb9abA=: ]] 00:31:10.264 18:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmEyNmY0ZThmN2JjMTk0NTZlZDc4NjY3OThkOTQ4NTc3ZWRmNWMxNjUwMjkzMmIwMzA3Mjg5N2E3MjM5ODU3ZQb9abA=: 00:31:10.264 18:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:31:10.264 18:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:10.264 18:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:10.264 18:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:10.264 18:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:10.264 18:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:10.264 18:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:10.264 18:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.264 18:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.264 18:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.264 18:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:10.264 18:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:10.264 18:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:10.264 18:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:10.264 18:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:10.264 18:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:10.264 18:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:10.264 18:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:10.264 18:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:10.264 18:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:10.264 18:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:10.264 18:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:10.264 18:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.264 18:41:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:12.164 nvme0n1 00:31:12.164 18:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:12.164 18:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:12.164 18:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:12.164 18:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:12.164 18:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:12.164 18:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:12.164 18:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:12.164 18:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:12.164 18:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:12.164 18:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:12.164 18:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:12.164 18:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:12.164 18:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:31:12.164 18:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:12.164 18:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:12.164 18:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:12.164 18:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:12.164 18:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTAxZDViYWYzZTQyNTNjZmU4ZjcwMzgxMmYzYzkyNTljZDQ4YTAwZGMwZGUyZjA5dM0fcQ==: 00:31:12.164 18:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmRjNmQ0Yzg3Mzc0NGIyZTMxZjE4NjkzMGFhNDg3YTgwMDM3NTExZmZjN2RjMDQ49rPzwQ==: 00:31:12.164 18:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:12.164 18:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:12.164 18:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTAxZDViYWYzZTQyNTNjZmU4ZjcwMzgxMmYzYzkyNTljZDQ4YTAwZGMwZGUyZjA5dM0fcQ==: 00:31:12.164 18:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmRjNmQ0Yzg3Mzc0NGIyZTMxZjE4NjkzMGFhNDg3YTgwMDM3NTExZmZjN2RjMDQ49rPzwQ==: ]] 00:31:12.164 18:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmRjNmQ0Yzg3Mzc0NGIyZTMxZjE4NjkzMGFhNDg3YTgwMDM3NTExZmZjN2RjMDQ49rPzwQ==: 00:31:12.164 18:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:31:12.164 18:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:12.164 18:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:12.164 18:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:12.164 18:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:12.164 18:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:31:12.164 18:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:12.164 18:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:12.164 18:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:12.164 18:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:12.164 18:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:12.164 18:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:12.164 18:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:12.164 18:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:12.164 18:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:12.164 18:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:12.164 18:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:12.164 18:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:12.164 18:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:12.164 18:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:12.164 18:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:12.164 18:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:12.164 18:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:12.164 18:41:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.128 nvme0n1 00:31:14.128 18:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:14.128 18:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:14.128 18:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:14.128 18:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:14.128 18:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.128 18:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:14.128 18:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:14.128 18:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:14.128 18:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:14.128 18:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.128 18:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:14.128 18:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:14.128 18:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:31:14.128 
18:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:14.128 18:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:14.128 18:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:14.128 18:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:14.128 18:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2YwMTJiYzNmOTEwMjBjYTdhM2M2ZmY0NzliODgxYTLVHRDB: 00:31:14.128 18:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjE4MmMyMWY5MGYxOWZiYTc1ZTE4ZWVlNmQ5NzZiYWOVTPQi: 00:31:14.128 18:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:14.128 18:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:14.128 18:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2YwMTJiYzNmOTEwMjBjYTdhM2M2ZmY0NzliODgxYTLVHRDB: 00:31:14.128 18:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjE4MmMyMWY5MGYxOWZiYTc1ZTE4ZWVlNmQ5NzZiYWOVTPQi: ]] 00:31:14.128 18:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjE4MmMyMWY5MGYxOWZiYTc1ZTE4ZWVlNmQ5NzZiYWOVTPQi: 00:31:14.128 18:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:31:14.128 18:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:14.128 18:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:14.128 18:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:14.128 18:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:14.128 18:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:14.128 18:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:14.128 18:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:14.128 18:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.128 18:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:14.128 18:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:14.128 18:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:14.128 18:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:14.128 18:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:14.128 18:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:14.128 18:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:14.128 18:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:14.128 18:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:14.128 18:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:14.128 18:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:14.128 18:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:14.128 18:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:14.128 18:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:14.128 18:41:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.033 nvme0n1 00:31:16.033 18:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.033 18:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:16.033 18:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.033 18:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.033 18:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:16.033 18:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.033 18:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:16.033 18:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:16.033 18:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.033 18:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.033 18:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.033 18:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:16.033 18:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:31:16.033 18:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:16.033 18:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:16.033 18:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:16.033 18:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:16.033 18:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjEwNzQ5NjhmMDg2OTIzZmUwYjI3ODNmOTFiNGIyZDQ2MTBhZTdiZGU4MGNkNGY0jv9JLg==: 00:31:16.033 18:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWJlYjMxMDhmN2VmOTE5MzU1NzVjZmUxMzEyNWFkNDI5qb6Z: 00:31:16.033 18:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:16.033 18:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:16.033 18:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjEwNzQ5NjhmMDg2OTIzZmUwYjI3ODNmOTFiNGIyZDQ2MTBhZTdiZGU4MGNkNGY0jv9JLg==: 00:31:16.033 18:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWJlYjMxMDhmN2VmOTE5MzU1NzVjZmUxMzEyNWFkNDI5qb6Z: ]] 00:31:16.033 18:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWJlYjMxMDhmN2VmOTE5MzU1NzVjZmUxMzEyNWFkNDI5qb6Z: 00:31:16.033 18:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:31:16.033 18:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:16.033 
18:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:16.033 18:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:16.033 18:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:16.033 18:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:16.033 18:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:16.033 18:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.033 18:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:16.033 18:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.033 18:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:16.033 18:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:16.033 18:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:16.033 18:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:16.033 18:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:16.033 18:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:16.033 18:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:16.033 18:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:16.033 18:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:16.033 18:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:16.033 18:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:16.033 18:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:16.033 18:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.033 18:41:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.939 nvme0n1 00:31:17.939 18:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.939 18:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:17.939 18:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.939 18:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:17.939 18:41:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.939 18:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.939 18:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:17.939 18:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:17.939 18:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.939 18:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:17.939 18:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.939 18:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:17.939 18:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:31:17.939 18:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:17.939 18:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:17.939 18:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:17.939 18:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:17.939 18:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDc3NWJhNjJkODMyMjE1YTUzNTY1MDFiZTMwYmVjYWJhYjE2NDYzYjljMmRlNGRkYmY0YzJkNzU3Mjg5MmZiMuULAeA=: 00:31:17.939 18:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:17.939 18:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:17.939 18:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:17.939 18:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDc3NWJhNjJkODMyMjE1YTUzNTY1MDFiZTMwYmVjYWJhYjE2NDYzYjljMmRlNGRkYmY0YzJkNzU3Mjg5MmZiMuULAeA=: 00:31:17.939 18:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:17.939 18:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:31:17.939 18:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:17.939 18:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:17.939 18:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:17.939 18:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:17.939 18:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:17.939 18:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:17.939 18:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.939 18:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.939 18:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.939 18:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:17.939 18:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:17.939 18:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:17.939 18:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:17.939 18:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:17.939 18:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:17.939 18:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:17.940 18:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:17.940 18:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:17.940 18:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:17.940 18:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:17.940 18:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:17.940 18:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.940 18:41:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.319 nvme0n1 00:31:19.319 18:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.319 18:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:19.319 18:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:19.319 18:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.319 18:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.319 18:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.579 18:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:19.579 18:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:19.579 18:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.579 18:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.579 18:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.579 18:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:31:19.579 18:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:19.579 18:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:19.579 18:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:31:19.579 18:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:19.579 18:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:19.579 18:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:19.579 18:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:19.579 18:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzBiMGRlNmM2YzJlMTVlYTY0NGM4MzgyNGRiMDNkZmZ8l8r4: 00:31:19.579 18:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmEyNmY0ZThmN2JjMTk0NTZlZDc4NjY3OThkOTQ4NTc3ZWRmNWMxNjUwMjkzMmIwMzA3Mjg5N2E3MjM5ODU3ZQb9abA=: 00:31:19.579 18:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:19.579 18:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:19.579 18:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzBiMGRlNmM2YzJlMTVlYTY0NGM4MzgyNGRiMDNkZmZ8l8r4: 00:31:19.579 18:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:ZmEyNmY0ZThmN2JjMTk0NTZlZDc4NjY3OThkOTQ4NTc3ZWRmNWMxNjUwMjkzMmIwMzA3Mjg5N2E3MjM5ODU3ZQb9abA=: ]] 00:31:19.579 18:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmEyNmY0ZThmN2JjMTk0NTZlZDc4NjY3OThkOTQ4NTc3ZWRmNWMxNjUwMjkzMmIwMzA3Mjg5N2E3MjM5ODU3ZQb9abA=: 00:31:19.579 18:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:31:19.579 18:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:19.579 18:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:19.579 18:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:19.579 18:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:19.579 18:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:19.579 18:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:19.579 18:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.579 18:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.580 18:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.580 18:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:19.580 18:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:19.580 18:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:19.580 18:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:19.580 18:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:19.580 18:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:19.580 18:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:19.580 18:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:19.580 18:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:19.580 18:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:19.580 18:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:19.580 18:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:19.580 18:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.580 18:41:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.840 nvme0n1 00:31:19.840 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.840 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:19.840 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.840 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:19.840 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:19.840 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.840 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:19.840 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:19.840 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.840 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.840 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.840 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:19.840 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:31:19.840 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:19.840 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:19.840 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:19.840 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:19.840 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTAxZDViYWYzZTQyNTNjZmU4ZjcwMzgxMmYzYzkyNTljZDQ4YTAwZGMwZGUyZjA5dM0fcQ==: 00:31:19.840 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmRjNmQ0Yzg3Mzc0NGIyZTMxZjE4NjkzMGFhNDg3YTgwMDM3NTExZmZjN2RjMDQ49rPzwQ==: 00:31:19.840 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:19.840 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:19.840 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTAxZDViYWYzZTQyNTNjZmU4ZjcwMzgxMmYzYzkyNTljZDQ4YTAwZGMwZGUyZjA5dM0fcQ==: 00:31:19.840 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmRjNmQ0Yzg3Mzc0NGIyZTMxZjE4NjkzMGFhNDg3YTgwMDM3NTExZmZjN2RjMDQ49rPzwQ==: ]] 00:31:19.840 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmRjNmQ0Yzg3Mzc0NGIyZTMxZjE4NjkzMGFhNDg3YTgwMDM3NTExZmZjN2RjMDQ49rPzwQ==: 00:31:19.840 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:31:19.840 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:19.840 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:19.840 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:19.840 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:19.840 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:19.840 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:19.840 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.840 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.840 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.840 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:31:19.840 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:19.840 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:19.840 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:19.840 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:19.840 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:19.840 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:19.840 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:19.840 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:19.840 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:19.840 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:19.840 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:19.840 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.840 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.100 nvme0n1 00:31:20.100 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.100 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:20.100 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:20.100 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.100 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.100 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.100 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:20.100 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:20.100 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.100 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.100 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.100 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:20.100 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:31:20.100 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:20.100 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:20.100 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:20.100 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:20.100 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2YwMTJiYzNmOTEwMjBjYTdhM2M2ZmY0NzliODgxYTLVHRDB: 00:31:20.100 18:41:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjE4MmMyMWY5MGYxOWZiYTc1ZTE4ZWVlNmQ5NzZiYWOVTPQi: 00:31:20.100 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:20.100 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:20.100 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2YwMTJiYzNmOTEwMjBjYTdhM2M2ZmY0NzliODgxYTLVHRDB: 00:31:20.100 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjE4MmMyMWY5MGYxOWZiYTc1ZTE4ZWVlNmQ5NzZiYWOVTPQi: ]] 00:31:20.100 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjE4MmMyMWY5MGYxOWZiYTc1ZTE4ZWVlNmQ5NzZiYWOVTPQi: 00:31:20.100 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:31:20.100 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:20.100 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:20.100 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:20.100 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:20.100 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:20.100 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:20.100 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.100 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.100 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.100 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:20.100 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:20.100 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:20.100 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:20.100 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:20.100 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:20.100 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:20.100 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:20.100 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:20.100 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:20.100 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:20.100 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:20.100 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.100 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.360 nvme0n1 00:31:20.360 18:41:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.360 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:20.360 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:20.360 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.360 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.360 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.360 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:20.360 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:20.360 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.360 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.360 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.360 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:20.360 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:31:20.360 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:20.360 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:20.360 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:20.360 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:20.360 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjEwNzQ5NjhmMDg2OTIzZmUwYjI3ODNmOTFiNGIyZDQ2MTBhZTdiZGU4MGNkNGY0jv9JLg==: 00:31:20.360 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWJlYjMxMDhmN2VmOTE5MzU1NzVjZmUxMzEyNWFkNDI5qb6Z: 00:31:20.360 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:20.360 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:20.360 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjEwNzQ5NjhmMDg2OTIzZmUwYjI3ODNmOTFiNGIyZDQ2MTBhZTdiZGU4MGNkNGY0jv9JLg==: 00:31:20.360 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWJlYjMxMDhmN2VmOTE5MzU1NzVjZmUxMzEyNWFkNDI5qb6Z: ]] 00:31:20.360 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWJlYjMxMDhmN2VmOTE5MzU1NzVjZmUxMzEyNWFkNDI5qb6Z: 00:31:20.360 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:31:20.360 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:20.360 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:20.360 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:20.360 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:20.360 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:20.360 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:31:20.360 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.360 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.360 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.360 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:20.360 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:20.360 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:20.360 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:20.360 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:20.360 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:20.360 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:20.360 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:20.360 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:20.360 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:20.360 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:20.360 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:20.360 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.360 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.620 nvme0n1 00:31:20.620 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.620 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:20.620 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:20.620 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.620 18:41:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.620 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.620 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:20.620 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:20.620 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.620 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.620 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.620 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:20.620 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:31:20.620 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:20.620 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:31:20.620 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:20.620 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:20.620 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDc3NWJhNjJkODMyMjE1YTUzNTY1MDFiZTMwYmVjYWJhYjE2NDYzYjljMmRlNGRkYmY0YzJkNzU3Mjg5MmZiMuULAeA=: 00:31:20.620 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:20.620 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:20.620 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:20.620 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDc3NWJhNjJkODMyMjE1YTUzNTY1MDFiZTMwYmVjYWJhYjE2NDYzYjljMmRlNGRkYmY0YzJkNzU3Mjg5MmZiMuULAeA=: 00:31:20.620 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:20.620 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:31:20.620 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:20.620 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:20.620 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:20.620 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:20.620 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:20.620 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:20.620 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.620 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.620 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.620 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:20.620 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:20.620 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:20.620 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:20.620 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:20.620 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:20.620 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:20.620 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:20.620 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:20.620 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:20.620 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:20.620 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:20.620 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.620 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.881 nvme0n1 00:31:20.881 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.881 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:20.881 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.881 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:20.881 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.881 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.881 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:20.881 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:20.881 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.881 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.881 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.881 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:20.881 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:20.881 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:31:20.881 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:20.881 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:20.881 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:20.881 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:20.881 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzBiMGRlNmM2YzJlMTVlYTY0NGM4MzgyNGRiMDNkZmZ8l8r4: 00:31:20.881 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmEyNmY0ZThmN2JjMTk0NTZlZDc4NjY3OThkOTQ4NTc3ZWRmNWMxNjUwMjkzMmIwMzA3Mjg5N2E3MjM5ODU3ZQb9abA=: 00:31:20.881 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:20.881 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:20.881 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzBiMGRlNmM2YzJlMTVlYTY0NGM4MzgyNGRiMDNkZmZ8l8r4: 00:31:20.881 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmEyNmY0ZThmN2JjMTk0NTZlZDc4NjY3OThkOTQ4NTc3ZWRmNWMxNjUwMjkzMmIwMzA3Mjg5N2E3MjM5ODU3ZQb9abA=: ]] 00:31:20.881 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmEyNmY0ZThmN2JjMTk0NTZlZDc4NjY3OThkOTQ4NTc3ZWRmNWMxNjUwMjkzMmIwMzA3Mjg5N2E3MjM5ODU3ZQb9abA=: 00:31:20.881 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:31:20.881 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:20.881 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:20.881 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:31:20.881 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:20.881 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:20.881 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:20.881 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.881 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.881 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.881 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:20.881 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:20.881 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:20.881 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:20.881 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:20.881 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:20.881 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:20.881 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:20.881 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:20.881 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:20.881 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:20.881 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:20.881 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.881 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.140 nvme0n1 00:31:21.140 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.140 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:21.140 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:21.140 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.140 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.140 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.140 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:21.140 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:21.140 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.140 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.140 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.140 
18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:21.140 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:31:21.140 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:21.140 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:21.140 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:21.140 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:21.140 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTAxZDViYWYzZTQyNTNjZmU4ZjcwMzgxMmYzYzkyNTljZDQ4YTAwZGMwZGUyZjA5dM0fcQ==: 00:31:21.140 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmRjNmQ0Yzg3Mzc0NGIyZTMxZjE4NjkzMGFhNDg3YTgwMDM3NTExZmZjN2RjMDQ49rPzwQ==: 00:31:21.140 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:21.140 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:21.140 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTAxZDViYWYzZTQyNTNjZmU4ZjcwMzgxMmYzYzkyNTljZDQ4YTAwZGMwZGUyZjA5dM0fcQ==: 00:31:21.140 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmRjNmQ0Yzg3Mzc0NGIyZTMxZjE4NjkzMGFhNDg3YTgwMDM3NTExZmZjN2RjMDQ49rPzwQ==: ]] 00:31:21.140 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmRjNmQ0Yzg3Mzc0NGIyZTMxZjE4NjkzMGFhNDg3YTgwMDM3NTExZmZjN2RjMDQ49rPzwQ==: 00:31:21.140 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:31:21.140 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:21.140 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:21.140 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:21.140 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:21.140 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:21.140 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:21.140 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.140 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.141 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.399 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:21.399 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:21.399 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:21.399 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:21.399 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:21.399 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:21.399 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:21.399 18:41:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:21.399 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:21.399 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:21.399 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:21.399 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:21.399 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.399 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.399 nvme0n1 00:31:21.399 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.659 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:21.659 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:21.659 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.659 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.659 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.659 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:21.659 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:21.659 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.659 18:41:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.659 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.659 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:21.659 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:31:21.659 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:21.659 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:21.659 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:21.659 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:21.659 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2YwMTJiYzNmOTEwMjBjYTdhM2M2ZmY0NzliODgxYTLVHRDB: 00:31:21.659 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjE4MmMyMWY5MGYxOWZiYTc1ZTE4ZWVlNmQ5NzZiYWOVTPQi: 00:31:21.659 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:21.659 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:21.659 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2YwMTJiYzNmOTEwMjBjYTdhM2M2ZmY0NzliODgxYTLVHRDB: 00:31:21.659 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjE4MmMyMWY5MGYxOWZiYTc1ZTE4ZWVlNmQ5NzZiYWOVTPQi: ]] 00:31:21.659 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:NjE4MmMyMWY5MGYxOWZiYTc1ZTE4ZWVlNmQ5NzZiYWOVTPQi: 00:31:21.659 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:31:21.659 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:21.659 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:21.659 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:21.659 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:21.659 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:21.659 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:21.659 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.659 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.659 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.659 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:21.659 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:21.659 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:21.659 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:21.659 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:21.659 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:21.659 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:21.659 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:21.659 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:21.659 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:21.659 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:21.659 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:21.659 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.659 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.919 nvme0n1 00:31:21.919 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.919 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:21.919 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:21.919 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.919 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.919 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.919 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:31:21.919 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:21.919 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.919 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.919 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.919 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:21.919 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:31:21.919 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:21.919 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:21.919 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:21.919 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:21.919 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjEwNzQ5NjhmMDg2OTIzZmUwYjI3ODNmOTFiNGIyZDQ2MTBhZTdiZGU4MGNkNGY0jv9JLg==: 00:31:21.919 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWJlYjMxMDhmN2VmOTE5MzU1NzVjZmUxMzEyNWFkNDI5qb6Z: 00:31:21.919 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:21.919 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:21.919 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjEwNzQ5NjhmMDg2OTIzZmUwYjI3ODNmOTFiNGIyZDQ2MTBhZTdiZGU4MGNkNGY0jv9JLg==: 00:31:21.919 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWJlYjMxMDhmN2VmOTE5MzU1NzVjZmUxMzEyNWFkNDI5qb6Z: ]] 00:31:21.919 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWJlYjMxMDhmN2VmOTE5MzU1NzVjZmUxMzEyNWFkNDI5qb6Z: 00:31:21.919 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:31:21.919 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:21.919 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:21.919 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:21.919 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:21.919 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:21.919 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:21.919 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.919 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.919 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.919 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:21.919 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:21.919 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:21.919 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local 
-A ip_candidates 00:31:21.919 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:21.919 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:21.919 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:21.919 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:21.919 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:21.919 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:21.919 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:21.920 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:21.920 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.920 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.180 nvme0n1 00:31:22.180 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.180 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:22.180 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:22.180 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.180 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.180 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.180 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:22.180 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:22.180 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.180 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.180 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.180 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:22.180 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:31:22.180 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:22.180 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:22.180 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:22.180 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:22.180 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDc3NWJhNjJkODMyMjE1YTUzNTY1MDFiZTMwYmVjYWJhYjE2NDYzYjljMmRlNGRkYmY0YzJkNzU3Mjg5MmZiMuULAeA=: 00:31:22.180 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:22.180 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:22.180 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:22.180 
18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDc3NWJhNjJkODMyMjE1YTUzNTY1MDFiZTMwYmVjYWJhYjE2NDYzYjljMmRlNGRkYmY0YzJkNzU3Mjg5MmZiMuULAeA=: 00:31:22.180 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:22.180 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:31:22.180 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:22.180 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:22.180 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:22.180 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:22.180 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:22.180 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:22.180 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.180 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.180 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.180 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:22.180 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:22.180 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:22.180 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:22.180 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:22.180 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:22.180 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:22.180 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:22.180 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:22.180 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:22.180 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:22.180 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:22.180 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.180 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.438 nvme0n1 00:31:22.438 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.438 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:22.438 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:22.438 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.438 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.438 
18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.438 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:22.438 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:22.438 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.438 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.697 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.697 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:22.697 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:22.697 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:31:22.698 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:22.698 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:22.698 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:22.698 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:22.698 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzBiMGRlNmM2YzJlMTVlYTY0NGM4MzgyNGRiMDNkZmZ8l8r4: 00:31:22.698 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmEyNmY0ZThmN2JjMTk0NTZlZDc4NjY3OThkOTQ4NTc3ZWRmNWMxNjUwMjkzMmIwMzA3Mjg5N2E3MjM5ODU3ZQb9abA=: 00:31:22.698 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:22.698 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:22.698 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzBiMGRlNmM2YzJlMTVlYTY0NGM4MzgyNGRiMDNkZmZ8l8r4: 00:31:22.698 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmEyNmY0ZThmN2JjMTk0NTZlZDc4NjY3OThkOTQ4NTc3ZWRmNWMxNjUwMjkzMmIwMzA3Mjg5N2E3MjM5ODU3ZQb9abA=: ]] 00:31:22.698 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmEyNmY0ZThmN2JjMTk0NTZlZDc4NjY3OThkOTQ4NTc3ZWRmNWMxNjUwMjkzMmIwMzA3Mjg5N2E3MjM5ODU3ZQb9abA=: 00:31:22.698 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:31:22.698 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:22.698 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:22.698 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:22.698 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:22.698 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:22.698 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:22.698 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.698 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.698 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:31:22.698 18:41:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:22.698 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:22.698 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:22.698 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:22.698 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:22.698 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:22.698 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:22.698 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:22.698 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:22.698 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:22.698 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:22.698 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:22.698 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.698 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.957 nvme0n1 00:31:22.957 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.957 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:22.957 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.957 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.957 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:22.957 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.216 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:23.216 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:23.216 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.216 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:23.216 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.216 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:23.216 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:31:23.216 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:23.216 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:23.216 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:23.216 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:23.216 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NTAxZDViYWYzZTQyNTNjZmU4ZjcwMzgxMmYzYzkyNTljZDQ4YTAwZGMwZGUyZjA5dM0fcQ==: 00:31:23.216 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmRjNmQ0Yzg3Mzc0NGIyZTMxZjE4NjkzMGFhNDg3YTgwMDM3NTExZmZjN2RjMDQ49rPzwQ==: 00:31:23.216 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:23.216 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:23.216 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTAxZDViYWYzZTQyNTNjZmU4ZjcwMzgxMmYzYzkyNTljZDQ4YTAwZGMwZGUyZjA5dM0fcQ==: 00:31:23.216 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmRjNmQ0Yzg3Mzc0NGIyZTMxZjE4NjkzMGFhNDg3YTgwMDM3NTExZmZjN2RjMDQ49rPzwQ==: ]] 00:31:23.216 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmRjNmQ0Yzg3Mzc0NGIyZTMxZjE4NjkzMGFhNDg3YTgwMDM3NTExZmZjN2RjMDQ49rPzwQ==: 00:31:23.216 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:31:23.216 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:23.216 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:23.216 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:23.216 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:23.216 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:23.216 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:23.216 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.216 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:23.216 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.216 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:23.216 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:23.216 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:23.216 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:23.216 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:23.216 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:23.216 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:23.216 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:23.216 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:23.216 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:23.217 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:23.217 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:23.217 18:41:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.217 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:23.476 nvme0n1 00:31:23.476 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.476 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:23.476 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:23.476 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.476 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:23.476 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.476 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:23.476 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:23.476 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.476 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:23.476 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.476 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:23.476 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:31:23.476 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:23.476 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:23.476 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:23.476 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:23.476 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2YwMTJiYzNmOTEwMjBjYTdhM2M2ZmY0NzliODgxYTLVHRDB: 00:31:23.476 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjE4MmMyMWY5MGYxOWZiYTc1ZTE4ZWVlNmQ5NzZiYWOVTPQi: 00:31:23.476 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:23.476 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:23.476 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2YwMTJiYzNmOTEwMjBjYTdhM2M2ZmY0NzliODgxYTLVHRDB: 00:31:23.476 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjE4MmMyMWY5MGYxOWZiYTc1ZTE4ZWVlNmQ5NzZiYWOVTPQi: ]] 00:31:23.476 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjE4MmMyMWY5MGYxOWZiYTc1ZTE4ZWVlNmQ5NzZiYWOVTPQi: 00:31:23.476 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:31:23.476 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:23.476 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:23.476 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:23.476 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:23.476 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:23.476 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:23.476 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.476 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:23.476 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.476 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:23.476 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:23.476 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:23.476 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:23.476 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:23.476 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:23.476 18:41:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:23.476 18:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:23.476 18:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:23.476 18:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:23.476 18:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:23.476 18:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:23.476 18:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.476 18:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:24.045 nvme0n1 00:31:24.045 18:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.045 18:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:24.045 18:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:24.045 18:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.045 18:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:24.045 18:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.045 18:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:24.045 18:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:24.045 18:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.045 18:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:24.045 18:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.045 18:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:24.045 18:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:31:24.045 18:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:24.045 18:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:24.045 18:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:24.045 18:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:24.045 18:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjEwNzQ5NjhmMDg2OTIzZmUwYjI3ODNmOTFiNGIyZDQ2MTBhZTdiZGU4MGNkNGY0jv9JLg==: 00:31:24.045 18:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWJlYjMxMDhmN2VmOTE5MzU1NzVjZmUxMzEyNWFkNDI5qb6Z: 00:31:24.045 18:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:24.045 18:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:24.045 18:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjEwNzQ5NjhmMDg2OTIzZmUwYjI3ODNmOTFiNGIyZDQ2MTBhZTdiZGU4MGNkNGY0jv9JLg==: 00:31:24.045 18:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWJlYjMxMDhmN2VmOTE5MzU1NzVjZmUxMzEyNWFkNDI5qb6Z: ]] 00:31:24.045 18:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWJlYjMxMDhmN2VmOTE5MzU1NzVjZmUxMzEyNWFkNDI5qb6Z: 00:31:24.045 18:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:31:24.045 18:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:24.045 18:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:24.045 18:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:24.045 18:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:24.045 18:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:24.045 18:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:24.045 18:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.045 18:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:24.045 18:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.045 18:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:24.045 18:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:24.045 18:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:24.045 18:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:24.045 18:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:24.045 18:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:24.045 18:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:24.045 18:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:24.045 18:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:24.045 18:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:24.045 18:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:24.045 18:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:24.045 18:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.045 18:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:24.613 nvme0n1 00:31:24.613 18:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.613 18:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:24.613 18:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.613 18:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:24.613 18:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:24.613 18:41:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.613 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:24.613 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:24.613 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.613 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:24.613 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.613 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:24.613 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:31:24.613 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:24.613 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:24.613 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:24.613 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:24.613 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDc3NWJhNjJkODMyMjE1YTUzNTY1MDFiZTMwYmVjYWJhYjE2NDYzYjljMmRlNGRkYmY0YzJkNzU3Mjg5MmZiMuULAeA=: 00:31:24.613 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:24.613 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:24.613 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:24.613 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDc3NWJhNjJkODMyMjE1YTUzNTY1MDFiZTMwYmVjYWJhYjE2NDYzYjljMmRlNGRkYmY0YzJkNzU3Mjg5MmZiMuULAeA=: 00:31:24.613 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:24.613 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:31:24.613 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:24.613 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:24.613 18:41:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:24.613 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:24.613 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:24.613 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:24.613 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.613 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:24.613 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.613 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:24.613 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:24.613 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:24.613 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:24.613 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:24.613 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:24.613 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:24.613 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:24.613 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:24.613 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:24.613 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:24.613 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:24.613 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.613 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:25.180 nvme0n1 00:31:25.180 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:25.180 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:25.180 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:25.180 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:25.180 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:25.180 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:25.180 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:25.180 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:25.180 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:25.180 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:25.180 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:25.180 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:25.180 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:25.180 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:31:25.180 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:25.180 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:25.180 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:25.180 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:25.180 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzBiMGRlNmM2YzJlMTVlYTY0NGM4MzgyNGRiMDNkZmZ8l8r4: 00:31:25.180 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmEyNmY0ZThmN2JjMTk0NTZlZDc4NjY3OThkOTQ4NTc3ZWRmNWMxNjUwMjkzMmIwMzA3Mjg5N2E3MjM5ODU3ZQb9abA=: 00:31:25.180 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:25.180 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:25.180 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzBiMGRlNmM2YzJlMTVlYTY0NGM4MzgyNGRiMDNkZmZ8l8r4: 00:31:25.180 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmEyNmY0ZThmN2JjMTk0NTZlZDc4NjY3OThkOTQ4NTc3ZWRmNWMxNjUwMjkzMmIwMzA3Mjg5N2E3MjM5ODU3ZQb9abA=: ]] 00:31:25.180 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmEyNmY0ZThmN2JjMTk0NTZlZDc4NjY3OThkOTQ4NTc3ZWRmNWMxNjUwMjkzMmIwMzA3Mjg5N2E3MjM5ODU3ZQb9abA=: 00:31:25.180 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:31:25.180 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:25.180 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:25.180 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:25.180 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:25.180 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:25.180 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:25.180 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:25.180 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:25.180 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:25.180 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:25.180 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:25.180 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:25.180 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:25.180 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:25.180 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:25.180 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:25.180 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:25.180 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:25.180 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:25.180 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:25.180 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:25.180 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:25.180 18:41:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.115 nvme0n1 00:31:26.115 18:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.115 18:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:26.115 18:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:26.115 18:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.115 18:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.115 18:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.115 18:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:26.115 18:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:26.115 18:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.115 18:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.115 18:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.115 18:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:26.115 18:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:31:26.115 18:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:26.115 18:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:26.115 18:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:26.115 18:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:26.115 18:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTAxZDViYWYzZTQyNTNjZmU4ZjcwMzgxMmYzYzkyNTljZDQ4YTAwZGMwZGUyZjA5dM0fcQ==: 00:31:26.115 18:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmRjNmQ0Yzg3Mzc0NGIyZTMxZjE4NjkzMGFhNDg3YTgwMDM3NTExZmZjN2RjMDQ49rPzwQ==: 00:31:26.115 18:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:26.115 18:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:26.115 18:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NTAxZDViYWYzZTQyNTNjZmU4ZjcwMzgxMmYzYzkyNTljZDQ4YTAwZGMwZGUyZjA5dM0fcQ==: 00:31:26.115 18:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmRjNmQ0Yzg3Mzc0NGIyZTMxZjE4NjkzMGFhNDg3YTgwMDM3NTExZmZjN2RjMDQ49rPzwQ==: ]] 00:31:26.115 18:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmRjNmQ0Yzg3Mzc0NGIyZTMxZjE4NjkzMGFhNDg3YTgwMDM3NTExZmZjN2RjMDQ49rPzwQ==: 00:31:26.115 18:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:31:26.115 18:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:26.116 18:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:26.116 18:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:26.116 18:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:26.116 18:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:26.116 18:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:26.116 18:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.116 18:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:26.116 18:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.116 18:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:26.116 18:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:26.116 18:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:26.116 18:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:26.116 18:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:26.116 18:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:26.116 18:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:26.116 18:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:26.116 18:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:26.116 18:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:26.116 18:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:26.116 18:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:26.116 18:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.116 18:41:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.053 nvme0n1 00:31:27.053 18:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.053 18:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:27.053 18:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:27.053 18:41:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.053 18:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.053 18:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.053 18:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:27.053 18:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:27.053 18:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.053 18:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.053 18:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.053 18:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:27.053 18:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:31:27.053 18:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:27.053 18:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:27.053 18:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:27.053 18:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:27.053 18:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2YwMTJiYzNmOTEwMjBjYTdhM2M2ZmY0NzliODgxYTLVHRDB: 00:31:27.053 18:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjE4MmMyMWY5MGYxOWZiYTc1ZTE4ZWVlNmQ5NzZiYWOVTPQi: 00:31:27.053 18:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:27.053 18:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:27.053 18:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2YwMTJiYzNmOTEwMjBjYTdhM2M2ZmY0NzliODgxYTLVHRDB: 00:31:27.053 18:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjE4MmMyMWY5MGYxOWZiYTc1ZTE4ZWVlNmQ5NzZiYWOVTPQi: ]] 00:31:27.053 18:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjE4MmMyMWY5MGYxOWZiYTc1ZTE4ZWVlNmQ5NzZiYWOVTPQi: 00:31:27.053 18:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:31:27.053 18:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:27.053 18:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:27.053 18:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:27.053 18:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:27.053 18:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:27.053 18:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:27.053 18:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.053 18:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.053 18:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.053 18:41:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:27.053 18:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:27.053 18:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:27.053 18:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:27.053 18:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:27.053 18:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:27.053 18:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:27.053 18:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:27.053 18:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:27.053 18:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:27.053 18:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:27.053 18:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:27.053 18:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.053 18:41:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.988 nvme0n1 00:31:27.988 18:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.988 18:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:27.988 18:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.988 18:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:27.988 18:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.988 18:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.988 18:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:27.988 18:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:27.988 18:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.988 18:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.988 18:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.988 18:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:27.988 18:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:31:27.988 18:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:27.988 18:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:27.988 18:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:27.988 18:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:27.988 18:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZjEwNzQ5NjhmMDg2OTIzZmUwYjI3ODNmOTFiNGIyZDQ2MTBhZTdiZGU4MGNkNGY0jv9JLg==: 00:31:27.988 18:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWJlYjMxMDhmN2VmOTE5MzU1NzVjZmUxMzEyNWFkNDI5qb6Z: 00:31:27.988 18:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:27.988 18:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:27.988 18:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjEwNzQ5NjhmMDg2OTIzZmUwYjI3ODNmOTFiNGIyZDQ2MTBhZTdiZGU4MGNkNGY0jv9JLg==: 00:31:27.988 18:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWJlYjMxMDhmN2VmOTE5MzU1NzVjZmUxMzEyNWFkNDI5qb6Z: ]] 00:31:27.988 18:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWJlYjMxMDhmN2VmOTE5MzU1NzVjZmUxMzEyNWFkNDI5qb6Z: 00:31:27.988 18:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:31:27.988 18:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:27.988 18:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:27.988 18:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:27.988 18:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:27.988 18:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:27.988 18:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:27.988 18:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.988 18:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.988 18:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.988 18:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:27.988 18:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:27.988 18:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:27.988 18:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:27.988 18:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:27.988 18:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:27.988 18:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:27.988 18:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:27.988 18:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:27.988 18:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:27.988 18:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:27.988 18:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:27.988 18:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.988 
18:41:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.927 nvme0n1 00:31:28.927 18:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.927 18:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:28.927 18:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:28.927 18:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.927 18:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.927 18:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.927 18:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:28.927 18:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:28.927 18:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.927 18:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.927 18:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.927 18:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:28.927 18:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:31:28.927 18:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:28.927 18:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:28.927 18:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:28.927 18:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:28.927 18:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDc3NWJhNjJkODMyMjE1YTUzNTY1MDFiZTMwYmVjYWJhYjE2NDYzYjljMmRlNGRkYmY0YzJkNzU3Mjg5MmZiMuULAeA=: 00:31:28.927 18:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:28.927 18:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:28.927 18:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:28.927 18:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDc3NWJhNjJkODMyMjE1YTUzNTY1MDFiZTMwYmVjYWJhYjE2NDYzYjljMmRlNGRkYmY0YzJkNzU3Mjg5MmZiMuULAeA=: 00:31:28.927 18:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:28.927 18:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:31:28.927 18:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:28.927 18:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:28.927 18:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:28.927 18:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:28.927 18:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:28.927 18:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:28.927 18:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.927 18:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.927 18:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.927 18:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:28.927 18:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:28.927 18:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:28.927 18:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:28.927 18:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:28.927 18:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:28.927 18:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:28.927 18:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:28.927 18:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:28.927 18:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:28.927 18:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:28.927 18:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:28.927 18:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.927 18:41:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.308 nvme0n1 00:31:30.308 18:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.308 18:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:30.308 18:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:30.308 18:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.309 18:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.309 18:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.309 18:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:30.309 18:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:30.309 18:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.309 18:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.309 18:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.309 18:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:30.309 18:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:30.309 18:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:31:30.309 18:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:30.309 18:41:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:30.309 18:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:30.309 18:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:30.309 18:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzBiMGRlNmM2YzJlMTVlYTY0NGM4MzgyNGRiMDNkZmZ8l8r4: 00:31:30.309 18:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmEyNmY0ZThmN2JjMTk0NTZlZDc4NjY3OThkOTQ4NTc3ZWRmNWMxNjUwMjkzMmIwMzA3Mjg5N2E3MjM5ODU3ZQb9abA=: 00:31:30.309 18:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:30.309 18:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:30.309 18:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzBiMGRlNmM2YzJlMTVlYTY0NGM4MzgyNGRiMDNkZmZ8l8r4: 00:31:30.309 18:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmEyNmY0ZThmN2JjMTk0NTZlZDc4NjY3OThkOTQ4NTc3ZWRmNWMxNjUwMjkzMmIwMzA3Mjg5N2E3MjM5ODU3ZQb9abA=: ]] 00:31:30.309 18:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmEyNmY0ZThmN2JjMTk0NTZlZDc4NjY3OThkOTQ4NTc3ZWRmNWMxNjUwMjkzMmIwMzA3Mjg5N2E3MjM5ODU3ZQb9abA=: 00:31:30.309 18:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:31:30.309 18:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:30.309 18:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:30.309 18:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:30.309 18:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:30.309 18:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:30.309 18:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:30.309 18:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.309 18:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.309 18:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.309 18:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:30.309 18:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:30.309 18:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:30.309 18:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:30.309 18:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:30.309 18:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:30.309 18:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:30.309 18:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:30.309 18:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:30.309 18:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:30.309 18:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:30.309 18:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:30.309 18:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.309 18:41:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.220 nvme0n1 00:31:32.220 18:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.220 18:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:32.220 18:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.220 18:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.220 18:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:32.220 18:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.220 18:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:32.220 18:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:32.220 18:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.220 18:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.220 18:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.220 18:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:32.220 18:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:31:32.220 18:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:32.220 18:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:32.220 18:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:32.220 18:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:32.220 18:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTAxZDViYWYzZTQyNTNjZmU4ZjcwMzgxMmYzYzkyNTljZDQ4YTAwZGMwZGUyZjA5dM0fcQ==: 00:31:32.220 18:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmRjNmQ0Yzg3Mzc0NGIyZTMxZjE4NjkzMGFhNDg3YTgwMDM3NTExZmZjN2RjMDQ49rPzwQ==: 00:31:32.220 18:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:32.220 18:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:32.220 18:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTAxZDViYWYzZTQyNTNjZmU4ZjcwMzgxMmYzYzkyNTljZDQ4YTAwZGMwZGUyZjA5dM0fcQ==: 00:31:32.220 18:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmRjNmQ0Yzg3Mzc0NGIyZTMxZjE4NjkzMGFhNDg3YTgwMDM3NTExZmZjN2RjMDQ49rPzwQ==: ]] 00:31:32.220 18:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmRjNmQ0Yzg3Mzc0NGIyZTMxZjE4NjkzMGFhNDg3YTgwMDM3NTExZmZjN2RjMDQ49rPzwQ==: 00:31:32.220 18:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:31:32.220 18:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:32.220 18:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:32.220 18:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:32.220 18:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:32.220 18:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:32.220 18:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:32.220 18:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.220 18:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.221 18:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.221 18:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:32.221 18:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:32.221 18:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:32.221 18:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:32.221 18:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:32.221 18:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:32.221 18:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:32.221 18:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:32.221 18:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:32.221 18:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:32.221 18:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:32.221 18:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:32.221 18:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.221 18:42:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.599 nvme0n1 00:31:33.599 18:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.599 18:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:33.599 18:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.599 18:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.599 18:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:33.599 18:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.599 18:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:33.599 18:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:33.600 18:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:31:33.600 18:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.859 18:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.859 18:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:33.859 18:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:31:33.859 18:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:33.859 18:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:33.859 18:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:33.859 18:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:33.859 18:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2YwMTJiYzNmOTEwMjBjYTdhM2M2ZmY0NzliODgxYTLVHRDB: 00:31:33.859 18:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjE4MmMyMWY5MGYxOWZiYTc1ZTE4ZWVlNmQ5NzZiYWOVTPQi: 00:31:33.859 18:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:33.859 18:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:33.859 18:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2YwMTJiYzNmOTEwMjBjYTdhM2M2ZmY0NzliODgxYTLVHRDB: 00:31:33.859 18:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjE4MmMyMWY5MGYxOWZiYTc1ZTE4ZWVlNmQ5NzZiYWOVTPQi: ]] 00:31:33.859 18:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjE4MmMyMWY5MGYxOWZiYTc1ZTE4ZWVlNmQ5NzZiYWOVTPQi: 00:31:33.859 18:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:31:33.859 18:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:33.859 18:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:33.859 18:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:33.859 18:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:33.859 18:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:33.859 18:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:33.859 18:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.859 18:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.859 18:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.859 18:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:33.859 18:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:33.859 18:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:33.859 18:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:33.859 18:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:33.859 18:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:33.859 
18:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:33.859 18:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:33.859 18:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:33.859 18:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:33.859 18:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:33.859 18:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:33.859 18:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.859 18:42:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.762 nvme0n1 00:31:35.762 18:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.762 18:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:35.762 18:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:35.762 18:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.762 18:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.762 18:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.762 18:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:35.762 18:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:35.762 18:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.762 18:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.762 18:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.762 18:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:35.762 18:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:31:35.762 18:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:35.762 18:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:35.762 18:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:35.762 18:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:35.762 18:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjEwNzQ5NjhmMDg2OTIzZmUwYjI3ODNmOTFiNGIyZDQ2MTBhZTdiZGU4MGNkNGY0jv9JLg==: 00:31:35.762 18:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWJlYjMxMDhmN2VmOTE5MzU1NzVjZmUxMzEyNWFkNDI5qb6Z: 00:31:35.762 18:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:35.762 18:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:35.762 18:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjEwNzQ5NjhmMDg2OTIzZmUwYjI3ODNmOTFiNGIyZDQ2MTBhZTdiZGU4MGNkNGY0jv9JLg==: 00:31:35.762 18:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:MWJlYjMxMDhmN2VmOTE5MzU1NzVjZmUxMzEyNWFkNDI5qb6Z: ]] 00:31:35.762 18:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWJlYjMxMDhmN2VmOTE5MzU1NzVjZmUxMzEyNWFkNDI5qb6Z: 00:31:35.762 18:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:31:35.762 18:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:35.762 18:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:35.762 18:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:35.762 18:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:35.762 18:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:35.762 18:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:35.762 18:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.762 18:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.762 18:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.762 18:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:35.762 18:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:35.762 18:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:35.762 18:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:35.762 18:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:35.762 18:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:35.762 18:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:35.762 18:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:35.762 18:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:35.762 18:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:35.762 18:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:35.762 18:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:35.762 18:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.762 18:42:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.699 nvme0n1 00:31:36.699 18:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.699 18:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:36.699 18:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:36.699 18:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.699 18:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.699 18:42:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.699 18:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:36.699 18:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:36.699 18:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.699 18:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.699 18:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.699 18:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:36.699 18:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:31:36.699 18:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:36.699 18:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:36.699 18:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:36.699 18:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:36.699 18:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDc3NWJhNjJkODMyMjE1YTUzNTY1MDFiZTMwYmVjYWJhYjE2NDYzYjljMmRlNGRkYmY0YzJkNzU3Mjg5MmZiMuULAeA=: 00:31:36.699 18:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:36.699 18:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:36.699 18:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:36.699 18:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDc3NWJhNjJkODMyMjE1YTUzNTY1MDFiZTMwYmVjYWJhYjE2NDYzYjljMmRlNGRkYmY0YzJkNzU3Mjg5MmZiMuULAeA=: 00:31:36.699 18:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:36.699 18:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:31:36.699 18:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:36.699 18:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:36.699 18:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:36.699 18:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:36.699 18:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:36.699 18:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:36.699 18:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.699 18:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.699 18:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.699 18:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:36.699 18:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:36.699 18:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:36.699 18:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:36.699 18:42:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:36.699 18:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:36.699 18:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:36.699 18:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:36.699 18:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:36.699 18:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:36.699 18:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:36.699 18:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:36.699 18:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.699 18:42:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.075 nvme0n1 00:31:38.075 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.075 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:38.075 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.075 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.075 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:38.075 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.075 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:38.075 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:38.075 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.075 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.075 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.075 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:31:38.075 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:38.075 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:38.075 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:31:38.075 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:38.075 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:38.075 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:38.075 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:38.075 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzBiMGRlNmM2YzJlMTVlYTY0NGM4MzgyNGRiMDNkZmZ8l8r4: 00:31:38.075 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZmEyNmY0ZThmN2JjMTk0NTZlZDc4NjY3OThkOTQ4NTc3ZWRmNWMxNjUwMjkzMmIwMzA3Mjg5N2E3MjM5ODU3ZQb9abA=: 00:31:38.075 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:38.075 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:38.075 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzBiMGRlNmM2YzJlMTVlYTY0NGM4MzgyNGRiMDNkZmZ8l8r4: 00:31:38.075 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmEyNmY0ZThmN2JjMTk0NTZlZDc4NjY3OThkOTQ4NTc3ZWRmNWMxNjUwMjkzMmIwMzA3Mjg5N2E3MjM5ODU3ZQb9abA=: ]] 00:31:38.075 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmEyNmY0ZThmN2JjMTk0NTZlZDc4NjY3OThkOTQ4NTc3ZWRmNWMxNjUwMjkzMmIwMzA3Mjg5N2E3MjM5ODU3ZQb9abA=: 00:31:38.075 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:31:38.075 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:38.075 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:38.075 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:38.075 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:38.075 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:38.075 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:38.075 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.075 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.075 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.075 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:38.075 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:38.075 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:38.075 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:38.075 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:38.075 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:38.075 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:38.075 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:38.075 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:38.075 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:38.075 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:38.075 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:38.075 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.075 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:38.335 nvme0n1 00:31:38.335 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.335 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:38.335 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:38.335 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.335 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.335 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.335 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:38.335 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:38.335 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.335 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.335 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.335 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:38.335 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:31:38.335 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:38.335 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:38.335 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:38.335 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:38.335 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTAxZDViYWYzZTQyNTNjZmU4ZjcwMzgxMmYzYzkyNTljZDQ4YTAwZGMwZGUyZjA5dM0fcQ==: 00:31:38.335 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmRjNmQ0Yzg3Mzc0NGIyZTMxZjE4NjkzMGFhNDg3YTgwMDM3NTExZmZjN2RjMDQ49rPzwQ==: 00:31:38.335 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:38.335 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:38.335 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTAxZDViYWYzZTQyNTNjZmU4ZjcwMzgxMmYzYzkyNTljZDQ4YTAwZGMwZGUyZjA5dM0fcQ==: 00:31:38.335 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmRjNmQ0Yzg3Mzc0NGIyZTMxZjE4NjkzMGFhNDg3YTgwMDM3NTExZmZjN2RjMDQ49rPzwQ==: ]] 00:31:38.335 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmRjNmQ0Yzg3Mzc0NGIyZTMxZjE4NjkzMGFhNDg3YTgwMDM3NTExZmZjN2RjMDQ49rPzwQ==: 00:31:38.335 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:31:38.335 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:38.335 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:38.335 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:38.335 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:38.335 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:31:38.335 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:38.335 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.335 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.335 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.335 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:38.335 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:38.335 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:38.335 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:38.335 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:38.335 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:38.335 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:38.335 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:38.335 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:38.335 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:38.335 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:38.335 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:38.335 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.335 18:42:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.594 nvme0n1 00:31:38.594 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.594 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:38.594 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:38.594 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.594 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.594 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.594 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:38.594 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:38.594 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.594 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.594 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.594 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:38.594 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:31:38.594 
18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:38.594 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:38.594 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:38.594 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:38.594 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2YwMTJiYzNmOTEwMjBjYTdhM2M2ZmY0NzliODgxYTLVHRDB: 00:31:38.595 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjE4MmMyMWY5MGYxOWZiYTc1ZTE4ZWVlNmQ5NzZiYWOVTPQi: 00:31:38.595 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:38.595 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:38.595 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2YwMTJiYzNmOTEwMjBjYTdhM2M2ZmY0NzliODgxYTLVHRDB: 00:31:38.908 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjE4MmMyMWY5MGYxOWZiYTc1ZTE4ZWVlNmQ5NzZiYWOVTPQi: ]] 00:31:38.908 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjE4MmMyMWY5MGYxOWZiYTc1ZTE4ZWVlNmQ5NzZiYWOVTPQi: 00:31:38.908 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:31:38.908 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:38.908 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:38.908 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:38.908 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:38.908 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:38.908 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:38.908 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.908 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.908 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.908 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:38.908 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:38.908 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:38.908 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:38.908 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:38.908 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:38.908 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:38.908 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:38.908 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:38.908 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:38.908 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:38.908 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:38.908 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.908 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.908 nvme0n1 00:31:38.908 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.908 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:38.908 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:38.908 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.908 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.908 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.908 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:38.908 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:38.908 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.908 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.908 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.908 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:38.908 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:31:38.908 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:38.908 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:38.908 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:38.908 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:38.908 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjEwNzQ5NjhmMDg2OTIzZmUwYjI3ODNmOTFiNGIyZDQ2MTBhZTdiZGU4MGNkNGY0jv9JLg==: 00:31:38.908 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWJlYjMxMDhmN2VmOTE5MzU1NzVjZmUxMzEyNWFkNDI5qb6Z: 00:31:38.908 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:38.908 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:38.908 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjEwNzQ5NjhmMDg2OTIzZmUwYjI3ODNmOTFiNGIyZDQ2MTBhZTdiZGU4MGNkNGY0jv9JLg==: 00:31:38.908 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWJlYjMxMDhmN2VmOTE5MzU1NzVjZmUxMzEyNWFkNDI5qb6Z: ]] 00:31:38.908 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWJlYjMxMDhmN2VmOTE5MzU1NzVjZmUxMzEyNWFkNDI5qb6Z: 00:31:38.908 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:31:38.908 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:38.908 
18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:38.908 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:38.908 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:38.908 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:38.908 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:38.908 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.908 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.908 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.908 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:39.169 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:39.169 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:39.169 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:39.169 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:39.169 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:39.169 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:39.169 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:39.169 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:39.169 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:39.169 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:39.169 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:39.169 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.169 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.169 nvme0n1 00:31:39.169 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.169 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:39.169 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:39.169 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.169 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.169 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.169 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:39.169 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:39.169 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.169 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:39.169 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.169 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:39.169 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:31:39.169 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:39.169 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:39.169 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:39.169 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:39.169 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDc3NWJhNjJkODMyMjE1YTUzNTY1MDFiZTMwYmVjYWJhYjE2NDYzYjljMmRlNGRkYmY0YzJkNzU3Mjg5MmZiMuULAeA=: 00:31:39.169 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:39.169 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:39.169 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:39.169 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDc3NWJhNjJkODMyMjE1YTUzNTY1MDFiZTMwYmVjYWJhYjE2NDYzYjljMmRlNGRkYmY0YzJkNzU3Mjg5MmZiMuULAeA=: 00:31:39.169 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:39.169 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:31:39.169 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:39.169 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:39.169 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:39.169 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:39.169 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:39.169 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:39.169 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.169 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.169 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.169 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:39.169 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:39.169 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:39.169 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:39.169 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:39.169 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:39.169 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:39.169 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:39.169 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:39.169 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:39.169 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:39.169 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:39.169 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.169 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.428 nvme0n1 00:31:39.428 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.428 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:39.428 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:39.428 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.428 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.428 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.428 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:39.428 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:39.428 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.429 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.429 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.429 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:39.429 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:39.429 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:31:39.429 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:39.429 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:39.429 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:39.429 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:39.429 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzBiMGRlNmM2YzJlMTVlYTY0NGM4MzgyNGRiMDNkZmZ8l8r4: 00:31:39.429 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmEyNmY0ZThmN2JjMTk0NTZlZDc4NjY3OThkOTQ4NTc3ZWRmNWMxNjUwMjkzMmIwMzA3Mjg5N2E3MjM5ODU3ZQb9abA=: 00:31:39.429 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:39.429 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:39.429 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzBiMGRlNmM2YzJlMTVlYTY0NGM4MzgyNGRiMDNkZmZ8l8r4: 00:31:39.429 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmEyNmY0ZThmN2JjMTk0NTZlZDc4NjY3OThkOTQ4NTc3ZWRmNWMxNjUwMjkzMmIwMzA3Mjg5N2E3MjM5ODU3ZQb9abA=: ]] 00:31:39.429 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZmEyNmY0ZThmN2JjMTk0NTZlZDc4NjY3OThkOTQ4NTc3ZWRmNWMxNjUwMjkzMmIwMzA3Mjg5N2E3MjM5ODU3ZQb9abA=: 00:31:39.429 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:31:39.429 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:39.429 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:39.429 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:39.429 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:39.429 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:39.429 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:39.429 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.429 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.429 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.429 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:39.429 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:39.429 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:39.429 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:39.429 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:39.429 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:39.429 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:39.429 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:39.429 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:39.429 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:39.429 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:39.429 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:39.429 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.429 18:42:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.687 nvme0n1 00:31:39.687 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.687 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:39.687 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:39.687 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.687 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.687 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.687 
18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:39.688 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:39.688 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.688 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.688 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.688 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:39.688 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:31:39.688 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:39.688 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:39.688 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:39.688 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:39.688 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTAxZDViYWYzZTQyNTNjZmU4ZjcwMzgxMmYzYzkyNTljZDQ4YTAwZGMwZGUyZjA5dM0fcQ==: 00:31:39.688 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmRjNmQ0Yzg3Mzc0NGIyZTMxZjE4NjkzMGFhNDg3YTgwMDM3NTExZmZjN2RjMDQ49rPzwQ==: 00:31:39.688 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:39.688 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:39.688 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTAxZDViYWYzZTQyNTNjZmU4ZjcwMzgxMmYzYzkyNTljZDQ4YTAwZGMwZGUyZjA5dM0fcQ==: 00:31:39.688 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmRjNmQ0Yzg3Mzc0NGIyZTMxZjE4NjkzMGFhNDg3YTgwMDM3NTExZmZjN2RjMDQ49rPzwQ==: ]] 00:31:39.688 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmRjNmQ0Yzg3Mzc0NGIyZTMxZjE4NjkzMGFhNDg3YTgwMDM3NTExZmZjN2RjMDQ49rPzwQ==: 00:31:39.688 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:31:39.688 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:39.688 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:39.688 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:39.688 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:39.688 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:39.688 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:39.688 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.688 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.688 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.688 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:39.688 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:39.688 18:42:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:39.688 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:39.688 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:39.688 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:39.688 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:39.688 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:39.688 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:39.688 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:39.688 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:39.688 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:39.688 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.688 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.946 nvme0n1 00:31:39.946 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.946 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:39.946 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:39.946 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.946 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.946 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.946 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:39.946 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:39.946 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.946 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.946 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.946 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:39.946 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:31:39.946 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:39.946 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:39.946 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:39.946 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:39.946 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2YwMTJiYzNmOTEwMjBjYTdhM2M2ZmY0NzliODgxYTLVHRDB: 00:31:39.946 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjE4MmMyMWY5MGYxOWZiYTc1ZTE4ZWVlNmQ5NzZiYWOVTPQi: 00:31:39.946 18:42:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:39.946 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:39.946 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2YwMTJiYzNmOTEwMjBjYTdhM2M2ZmY0NzliODgxYTLVHRDB: 00:31:39.946 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjE4MmMyMWY5MGYxOWZiYTc1ZTE4ZWVlNmQ5NzZiYWOVTPQi: ]] 00:31:39.946 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjE4MmMyMWY5MGYxOWZiYTc1ZTE4ZWVlNmQ5NzZiYWOVTPQi: 00:31:39.946 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:31:39.946 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:39.946 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:39.946 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:39.946 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:39.946 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:39.946 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:39.946 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.946 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.946 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.946 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:39.946 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:39.946 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:39.946 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:39.946 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:39.946 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:39.946 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:39.946 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:39.946 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:39.946 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:39.947 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:39.947 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:39.947 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.947 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.205 nvme0n1 00:31:40.205 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.205 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:31:40.205 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:40.205 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.205 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.205 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.205 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:40.205 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:40.205 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.205 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.205 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.205 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:40.205 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:31:40.205 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:40.205 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:40.205 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:40.205 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:40.205 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjEwNzQ5NjhmMDg2OTIzZmUwYjI3ODNmOTFiNGIyZDQ2MTBhZTdiZGU4MGNkNGY0jv9JLg==: 00:31:40.205 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWJlYjMxMDhmN2VmOTE5MzU1NzVjZmUxMzEyNWFkNDI5qb6Z: 00:31:40.205 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:40.205 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:40.205 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjEwNzQ5NjhmMDg2OTIzZmUwYjI3ODNmOTFiNGIyZDQ2MTBhZTdiZGU4MGNkNGY0jv9JLg==: 00:31:40.205 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWJlYjMxMDhmN2VmOTE5MzU1NzVjZmUxMzEyNWFkNDI5qb6Z: ]] 00:31:40.205 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWJlYjMxMDhmN2VmOTE5MzU1NzVjZmUxMzEyNWFkNDI5qb6Z: 00:31:40.205 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:31:40.205 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:40.205 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:40.205 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:40.205 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:40.205 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:40.205 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:40.205 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.205 18:42:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.205 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.465 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:40.465 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:40.465 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:40.465 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:40.465 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:40.465 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:40.465 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:40.465 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:40.465 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:40.465 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:40.465 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:40.465 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:40.465 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.465 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.465 nvme0n1 00:31:40.465 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.465 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:40.465 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:40.465 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.465 18:42:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.726 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.726 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:40.726 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:40.726 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.726 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.726 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.726 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:40.726 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:31:40.726 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:40.726 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:40.726 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:40.726 
18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:40.726 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDc3NWJhNjJkODMyMjE1YTUzNTY1MDFiZTMwYmVjYWJhYjE2NDYzYjljMmRlNGRkYmY0YzJkNzU3Mjg5MmZiMuULAeA=: 00:31:40.726 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:40.726 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:40.726 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:40.726 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDc3NWJhNjJkODMyMjE1YTUzNTY1MDFiZTMwYmVjYWJhYjE2NDYzYjljMmRlNGRkYmY0YzJkNzU3Mjg5MmZiMuULAeA=: 00:31:40.726 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:40.726 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:31:40.726 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:40.726 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:40.726 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:40.726 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:40.726 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:40.726 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:40.726 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.726 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.726 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.726 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:40.726 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:40.726 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:40.726 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:40.726 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:40.726 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:40.726 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:40.726 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:40.726 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:40.726 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:40.726 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:40.726 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:40.726 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.726 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
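For readability, here is one full connect_authenticate pass, as visible in the log, written out as plain SPDK RPC calls. This is only a sketch: scripts/rpc.py is the usual front-end behind rpc_cmd, and "key4" names a keyring entry that the test registered before this excerpt begins.

    # host-side half of one iteration (digest sha512, dhgroup ffdhe3072, keyid 4)
    rpc=scripts/rpc.py
    $rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
         -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
         --dhchap-key key4        # keyid 4 has no controller secret, so no --dhchap-ctrlr-key here
    $rpc bdev_nvme_get_controllers | jq -r '.[].name'   # the test expects exactly "nvme0"
    $rpc bdev_nvme_detach_controller nvme0

On success the attach RPC reports the created bdev name (the bare "nvme0n1" lines in the log), and the get_controllers/jq check plus bdev_nvme_detach_controller correspond to the host/auth.sh@64 and @65 markers above.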
00:31:40.987 nvme0n1 00:31:40.987 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.987 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:40.987 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:40.987 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.987 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.987 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.987 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:40.987 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:40.987 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.987 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.987 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.987 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:40.987 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:40.987 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:31:40.987 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:40.987 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:40.987 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:40.987 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:40.987 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzBiMGRlNmM2YzJlMTVlYTY0NGM4MzgyNGRiMDNkZmZ8l8r4: 00:31:40.987 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmEyNmY0ZThmN2JjMTk0NTZlZDc4NjY3OThkOTQ4NTc3ZWRmNWMxNjUwMjkzMmIwMzA3Mjg5N2E3MjM5ODU3ZQb9abA=: 00:31:40.987 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:40.987 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:40.987 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzBiMGRlNmM2YzJlMTVlYTY0NGM4MzgyNGRiMDNkZmZ8l8r4: 00:31:40.988 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmEyNmY0ZThmN2JjMTk0NTZlZDc4NjY3OThkOTQ4NTc3ZWRmNWMxNjUwMjkzMmIwMzA3Mjg5N2E3MjM5ODU3ZQb9abA=: ]] 00:31:40.988 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmEyNmY0ZThmN2JjMTk0NTZlZDc4NjY3OThkOTQ4NTc3ZWRmNWMxNjUwMjkzMmIwMzA3Mjg5N2E3MjM5ODU3ZQb9abA=: 00:31:40.988 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:31:40.988 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:40.988 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:40.988 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:40.988 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:40.988 18:42:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:40.988 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:40.988 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.988 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.988 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.988 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:40.988 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:40.988 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:40.988 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:40.988 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:40.988 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:40.988 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:40.988 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:40.988 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:40.988 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:40.988 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:40.988 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:40.988 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.988 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.555 nvme0n1 00:31:41.555 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.555 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:41.555 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.555 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.555 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:41.555 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.555 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:41.555 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:41.555 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.555 18:42:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.555 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.555 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:41.555 18:42:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:31:41.555 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:41.555 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:41.555 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:41.555 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:41.555 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTAxZDViYWYzZTQyNTNjZmU4ZjcwMzgxMmYzYzkyNTljZDQ4YTAwZGMwZGUyZjA5dM0fcQ==: 00:31:41.555 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmRjNmQ0Yzg3Mzc0NGIyZTMxZjE4NjkzMGFhNDg3YTgwMDM3NTExZmZjN2RjMDQ49rPzwQ==: 00:31:41.555 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:41.555 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:41.555 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTAxZDViYWYzZTQyNTNjZmU4ZjcwMzgxMmYzYzkyNTljZDQ4YTAwZGMwZGUyZjA5dM0fcQ==: 00:31:41.556 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmRjNmQ0Yzg3Mzc0NGIyZTMxZjE4NjkzMGFhNDg3YTgwMDM3NTExZmZjN2RjMDQ49rPzwQ==: ]] 00:31:41.556 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmRjNmQ0Yzg3Mzc0NGIyZTMxZjE4NjkzMGFhNDg3YTgwMDM3NTExZmZjN2RjMDQ49rPzwQ==: 00:31:41.556 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:31:41.556 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:41.556 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:41.556 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:41.556 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:41.556 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:41.556 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:41.556 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.556 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.556 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.556 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:41.556 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:41.556 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:41.556 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:41.556 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:41.556 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:41.556 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:41.556 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:41.556 18:42:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:41.556 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:41.556 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:41.556 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:41.556 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.556 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.125 nvme0n1 00:31:42.125 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.125 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:42.125 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.125 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.125 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:42.125 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.125 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:42.125 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:42.125 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.125 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.125 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.125 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:42.125 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:31:42.125 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:42.125 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:42.125 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:42.125 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:42.125 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2YwMTJiYzNmOTEwMjBjYTdhM2M2ZmY0NzliODgxYTLVHRDB: 00:31:42.125 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjE4MmMyMWY5MGYxOWZiYTc1ZTE4ZWVlNmQ5NzZiYWOVTPQi: 00:31:42.125 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:42.125 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:42.125 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2YwMTJiYzNmOTEwMjBjYTdhM2M2ZmY0NzliODgxYTLVHRDB: 00:31:42.125 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjE4MmMyMWY5MGYxOWZiYTc1ZTE4ZWVlNmQ5NzZiYWOVTPQi: ]] 00:31:42.125 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjE4MmMyMWY5MGYxOWZiYTc1ZTE4ZWVlNmQ5NzZiYWOVTPQi: 00:31:42.125 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:31:42.125 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:42.125 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:42.125 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:42.125 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:42.125 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:42.125 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:42.125 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.125 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.125 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.125 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:42.125 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:42.125 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:42.125 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:42.125 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:42.125 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:42.125 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:42.125 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:42.125 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:42.125 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:42.125 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:42.125 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:42.125 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.125 18:42:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.690 nvme0n1 00:31:42.690 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.690 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:42.690 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:42.690 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.690 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.690 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.690 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:42.690 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:31:42.690 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.690 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.690 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.690 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:42.690 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:31:42.690 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:42.690 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:42.690 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:42.690 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:42.690 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjEwNzQ5NjhmMDg2OTIzZmUwYjI3ODNmOTFiNGIyZDQ2MTBhZTdiZGU4MGNkNGY0jv9JLg==: 00:31:42.690 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWJlYjMxMDhmN2VmOTE5MzU1NzVjZmUxMzEyNWFkNDI5qb6Z: 00:31:42.690 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:42.690 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:42.690 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjEwNzQ5NjhmMDg2OTIzZmUwYjI3ODNmOTFiNGIyZDQ2MTBhZTdiZGU4MGNkNGY0jv9JLg==: 00:31:42.690 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWJlYjMxMDhmN2VmOTE5MzU1NzVjZmUxMzEyNWFkNDI5qb6Z: ]] 00:31:42.690 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWJlYjMxMDhmN2VmOTE5MzU1NzVjZmUxMzEyNWFkNDI5qb6Z: 00:31:42.690 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:31:42.690 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:42.690 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:42.690 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:42.690 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:42.690 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:42.690 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:42.690 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.690 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.690 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.690 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:42.690 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:42.690 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:42.690 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:42.691 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:42.691 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:42.691 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:42.691 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:42.691 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:42.691 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:42.691 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:42.691 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:42.691 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.691 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.259 nvme0n1 00:31:43.259 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.259 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:43.259 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:43.259 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.259 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.259 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.259 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:43.259 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:43.259 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.259 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.259 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.259 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:43.259 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:31:43.259 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:43.259 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:43.259 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:43.259 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:43.259 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDc3NWJhNjJkODMyMjE1YTUzNTY1MDFiZTMwYmVjYWJhYjE2NDYzYjljMmRlNGRkYmY0YzJkNzU3Mjg5MmZiMuULAeA=: 00:31:43.259 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:43.259 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:43.259 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:43.259 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NDc3NWJhNjJkODMyMjE1YTUzNTY1MDFiZTMwYmVjYWJhYjE2NDYzYjljMmRlNGRkYmY0YzJkNzU3Mjg5MmZiMuULAeA=: 00:31:43.259 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:43.259 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:31:43.259 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:43.259 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:43.259 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:43.259 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:43.259 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:43.259 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:43.259 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.259 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.259 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.259 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:43.259 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:43.259 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:43.259 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:43.259 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:43.259 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:43.259 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:43.259 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:43.259 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:43.259 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:43.259 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:43.260 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:43.260 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.260 18:42:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.829 nvme0n1 00:31:43.829 18:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.829 18:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:43.829 18:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.829 18:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:43.829 18:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.829 18:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.829 18:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:43.829 18:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:43.829 18:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.829 18:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.829 18:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.829 18:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:43.829 18:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:43.829 18:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:31:43.829 18:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:43.829 18:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:43.829 18:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:43.829 18:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:43.829 18:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzBiMGRlNmM2YzJlMTVlYTY0NGM4MzgyNGRiMDNkZmZ8l8r4: 00:31:43.829 18:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmEyNmY0ZThmN2JjMTk0NTZlZDc4NjY3OThkOTQ4NTc3ZWRmNWMxNjUwMjkzMmIwMzA3Mjg5N2E3MjM5ODU3ZQb9abA=: 00:31:43.829 18:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:43.829 18:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:43.829 18:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzBiMGRlNmM2YzJlMTVlYTY0NGM4MzgyNGRiMDNkZmZ8l8r4: 00:31:43.829 18:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmEyNmY0ZThmN2JjMTk0NTZlZDc4NjY3OThkOTQ4NTc3ZWRmNWMxNjUwMjkzMmIwMzA3Mjg5N2E3MjM5ODU3ZQb9abA=: ]] 00:31:43.829 18:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmEyNmY0ZThmN2JjMTk0NTZlZDc4NjY3OThkOTQ4NTc3ZWRmNWMxNjUwMjkzMmIwMzA3Mjg5N2E3MjM5ODU3ZQb9abA=: 00:31:43.829 18:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:31:43.829 18:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:43.829 18:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:43.829 18:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:43.829 18:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:43.829 18:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:43.829 18:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:43.829 18:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.829 18:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.829 18:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.829 18:42:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:43.829 18:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:43.829 18:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:43.829 18:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:43.829 18:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:43.829 18:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:43.829 18:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:43.829 18:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:43.829 18:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:43.829 18:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:43.829 18:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:43.829 18:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:43.829 18:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.829 18:42:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.768 nvme0n1 00:31:44.768 18:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.768 18:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:44.768 18:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:44.768 18:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.768 18:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.768 18:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.768 18:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:44.768 18:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:44.768 18:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.768 18:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.768 18:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.768 18:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:44.768 18:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:31:44.768 18:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:44.768 18:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:44.768 18:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:44.768 18:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:44.768 18:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NTAxZDViYWYzZTQyNTNjZmU4ZjcwMzgxMmYzYzkyNTljZDQ4YTAwZGMwZGUyZjA5dM0fcQ==: 00:31:44.768 18:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmRjNmQ0Yzg3Mzc0NGIyZTMxZjE4NjkzMGFhNDg3YTgwMDM3NTExZmZjN2RjMDQ49rPzwQ==: 00:31:44.768 18:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:44.768 18:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:44.768 18:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTAxZDViYWYzZTQyNTNjZmU4ZjcwMzgxMmYzYzkyNTljZDQ4YTAwZGMwZGUyZjA5dM0fcQ==: 00:31:44.768 18:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmRjNmQ0Yzg3Mzc0NGIyZTMxZjE4NjkzMGFhNDg3YTgwMDM3NTExZmZjN2RjMDQ49rPzwQ==: ]] 00:31:44.768 18:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmRjNmQ0Yzg3Mzc0NGIyZTMxZjE4NjkzMGFhNDg3YTgwMDM3NTExZmZjN2RjMDQ49rPzwQ==: 00:31:44.768 18:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:31:44.768 18:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:44.768 18:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:44.768 18:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:44.768 18:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:44.768 18:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:44.768 18:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:44.768 18:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.768 18:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.768 18:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.768 18:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:44.768 18:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:44.768 18:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:44.768 18:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:44.768 18:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:44.768 18:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:44.768 18:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:44.768 18:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:44.768 18:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:44.768 18:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:44.768 18:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:44.768 18:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:44.768 18:42:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.768 18:42:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.705 nvme0n1 00:31:45.705 18:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.705 18:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:45.705 18:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.705 18:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.705 18:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:45.965 18:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.965 18:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:45.965 18:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:45.965 18:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.965 18:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.965 18:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.965 18:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:45.965 18:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:31:45.965 18:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:45.965 18:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:45.965 18:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:45.965 18:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:45.965 18:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2YwMTJiYzNmOTEwMjBjYTdhM2M2ZmY0NzliODgxYTLVHRDB: 00:31:45.965 18:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjE4MmMyMWY5MGYxOWZiYTc1ZTE4ZWVlNmQ5NzZiYWOVTPQi: 00:31:45.965 18:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:45.965 18:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:45.965 18:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2YwMTJiYzNmOTEwMjBjYTdhM2M2ZmY0NzliODgxYTLVHRDB: 00:31:45.965 18:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjE4MmMyMWY5MGYxOWZiYTc1ZTE4ZWVlNmQ5NzZiYWOVTPQi: ]] 00:31:45.965 18:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjE4MmMyMWY5MGYxOWZiYTc1ZTE4ZWVlNmQ5NzZiYWOVTPQi: 00:31:45.965 18:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:31:45.965 18:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:45.965 18:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:45.965 18:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:45.965 18:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:45.965 18:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:45.965 18:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:45.965 18:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.965 18:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.965 18:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.965 18:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:45.965 18:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:45.965 18:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:45.965 18:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:45.965 18:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:45.965 18:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:45.965 18:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:45.965 18:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:45.965 18:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:45.965 18:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:45.965 18:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:45.965 18:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:45.965 18:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.965 18:42:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.903 nvme0n1 00:31:46.903 18:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.903 18:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:46.903 18:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:46.903 18:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.903 18:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.903 18:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.903 18:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:46.903 18:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:46.903 18:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.903 18:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.903 18:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.903 18:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:46.903 18:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:31:46.903 18:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:46.903 18:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:46.903 18:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:46.903 18:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:46.903 18:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjEwNzQ5NjhmMDg2OTIzZmUwYjI3ODNmOTFiNGIyZDQ2MTBhZTdiZGU4MGNkNGY0jv9JLg==: 00:31:46.903 18:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWJlYjMxMDhmN2VmOTE5MzU1NzVjZmUxMzEyNWFkNDI5qb6Z: 00:31:46.903 18:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:46.903 18:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:46.903 18:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjEwNzQ5NjhmMDg2OTIzZmUwYjI3ODNmOTFiNGIyZDQ2MTBhZTdiZGU4MGNkNGY0jv9JLg==: 00:31:46.903 18:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWJlYjMxMDhmN2VmOTE5MzU1NzVjZmUxMzEyNWFkNDI5qb6Z: ]] 00:31:46.903 18:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWJlYjMxMDhmN2VmOTE5MzU1NzVjZmUxMzEyNWFkNDI5qb6Z: 00:31:46.903 18:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:31:46.903 18:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:46.903 18:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:46.903 18:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:46.903 18:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:46.903 18:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:46.903 18:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:46.903 18:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.903 18:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.903 18:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.903 18:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:46.903 18:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:46.903 18:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:46.903 18:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:46.903 18:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:46.903 18:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:46.903 18:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:46.903 18:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:46.903 18:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:46.903 18:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:46.903 18:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:46.903 18:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:46.903 18:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.903 18:42:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.837 nvme0n1 00:31:47.837 18:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.837 18:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:47.837 18:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:47.837 18:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.837 18:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.837 18:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.837 18:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:47.838 18:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:47.838 18:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.838 18:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.838 18:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.838 18:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:47.838 18:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:31:47.838 18:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:47.838 18:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:47.838 18:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:47.838 18:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:47.838 18:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDc3NWJhNjJkODMyMjE1YTUzNTY1MDFiZTMwYmVjYWJhYjE2NDYzYjljMmRlNGRkYmY0YzJkNzU3Mjg5MmZiMuULAeA=: 00:31:47.838 18:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:47.838 18:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:47.838 18:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:47.838 18:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDc3NWJhNjJkODMyMjE1YTUzNTY1MDFiZTMwYmVjYWJhYjE2NDYzYjljMmRlNGRkYmY0YzJkNzU3Mjg5MmZiMuULAeA=: 00:31:47.838 18:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:47.838 18:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:31:47.838 18:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:47.838 18:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:47.838 18:42:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:47.838 18:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:47.838 18:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:47.838 18:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:47.838 18:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.838 18:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.838 18:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.838 18:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:47.838 18:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:47.838 18:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:47.838 18:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:47.838 18:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:47.838 18:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:47.838 18:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:47.838 18:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:47.838 18:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:47.838 18:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:47.838 18:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:47.838 18:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:47.838 18:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.838 18:42:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.776 nvme0n1 00:31:48.776 18:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.776 18:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:48.776 18:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:48.777 18:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.777 18:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.777 18:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.777 18:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:48.777 18:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:48.777 18:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.777 18:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.777 18:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.777 18:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:48.777 18:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:48.777 18:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:31:48.777 18:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:48.777 18:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:48.777 18:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:48.777 18:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:48.777 18:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzBiMGRlNmM2YzJlMTVlYTY0NGM4MzgyNGRiMDNkZmZ8l8r4: 00:31:48.777 18:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmEyNmY0ZThmN2JjMTk0NTZlZDc4NjY3OThkOTQ4NTc3ZWRmNWMxNjUwMjkzMmIwMzA3Mjg5N2E3MjM5ODU3ZQb9abA=: 00:31:48.777 18:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:48.777 18:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:48.777 18:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzBiMGRlNmM2YzJlMTVlYTY0NGM4MzgyNGRiMDNkZmZ8l8r4: 00:31:48.777 18:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmEyNmY0ZThmN2JjMTk0NTZlZDc4NjY3OThkOTQ4NTc3ZWRmNWMxNjUwMjkzMmIwMzA3Mjg5N2E3MjM5ODU3ZQb9abA=: ]] 00:31:48.777 18:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmEyNmY0ZThmN2JjMTk0NTZlZDc4NjY3OThkOTQ4NTc3ZWRmNWMxNjUwMjkzMmIwMzA3Mjg5N2E3MjM5ODU3ZQb9abA=: 00:31:48.777 18:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:31:48.777 18:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:48.777 18:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:48.777 18:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:48.777 18:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:48.777 18:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:48.777 18:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:48.777 18:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.777 18:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.777 18:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.777 18:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:48.777 18:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:48.777 18:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:48.777 18:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:48.777 18:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:48.777 18:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:48.777 18:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:48.777 18:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:48.777 18:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:48.777 18:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:48.777 18:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:48.777 18:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:48.777 18:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.777 18:42:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.686 nvme0n1 00:31:50.686 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.686 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:50.686 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:50.686 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.686 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.686 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.686 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:50.686 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:50.686 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.686 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.686 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.686 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:50.686 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:31:50.686 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:50.686 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:50.686 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:50.686 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:50.686 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTAxZDViYWYzZTQyNTNjZmU4ZjcwMzgxMmYzYzkyNTljZDQ4YTAwZGMwZGUyZjA5dM0fcQ==: 00:31:50.686 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmRjNmQ0Yzg3Mzc0NGIyZTMxZjE4NjkzMGFhNDg3YTgwMDM3NTExZmZjN2RjMDQ49rPzwQ==: 00:31:50.686 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:50.686 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:50.686 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NTAxZDViYWYzZTQyNTNjZmU4ZjcwMzgxMmYzYzkyNTljZDQ4YTAwZGMwZGUyZjA5dM0fcQ==: 00:31:50.686 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmRjNmQ0Yzg3Mzc0NGIyZTMxZjE4NjkzMGFhNDg3YTgwMDM3NTExZmZjN2RjMDQ49rPzwQ==: ]] 00:31:50.686 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmRjNmQ0Yzg3Mzc0NGIyZTMxZjE4NjkzMGFhNDg3YTgwMDM3NTExZmZjN2RjMDQ49rPzwQ==: 00:31:50.686 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:31:50.686 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:50.686 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:50.686 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:50.686 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:50.686 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:50.686 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:50.686 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.686 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.686 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.686 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:50.686 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:50.686 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:50.686 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:50.686 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:50.686 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:50.686 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:50.686 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:50.686 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:50.686 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:50.686 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:50.686 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:50.686 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.686 18:42:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.062 nvme0n1 00:31:52.062 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.062 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:52.062 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:52.062 18:42:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.062 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.062 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.062 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:52.062 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:52.062 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.062 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.062 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.062 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:52.062 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:31:52.062 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:52.062 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:52.062 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:52.062 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:52.062 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2YwMTJiYzNmOTEwMjBjYTdhM2M2ZmY0NzliODgxYTLVHRDB: 00:31:52.062 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjE4MmMyMWY5MGYxOWZiYTc1ZTE4ZWVlNmQ5NzZiYWOVTPQi: 00:31:52.062 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:52.062 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:52.062 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2YwMTJiYzNmOTEwMjBjYTdhM2M2ZmY0NzliODgxYTLVHRDB: 00:31:52.062 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjE4MmMyMWY5MGYxOWZiYTc1ZTE4ZWVlNmQ5NzZiYWOVTPQi: ]] 00:31:52.062 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjE4MmMyMWY5MGYxOWZiYTc1ZTE4ZWVlNmQ5NzZiYWOVTPQi: 00:31:52.062 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:31:52.062 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:52.062 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:52.062 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:52.062 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:52.062 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:52.062 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:52.062 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.062 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.062 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.062 18:42:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:52.062 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:52.062 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:52.062 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:52.062 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:52.062 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:52.062 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:52.062 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:52.062 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:52.062 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:52.062 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:52.062 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:52.062 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.062 18:42:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.972 nvme0n1 00:31:53.972 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.972 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:53.972 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:53.972 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.972 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.972 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.972 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:53.972 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:53.972 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.972 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.972 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.972 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:53.972 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:31:53.972 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:53.972 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:53.972 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:53.972 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:53.972 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZjEwNzQ5NjhmMDg2OTIzZmUwYjI3ODNmOTFiNGIyZDQ2MTBhZTdiZGU4MGNkNGY0jv9JLg==: 00:31:53.972 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWJlYjMxMDhmN2VmOTE5MzU1NzVjZmUxMzEyNWFkNDI5qb6Z: 00:31:53.972 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:53.972 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:53.972 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjEwNzQ5NjhmMDg2OTIzZmUwYjI3ODNmOTFiNGIyZDQ2MTBhZTdiZGU4MGNkNGY0jv9JLg==: 00:31:53.972 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWJlYjMxMDhmN2VmOTE5MzU1NzVjZmUxMzEyNWFkNDI5qb6Z: ]] 00:31:53.972 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWJlYjMxMDhmN2VmOTE5MzU1NzVjZmUxMzEyNWFkNDI5qb6Z: 00:31:53.972 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:31:53.972 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:53.972 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:53.972 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:53.972 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:53.972 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:53.972 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:53.972 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.972 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.972 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.972 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:53.972 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:53.972 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:53.972 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:53.972 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:53.972 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:53.972 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:53.972 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:53.972 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:53.972 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:53.972 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:53.972 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:53.972 18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.972 
18:42:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.876 nvme0n1 00:31:55.876 18:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.876 18:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:55.876 18:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:55.876 18:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.876 18:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.876 18:42:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.876 18:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:55.876 18:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:55.876 18:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.876 18:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.876 18:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.876 18:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:55.876 18:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:31:55.876 18:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:55.876 18:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:55.876 18:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:55.876 18:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:55.876 18:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDc3NWJhNjJkODMyMjE1YTUzNTY1MDFiZTMwYmVjYWJhYjE2NDYzYjljMmRlNGRkYmY0YzJkNzU3Mjg5MmZiMuULAeA=: 00:31:55.876 18:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:55.876 18:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:55.876 18:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:55.876 18:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDc3NWJhNjJkODMyMjE1YTUzNTY1MDFiZTMwYmVjYWJhYjE2NDYzYjljMmRlNGRkYmY0YzJkNzU3Mjg5MmZiMuULAeA=: 00:31:55.876 18:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:55.876 18:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:31:55.876 18:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:55.876 18:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:55.876 18:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:55.876 18:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:55.876 18:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:55.876 18:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:55.876 18:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.876 18:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.876 18:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.876 18:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:55.876 18:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:55.876 18:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:55.876 18:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:55.876 18:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:55.876 18:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:55.876 18:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:55.876 18:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:55.876 18:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:55.876 18:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:55.876 18:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:55.876 18:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:55.876 18:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.876 18:42:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.813 nvme0n1 00:31:56.813 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.813 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:56.813 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:56.813 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.813 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.813 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.813 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:56.813 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:56.813 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.813 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.813 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.813 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:56.813 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:56.813 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:56.813 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:56.813 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:31:56.813 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTAxZDViYWYzZTQyNTNjZmU4ZjcwMzgxMmYzYzkyNTljZDQ4YTAwZGMwZGUyZjA5dM0fcQ==: 00:31:56.813 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmRjNmQ0Yzg3Mzc0NGIyZTMxZjE4NjkzMGFhNDg3YTgwMDM3NTExZmZjN2RjMDQ49rPzwQ==: 00:31:56.813 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:56.813 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:56.813 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTAxZDViYWYzZTQyNTNjZmU4ZjcwMzgxMmYzYzkyNTljZDQ4YTAwZGMwZGUyZjA5dM0fcQ==: 00:31:56.813 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmRjNmQ0Yzg3Mzc0NGIyZTMxZjE4NjkzMGFhNDg3YTgwMDM3NTExZmZjN2RjMDQ49rPzwQ==: ]] 00:31:56.813 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmRjNmQ0Yzg3Mzc0NGIyZTMxZjE4NjkzMGFhNDg3YTgwMDM3NTExZmZjN2RjMDQ49rPzwQ==: 00:31:56.813 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:56.813 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.813 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.813 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.813 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:31:56.813 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:56.813 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:56.813 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:56.813 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:56.813 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:56.813 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:56.813 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:56.813 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:56.813 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:56.813 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:56.813 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:31:56.813 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:31:56.813 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:31:56.813 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:56.813 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:56.813 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:56.813 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:56.813 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:31:56.813 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.813 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.813 request: 00:31:56.813 { 00:31:56.813 "name": "nvme0", 00:31:56.813 "trtype": "tcp", 00:31:56.813 "traddr": "10.0.0.1", 00:31:56.813 "adrfam": "ipv4", 00:31:56.813 "trsvcid": "4420", 00:31:56.813 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:31:56.813 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:31:56.813 "prchk_reftag": false, 00:31:56.813 "prchk_guard": false, 00:31:56.813 "hdgst": false, 00:31:56.813 "ddgst": false, 00:31:56.813 "allow_unrecognized_csi": false, 00:31:56.813 "method": "bdev_nvme_attach_controller", 00:31:56.813 "req_id": 1 00:31:56.813 } 00:31:56.813 Got JSON-RPC error response 00:31:56.813 response: 00:31:56.813 { 00:31:56.813 "code": -5, 00:31:56.813 "message": "Input/output error" 00:31:56.813 } 00:31:56.813 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:56.813 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:31:56.813 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:56.813 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:56.813 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:56.813 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:31:56.813 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:31:56.813 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.813 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.813 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.813 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:31:56.813 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:31:56.813 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:56.813 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:56.813 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:56.813 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:56.813 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:56.813 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:56.813 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:56.813 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:56.813 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 
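The rpc_cmd calls in this trace wrap SPDK's scripts/rpc.py; the expected-failure attach just logged (JSON-RPC code -5, Input/output error) differs from the earlier positive-path attaches only in that no DH-HMAC-CHAP key is passed. A rough, hedged sketch of the two invocations, reusing the address and NQNs from this run (key1/ckey1 are keyring key names registered earlier in the test, not shown in this excerpt):

  # positive path: authenticated attach, mirroring the loop traced above
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # negative path: with the target requiring DH-HMAC-CHAP, omitting the key is
  # expected to fail with code -5 (Input/output error) and leave no controller
  # behind, which the trace verifies via bdev_nvme_get_controllers | jq length.
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0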
00:31:56.813 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:57.073 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:57.073 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:31:57.073 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:57.073 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:57.073 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:57.073 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:57.073 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:57.073 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:57.073 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.073 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.073 request: 00:31:57.073 { 00:31:57.073 "name": "nvme0", 00:31:57.073 "trtype": "tcp", 00:31:57.073 "traddr": "10.0.0.1", 00:31:57.073 "adrfam": "ipv4", 00:31:57.073 "trsvcid": "4420", 00:31:57.073 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:31:57.073 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:31:57.073 "prchk_reftag": false, 00:31:57.073 "prchk_guard": false, 00:31:57.073 "hdgst": false, 00:31:57.073 "ddgst": false, 00:31:57.073 "dhchap_key": "key2", 00:31:57.073 "allow_unrecognized_csi": false, 00:31:57.073 "method": "bdev_nvme_attach_controller", 00:31:57.073 "req_id": 1 00:31:57.073 } 00:31:57.073 Got JSON-RPC error response 00:31:57.073 response: 00:31:57.073 { 00:31:57.073 "code": -5, 00:31:57.073 "message": "Input/output error" 00:31:57.073 } 00:31:57.073 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:57.073 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:31:57.073 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:57.073 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:57.073 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:57.073 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:31:57.073 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:31:57.073 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.073 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.073 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.073 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
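The nvmet_auth_set_key helper traced repeatedly in this section echoes a digest ('hmac(sha256)'), a DH group (ffdhe2048), a key, and, when one is configured, a controller key for the selected keyid; the redirections themselves are not captured by xtrace. A minimal sketch of what such a helper presumably writes, assuming the stock kernel nvmet configfs layout and attribute names (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key), none of which appear verbatim in this log:

  # assumption: the host entry was created earlier under /sys/kernel/config/nvmet/hosts
  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)' > "$host/dhchap_hash"     # digest selected for this pass
  echo 'ffdhe2048'    > "$host/dhchap_dhgroup"  # DH group selected for this pass
  echo 'DHHC-1:01:<base64 secret>:' > "$host/dhchap_key"       # host key (placeholder)
  echo 'DHHC-1:01:<base64 secret>:' > "$host/dhchap_ctrl_key"  # controller key, only when a ckey is set

The cleanup near the end of this log (removing the allowed_hosts link, rmdir of the host and subsystem entries, modprobe -r nvmet_tcp nvmet) tears this configfs state back down.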
00:31:57.073 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:31:57.073 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:57.073 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:57.073 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:57.073 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:57.073 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:57.073 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:57.073 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:57.073 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:57.073 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:57.073 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:57.073 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:57.073 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:31:57.073 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:57.073 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:57.073 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:57.073 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:57.073 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:57.073 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:57.073 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.073 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.333 request: 00:31:57.333 { 00:31:57.333 "name": "nvme0", 00:31:57.333 "trtype": "tcp", 00:31:57.333 "traddr": "10.0.0.1", 00:31:57.333 "adrfam": "ipv4", 00:31:57.333 "trsvcid": "4420", 00:31:57.333 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:31:57.333 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:31:57.333 "prchk_reftag": false, 00:31:57.333 "prchk_guard": false, 00:31:57.333 "hdgst": false, 00:31:57.333 "ddgst": false, 00:31:57.333 "dhchap_key": "key1", 00:31:57.333 "dhchap_ctrlr_key": "ckey2", 00:31:57.333 "allow_unrecognized_csi": false, 00:31:57.333 "method": "bdev_nvme_attach_controller", 00:31:57.333 "req_id": 1 00:31:57.333 } 00:31:57.333 Got JSON-RPC error response 00:31:57.333 response: 00:31:57.333 { 00:31:57.333 "code": -5, 00:31:57.333 "message": "Input/output 
error" 00:31:57.333 } 00:31:57.333 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:57.333 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:31:57.333 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:57.333 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:57.333 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:57.333 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:31:57.333 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:57.333 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:57.333 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:57.333 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:57.333 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:57.333 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:57.333 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:57.333 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:57.333 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:57.333 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:57.333 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:31:57.333 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.333 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.333 nvme0n1 00:31:57.333 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.333 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:31:57.333 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:57.333 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:57.333 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:57.333 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:57.333 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2YwMTJiYzNmOTEwMjBjYTdhM2M2ZmY0NzliODgxYTLVHRDB: 00:31:57.333 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjE4MmMyMWY5MGYxOWZiYTc1ZTE4ZWVlNmQ5NzZiYWOVTPQi: 00:31:57.333 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:57.333 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:57.333 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2YwMTJiYzNmOTEwMjBjYTdhM2M2ZmY0NzliODgxYTLVHRDB: 00:31:57.333 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjE4MmMyMWY5MGYxOWZiYTc1ZTE4ZWVlNmQ5NzZiYWOVTPQi: ]] 00:31:57.333 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjE4MmMyMWY5MGYxOWZiYTc1ZTE4ZWVlNmQ5NzZiYWOVTPQi: 00:31:57.333 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:57.333 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.333 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.593 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.593 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:31:57.593 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.593 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:31:57.593 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.593 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.593 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:57.593 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:57.593 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:31:57.593 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:57.593 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:57.593 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:57.593 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:57.593 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:57.593 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:57.593 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.593 18:42:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.593 request: 00:31:57.593 { 00:31:57.593 "name": "nvme0", 00:31:57.593 "dhchap_key": "key1", 00:31:57.593 "dhchap_ctrlr_key": "ckey2", 00:31:57.593 "method": "bdev_nvme_set_keys", 00:31:57.593 "req_id": 1 00:31:57.593 } 00:31:57.593 Got JSON-RPC error response 00:31:57.593 response: 00:31:57.593 { 00:31:57.593 "code": -13, 00:31:57.593 "message": "Permission denied" 00:31:57.593 } 00:31:57.593 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:57.593 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:31:57.593 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:57.593 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:57.593 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:31:57.593 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:31:57.593 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.593 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.593 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:31:57.593 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.852 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:31:57.852 18:42:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:31:58.787 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:31:58.787 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:31:58.787 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.787 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.787 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.787 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:31:58.787 18:42:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:31:59.724 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:31:59.724 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:31:59.724 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.724 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.724 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.983 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:31:59.983 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:59.983 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:59.983 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:59.983 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:59.983 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:59.983 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTAxZDViYWYzZTQyNTNjZmU4ZjcwMzgxMmYzYzkyNTljZDQ4YTAwZGMwZGUyZjA5dM0fcQ==: 00:31:59.983 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmRjNmQ0Yzg3Mzc0NGIyZTMxZjE4NjkzMGFhNDg3YTgwMDM3NTExZmZjN2RjMDQ49rPzwQ==: 00:31:59.983 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:59.983 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:59.983 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTAxZDViYWYzZTQyNTNjZmU4ZjcwMzgxMmYzYzkyNTljZDQ4YTAwZGMwZGUyZjA5dM0fcQ==: 00:31:59.983 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmRjNmQ0Yzg3Mzc0NGIyZTMxZjE4NjkzMGFhNDg3YTgwMDM3NTExZmZjN2RjMDQ49rPzwQ==: ]] 00:31:59.983 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:NmRjNmQ0Yzg3Mzc0NGIyZTMxZjE4NjkzMGFhNDg3YTgwMDM3NTExZmZjN2RjMDQ49rPzwQ==: 00:31:59.983 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:31:59.983 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:31:59.983 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:31:59.983 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:31:59.983 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:59.983 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:59.983 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:31:59.983 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:59.983 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:31:59.983 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:31:59.983 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:31:59.983 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:31:59.983 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.983 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.983 nvme0n1 00:31:59.983 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.983 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:31:59.983 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:59.983 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:59.983 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:59.983 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:59.983 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2YwMTJiYzNmOTEwMjBjYTdhM2M2ZmY0NzliODgxYTLVHRDB: 00:31:59.983 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjE4MmMyMWY5MGYxOWZiYTc1ZTE4ZWVlNmQ5NzZiYWOVTPQi: 00:31:59.983 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:59.983 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:59.983 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2YwMTJiYzNmOTEwMjBjYTdhM2M2ZmY0NzliODgxYTLVHRDB: 00:31:59.983 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjE4MmMyMWY5MGYxOWZiYTc1ZTE4ZWVlNmQ5NzZiYWOVTPQi: ]] 00:31:59.983 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjE4MmMyMWY5MGYxOWZiYTc1ZTE4ZWVlNmQ5NzZiYWOVTPQi: 00:31:59.983 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:31:59.983 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@650 -- # local es=0 00:31:59.983 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:31:59.983 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:59.983 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:59.983 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:59.983 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:59.983 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:31:59.983 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.983 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.242 request: 00:32:00.242 { 00:32:00.242 "name": "nvme0", 00:32:00.242 "dhchap_key": "key2", 00:32:00.242 "dhchap_ctrlr_key": "ckey1", 00:32:00.242 "method": "bdev_nvme_set_keys", 00:32:00.242 "req_id": 1 00:32:00.242 } 00:32:00.242 Got JSON-RPC error response 00:32:00.242 response: 00:32:00.242 { 00:32:00.242 "code": -13, 00:32:00.242 "message": "Permission denied" 00:32:00.242 } 00:32:00.242 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:00.242 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:32:00.242 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:00.242 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:00.242 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:00.242 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:32:00.242 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:32:00.242 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.242 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.242 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.242 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:32:00.242 18:42:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:32:01.178 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:32:01.178 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:32:01.178 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.178 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.178 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.178 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:32:01.178 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:32:01.178 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:32:01.178 18:42:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:32:01.178 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:01.178 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:32:01.178 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:01.178 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:32:01.178 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:01.179 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:01.179 rmmod nvme_tcp 00:32:01.179 rmmod nvme_fabrics 00:32:01.179 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:01.179 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:32:01.179 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:32:01.179 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@515 -- # '[' -n 1317291 ']' 00:32:01.179 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # killprocess 1317291 00:32:01.179 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 1317291 ']' 00:32:01.179 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 1317291 00:32:01.436 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:32:01.436 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:01.436 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1317291 00:32:01.436 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:01.436 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:01.436 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1317291' 00:32:01.436 killing process with pid 1317291 00:32:01.436 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 1317291 00:32:01.436 18:42:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 1317291 00:32:02.005 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:02.005 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:02.005 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:02.005 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:32:02.005 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-save 00:32:02.005 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:02.005 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-restore 00:32:02.005 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:02.005 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:02.005 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:02.005 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:32:02.005 18:42:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:03.912 18:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:03.912 18:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:32:03.912 18:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:03.912 18:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:32:03.912 18:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:32:03.912 18:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # echo 0 00:32:03.912 18:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:03.912 18:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:03.912 18:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:03.912 18:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:03.912 18:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:32:03.912 18:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:32:03.912 18:42:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:05.816 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:05.816 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:05.816 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:05.816 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:05.816 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:05.816 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:05.816 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:05.816 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:05.816 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:05.817 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:05.817 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:05.817 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:05.817 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:05.817 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:05.817 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:05.817 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:06.386 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:32:06.645 18:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.3J0 /tmp/spdk.key-null.Cr4 /tmp/spdk.key-sha256.NlM /tmp/spdk.key-sha384.TL5 /tmp/spdk.key-sha512.PlX /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:32:06.645 18:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:08.585 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:32:08.585 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:32:08.585 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 
00:32:08.585 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:32:08.585 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:32:08.585 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:32:08.585 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:32:08.585 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:32:08.585 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:32:08.585 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:32:08.585 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:32:08.585 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:32:08.585 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:32:08.585 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:32:08.585 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:32:08.585 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:32:08.585 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:32:08.585 00:32:08.585 real 1m18.263s 00:32:08.585 user 1m16.567s 00:32:08.585 sys 0m9.010s 00:32:08.585 18:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:08.585 18:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.585 ************************************ 00:32:08.585 END TEST nvmf_auth_host 00:32:08.585 ************************************ 00:32:08.585 18:42:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:32:08.585 18:42:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:32:08.585 18:42:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:08.585 18:42:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:08.585 18:42:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.585 ************************************ 00:32:08.585 START TEST nvmf_digest 00:32:08.585 ************************************ 00:32:08.585 18:42:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:32:08.585 * Looking for test storage... 
00:32:08.585 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:08.585 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:08.585 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lcov --version 00:32:08.585 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:08.844 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:08.844 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:08.844 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:08.844 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:08.844 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:32:08.844 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:32:08.844 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:32:08.844 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:32:08.844 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:32:08.844 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:32:08.844 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:32:08.844 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:08.844 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:32:08.844 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:32:08.844 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:08.844 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:08.844 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:32:08.844 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:32:08.844 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:08.844 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:32:08.844 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:32:08.844 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:32:08.844 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:32:08.844 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:08.844 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:32:08.844 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:32:08.844 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:08.844 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:08.844 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:32:08.844 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:08.844 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:08.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:08.844 --rc genhtml_branch_coverage=1 00:32:08.844 --rc genhtml_function_coverage=1 00:32:08.844 --rc genhtml_legend=1 00:32:08.844 --rc geninfo_all_blocks=1 00:32:08.844 --rc geninfo_unexecuted_blocks=1 00:32:08.844 00:32:08.844 ' 00:32:08.844 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:08.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:08.844 --rc genhtml_branch_coverage=1 00:32:08.844 --rc genhtml_function_coverage=1 00:32:08.844 --rc genhtml_legend=1 00:32:08.844 --rc geninfo_all_blocks=1 00:32:08.844 --rc geninfo_unexecuted_blocks=1 00:32:08.844 00:32:08.844 ' 00:32:08.844 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:08.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:08.844 --rc genhtml_branch_coverage=1 00:32:08.844 --rc genhtml_function_coverage=1 00:32:08.844 --rc genhtml_legend=1 00:32:08.844 --rc geninfo_all_blocks=1 00:32:08.844 --rc geninfo_unexecuted_blocks=1 00:32:08.844 00:32:08.844 ' 00:32:08.844 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:08.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:08.844 --rc genhtml_branch_coverage=1 00:32:08.844 --rc genhtml_function_coverage=1 00:32:08.844 --rc genhtml_legend=1 00:32:08.844 --rc geninfo_all_blocks=1 00:32:08.844 --rc geninfo_unexecuted_blocks=1 00:32:08.844 00:32:08.844 ' 00:32:08.844 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:08.844 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:32:08.844 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:08.844 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:08.844 
18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:08.844 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:08.844 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:08.844 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:08.844 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:08.844 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:08.844 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:08.844 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:08.844 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:32:08.844 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:32:08.844 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:08.844 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:08.844 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:08.844 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:08.844 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:08.844 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:32:08.844 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:08.844 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:08.844 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:08.845 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:08.845 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:08.845 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:08.845 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:32:08.845 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:08.845 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:32:08.845 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:08.845 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:08.845 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:08.845 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:08.845 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:08.845 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:08.845 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:08.845 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:08.845 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:08.845 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:08.845 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:32:08.845 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:32:08.845 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:32:08.845 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:32:08.845 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:32:08.845 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:08.845 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:08.845 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:08.845 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:08.845 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:08.845 18:42:37 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:08.845 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:08.845 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:08.845 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:08.845 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:08.845 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:32:08.845 18:42:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:12.132 
18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:32:12.132 Found 0000:84:00.0 (0x8086 - 0x159b) 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:32:12.132 Found 0000:84:00.1 (0x8086 - 0x159b) 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:32:12.132 Found net devices under 0000:84:00.0: cvl_0_0 
00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:32:12.132 Found net devices under 0000:84:00.1: cvl_0_1 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # is_hw=yes 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:12.132 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:12.132 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:12.132 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.349 ms 00:32:12.132 00:32:12.133 --- 10.0.0.2 ping statistics --- 00:32:12.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:12.133 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:32:12.133 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:12.133 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:12.133 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:32:12.133 00:32:12.133 --- 10.0.0.1 ping statistics --- 00:32:12.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:12.133 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:32:12.133 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:12.133 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # return 0 00:32:12.133 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:12.133 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:12.133 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:12.133 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:12.133 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:12.133 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:12.133 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:12.133 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:32:12.133 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:32:12.133 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:32:12.133 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:12.133 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:12.133 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:12.133 ************************************ 00:32:12.133 START TEST nvmf_digest_clean 00:32:12.133 ************************************ 00:32:12.133 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:32:12.133 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:32:12.133 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:32:12.133 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:32:12.133 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:32:12.133 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:32:12.133 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:12.133 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:12.133 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:12.133 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # nvmfpid=1330360 00:32:12.133 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:32:12.133 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # waitforlisten 1330360 00:32:12.133 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1330360 ']' 00:32:12.133 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:12.133 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:12.133 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:12.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:12.133 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:12.133 18:42:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:12.133 [2024-10-08 18:42:40.448446] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:32:12.133 [2024-10-08 18:42:40.448622] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:12.133 [2024-10-08 18:42:40.612904] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:12.393 [2024-10-08 18:42:40.838284] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:12.393 [2024-10-08 18:42:40.838389] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:12.393 [2024-10-08 18:42:40.838427] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:12.393 [2024-10-08 18:42:40.838461] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:12.393 [2024-10-08 18:42:40.838489] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:12.393 [2024-10-08 18:42:40.839865] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:32:12.651 18:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:12.651 18:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:32:12.651 18:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:12.651 18:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:12.651 18:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:12.651 18:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:12.651 18:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:32:12.651 18:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:32:12.651 18:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:32:12.651 18:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.651 18:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:12.910 null0 00:32:12.910 [2024-10-08 18:42:41.254630] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:12.910 [2024-10-08 18:42:41.278998] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:12.910 18:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.910 18:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:32:12.910 18:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:12.910 18:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:12.910 18:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:32:12.910 18:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:32:12.910 18:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:32:12.910 18:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:12.910 18:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1330509 00:32:12.910 18:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:32:12.910 18:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1330509 /var/tmp/bperf.sock 00:32:12.910 18:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1330509 ']' 00:32:12.910 18:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:12.910 18:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:32:12.910 18:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:12.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:12.910 18:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:12.910 18:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:12.910 [2024-10-08 18:42:41.335264] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:32:12.911 [2024-10-08 18:42:41.335341] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1330509 ] 00:32:12.911 [2024-10-08 18:42:41.439707] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:13.170 [2024-10-08 18:42:41.663303] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:32:13.739 18:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:13.739 18:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:32:13.739 18:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:13.739 18:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:13.739 18:42:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:13.999 18:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:13.999 18:42:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:14.938 nvme0n1 00:32:14.938 18:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:14.938 18:42:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:15.198 Running I/O for 2 seconds... 
00:32:17.072 7752.00 IOPS, 30.28 MiB/s [2024-10-08T16:42:45.609Z] 7495.50 IOPS, 29.28 MiB/s 00:32:17.072 Latency(us) 00:32:17.072 [2024-10-08T16:42:45.609Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:17.072 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:32:17.072 nvme0n1 : 2.02 7500.43 29.30 0.00 0.00 17029.47 6359.42 35535.08 00:32:17.072 [2024-10-08T16:42:45.609Z] =================================================================================================================== 00:32:17.072 [2024-10-08T16:42:45.609Z] Total : 7500.43 29.30 0.00 0.00 17029.47 6359.42 35535.08 00:32:17.072 { 00:32:17.072 "results": [ 00:32:17.072 { 00:32:17.072 "job": "nvme0n1", 00:32:17.072 "core_mask": "0x2", 00:32:17.072 "workload": "randread", 00:32:17.072 "status": "finished", 00:32:17.072 "queue_depth": 128, 00:32:17.072 "io_size": 4096, 00:32:17.072 "runtime": 2.015751, 00:32:17.072 "iops": 7500.430360694351, 00:32:17.072 "mibps": 29.29855609646231, 00:32:17.072 "io_failed": 0, 00:32:17.072 "io_timeout": 0, 00:32:17.072 "avg_latency_us": 17029.468906477745, 00:32:17.072 "min_latency_us": 6359.419259259259, 00:32:17.072 "max_latency_us": 35535.07555555556 00:32:17.072 } 00:32:17.072 ], 00:32:17.072 "core_count": 1 00:32:17.072 } 00:32:17.072 18:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:17.072 18:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:17.072 18:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:17.072 18:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:17.072 | select(.opcode=="crc32c") 00:32:17.072 | "\(.module_name) \(.executed)"' 00:32:17.072 18:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:17.639 18:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:17.639 18:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:17.639 18:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:17.639 18:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:17.639 18:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1330509 00:32:17.639 18:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1330509 ']' 00:32:17.639 18:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1330509 00:32:17.639 18:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:32:17.639 18:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:17.639 18:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1330509 00:32:17.639 18:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:17.639 18:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' 
reactor_1 = sudo ']' 00:32:17.639 18:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1330509' 00:32:17.639 killing process with pid 1330509 00:32:17.639 18:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1330509 00:32:17.639 Received shutdown signal, test time was about 2.000000 seconds 00:32:17.639 00:32:17.639 Latency(us) 00:32:17.639 [2024-10-08T16:42:46.176Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:17.639 [2024-10-08T16:42:46.176Z] =================================================================================================================== 00:32:17.639 [2024-10-08T16:42:46.176Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:17.639 18:42:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1330509 00:32:17.898 18:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:32:17.898 18:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:17.898 18:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:17.898 18:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:32:17.898 18:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:32:17.898 18:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:32:17.898 18:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:17.898 18:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1331043 00:32:17.898 18:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1331043 /var/tmp/bperf.sock 00:32:17.898 18:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:32:17.898 18:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1331043 ']' 00:32:17.898 18:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:17.898 18:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:17.898 18:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:17.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:17.898 18:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:17.898 18:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:17.898 [2024-10-08 18:42:46.405065] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
00:32:17.898 [2024-10-08 18:42:46.405164] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1331043 ] 00:32:17.898 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:17.898 Zero copy mechanism will not be used. 00:32:18.157 [2024-10-08 18:42:46.500582] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:18.415 [2024-10-08 18:42:46.702771] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:32:18.674 18:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:18.674 18:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:32:18.674 18:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:18.674 18:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:18.674 18:42:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:18.933 18:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:18.933 18:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:19.502 nvme0n1 00:32:19.502 18:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:19.502 18:42:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:19.502 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:19.502 Zero copy mechanism will not be used. 00:32:19.502 Running I/O for 2 seconds... 
00:32:21.822 2681.00 IOPS, 335.12 MiB/s [2024-10-08T16:42:50.359Z] 2678.00 IOPS, 334.75 MiB/s 00:32:21.822 Latency(us) 00:32:21.822 [2024-10-08T16:42:50.359Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:21.822 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:32:21.822 nvme0n1 : 2.01 2678.96 334.87 0.00 0.00 5962.45 1881.13 8398.32 00:32:21.822 [2024-10-08T16:42:50.359Z] =================================================================================================================== 00:32:21.822 [2024-10-08T16:42:50.359Z] Total : 2678.96 334.87 0.00 0.00 5962.45 1881.13 8398.32 00:32:21.822 { 00:32:21.823 "results": [ 00:32:21.823 { 00:32:21.823 "job": "nvme0n1", 00:32:21.823 "core_mask": "0x2", 00:32:21.823 "workload": "randread", 00:32:21.823 "status": "finished", 00:32:21.823 "queue_depth": 16, 00:32:21.823 "io_size": 131072, 00:32:21.823 "runtime": 2.005626, 00:32:21.823 "iops": 2678.96407405967, 00:32:21.823 "mibps": 334.87050925745876, 00:32:21.823 "io_failed": 0, 00:32:21.823 "io_timeout": 0, 00:32:21.823 "avg_latency_us": 5962.454175955222, 00:32:21.823 "min_latency_us": 1881.125925925926, 00:32:21.823 "max_latency_us": 8398.317037037037 00:32:21.823 } 00:32:21.823 ], 00:32:21.823 "core_count": 1 00:32:21.823 } 00:32:21.823 18:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:21.823 18:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:21.823 18:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:21.823 18:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:21.823 18:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:21.823 | select(.opcode=="crc32c") 00:32:21.823 | "\(.module_name) \(.executed)"' 00:32:22.081 18:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:22.081 18:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:22.081 18:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:22.081 18:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:22.081 18:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1331043 00:32:22.081 18:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1331043 ']' 00:32:22.081 18:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1331043 00:32:22.081 18:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:32:22.081 18:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:22.081 18:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1331043 00:32:22.081 18:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:22.081 18:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' 
reactor_1 = sudo ']' 00:32:22.081 18:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1331043' 00:32:22.081 killing process with pid 1331043 00:32:22.081 18:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1331043 00:32:22.081 Received shutdown signal, test time was about 2.000000 seconds 00:32:22.081 00:32:22.081 Latency(us) 00:32:22.081 [2024-10-08T16:42:50.618Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:22.081 [2024-10-08T16:42:50.618Z] =================================================================================================================== 00:32:22.081 [2024-10-08T16:42:50.618Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:22.081 18:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1331043 00:32:22.339 18:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:32:22.339 18:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:22.339 18:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:22.339 18:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:32:22.339 18:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:32:22.339 18:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:32:22.339 18:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:22.339 18:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1331571 00:32:22.339 18:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1331571 /var/tmp/bperf.sock 00:32:22.339 18:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:32:22.339 18:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1331571 ']' 00:32:22.339 18:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:22.339 18:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:22.339 18:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:22.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:22.339 18:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:22.339 18:42:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:22.600 [2024-10-08 18:42:50.907205] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
00:32:22.600 [2024-10-08 18:42:50.907312] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1331571 ] 00:32:22.600 [2024-10-08 18:42:51.014606] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:22.860 [2024-10-08 18:42:51.234810] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:32:23.817 18:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:23.817 18:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:32:23.817 18:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:23.817 18:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:23.817 18:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:24.076 18:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:24.076 18:42:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:24.645 nvme0n1 00:32:24.645 18:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:24.645 18:42:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:24.905 Running I/O for 2 seconds... 
00:32:26.780 8579.00 IOPS, 33.51 MiB/s [2024-10-08T16:42:55.317Z] 8589.50 IOPS, 33.55 MiB/s 00:32:26.780 Latency(us) 00:32:26.780 [2024-10-08T16:42:55.317Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:26.780 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:26.780 nvme0n1 : 2.01 8599.48 33.59 0.00 0.00 14859.22 6310.87 22427.88 00:32:26.780 [2024-10-08T16:42:55.317Z] =================================================================================================================== 00:32:26.780 [2024-10-08T16:42:55.317Z] Total : 8599.48 33.59 0.00 0.00 14859.22 6310.87 22427.88 00:32:26.780 { 00:32:26.780 "results": [ 00:32:26.780 { 00:32:26.781 "job": "nvme0n1", 00:32:26.781 "core_mask": "0x2", 00:32:26.781 "workload": "randwrite", 00:32:26.781 "status": "finished", 00:32:26.781 "queue_depth": 128, 00:32:26.781 "io_size": 4096, 00:32:26.781 "runtime": 2.012563, 00:32:26.781 "iops": 8599.482351608372, 00:32:26.781 "mibps": 33.591727935970205, 00:32:26.781 "io_failed": 0, 00:32:26.781 "io_timeout": 0, 00:32:26.781 "avg_latency_us": 14859.216405778863, 00:32:26.781 "min_latency_us": 6310.874074074074, 00:32:26.781 "max_latency_us": 22427.875555555554 00:32:26.781 } 00:32:26.781 ], 00:32:26.781 "core_count": 1 00:32:26.781 } 00:32:26.781 18:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:26.781 18:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:26.781 18:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:26.781 18:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:26.781 | select(.opcode=="crc32c") 00:32:26.781 | "\(.module_name) \(.executed)"' 00:32:26.781 18:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:27.349 18:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:27.349 18:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:27.349 18:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:27.349 18:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:27.349 18:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1331571 00:32:27.349 18:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1331571 ']' 00:32:27.349 18:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1331571 00:32:27.349 18:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:32:27.349 18:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:27.349 18:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1331571 00:32:27.349 18:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:27.349 18:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' 
reactor_1 = sudo ']' 00:32:27.349 18:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1331571' 00:32:27.349 killing process with pid 1331571 00:32:27.349 18:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1331571 00:32:27.349 Received shutdown signal, test time was about 2.000000 seconds 00:32:27.349 00:32:27.349 Latency(us) 00:32:27.349 [2024-10-08T16:42:55.886Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:27.349 [2024-10-08T16:42:55.886Z] =================================================================================================================== 00:32:27.349 [2024-10-08T16:42:55.886Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:27.349 18:42:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1331571 00:32:27.608 18:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:32:27.608 18:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:27.608 18:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:27.608 18:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:32:27.608 18:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:32:27.608 18:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:32:27.608 18:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:27.608 18:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1332112 00:32:27.608 18:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1332112 /var/tmp/bperf.sock 00:32:27.608 18:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:32:27.608 18:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1332112 ']' 00:32:27.608 18:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:27.608 18:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:27.609 18:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:27.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:27.609 18:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:27.609 18:42:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:27.609 [2024-10-08 18:42:56.132018] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
00:32:27.609 [2024-10-08 18:42:56.132120] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1332112 ] 00:32:27.609 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:27.609 Zero copy mechanism will not be used. 00:32:27.868 [2024-10-08 18:42:56.237777] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:28.127 [2024-10-08 18:42:56.449554] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:32:29.065 18:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:29.066 18:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:32:29.066 18:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:29.066 18:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:29.066 18:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:29.324 18:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:29.324 18:42:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:30.261 nvme0n1 00:32:30.261 18:42:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:30.261 18:42:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:30.261 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:30.261 Zero copy mechanism will not be used. 00:32:30.261 Running I/O for 2 seconds... 
00:32:32.137 2384.00 IOPS, 298.00 MiB/s [2024-10-08T16:43:00.674Z] 2478.50 IOPS, 309.81 MiB/s 00:32:32.137 Latency(us) 00:32:32.137 [2024-10-08T16:43:00.674Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:32.137 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:32:32.137 nvme0n1 : 2.01 2475.70 309.46 0.00 0.00 6442.42 3106.89 10485.76 00:32:32.137 [2024-10-08T16:43:00.674Z] =================================================================================================================== 00:32:32.137 [2024-10-08T16:43:00.674Z] Total : 2475.70 309.46 0.00 0.00 6442.42 3106.89 10485.76 00:32:32.137 { 00:32:32.137 "results": [ 00:32:32.137 { 00:32:32.137 "job": "nvme0n1", 00:32:32.137 "core_mask": "0x2", 00:32:32.137 "workload": "randwrite", 00:32:32.137 "status": "finished", 00:32:32.137 "queue_depth": 16, 00:32:32.137 "io_size": 131072, 00:32:32.137 "runtime": 2.008722, 00:32:32.137 "iops": 2475.7034572230505, 00:32:32.137 "mibps": 309.4629321528813, 00:32:32.137 "io_failed": 0, 00:32:32.137 "io_timeout": 0, 00:32:32.137 "avg_latency_us": 6442.416102360152, 00:32:32.137 "min_latency_us": 3106.8918518518517, 00:32:32.137 "max_latency_us": 10485.76 00:32:32.137 } 00:32:32.137 ], 00:32:32.137 "core_count": 1 00:32:32.137 } 00:32:32.137 18:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:32.137 18:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:32.137 18:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:32.137 18:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:32.137 18:43:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:32.137 | select(.opcode=="crc32c") 00:32:32.137 | "\(.module_name) \(.executed)"' 00:32:32.707 18:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:32.707 18:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:32.707 18:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:32.707 18:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:32.707 18:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1332112 00:32:32.707 18:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1332112 ']' 00:32:32.707 18:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1332112 00:32:32.708 18:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:32:32.708 18:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:32.708 18:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1332112 00:32:32.708 18:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:32.708 18:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' 
reactor_1 = sudo ']' 00:32:32.708 18:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1332112' 00:32:32.708 killing process with pid 1332112 00:32:32.708 18:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1332112 00:32:32.708 Received shutdown signal, test time was about 2.000000 seconds 00:32:32.708 00:32:32.708 Latency(us) 00:32:32.708 [2024-10-08T16:43:01.245Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:32.708 [2024-10-08T16:43:01.245Z] =================================================================================================================== 00:32:32.708 [2024-10-08T16:43:01.245Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:32.708 18:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1332112 00:32:33.276 18:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1330360 00:32:33.276 18:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1330360 ']' 00:32:33.276 18:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1330360 00:32:33.276 18:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:32:33.276 18:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:33.276 18:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1330360 00:32:33.276 18:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:33.276 18:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:33.276 18:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1330360' 00:32:33.276 killing process with pid 1330360 00:32:33.276 18:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1330360 00:32:33.276 18:43:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1330360 00:32:33.845 00:32:33.845 real 0m21.759s 00:32:33.845 user 0m45.350s 00:32:33.845 sys 0m5.628s 00:32:33.845 18:43:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:33.845 18:43:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:33.845 ************************************ 00:32:33.845 END TEST nvmf_digest_clean 00:32:33.845 ************************************ 00:32:33.845 18:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:32:33.845 18:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:33.845 18:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:33.845 18:43:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:33.845 ************************************ 00:32:33.845 START TEST nvmf_digest_error 00:32:33.845 ************************************ 00:32:33.845 18:43:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # 
run_digest_error 00:32:33.845 18:43:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:32:33.845 18:43:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:33.845 18:43:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:33.845 18:43:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:33.845 18:43:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # nvmfpid=1332923 00:32:33.845 18:43:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:32:33.845 18:43:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # waitforlisten 1332923 00:32:33.845 18:43:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1332923 ']' 00:32:33.845 18:43:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:33.845 18:43:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:33.845 18:43:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:33.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:33.845 18:43:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:33.845 18:43:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:33.845 [2024-10-08 18:43:02.207051] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:32:33.845 [2024-10-08 18:43:02.207144] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:33.845 [2024-10-08 18:43:02.368903] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:34.415 [2024-10-08 18:43:02.656390] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:34.415 [2024-10-08 18:43:02.656521] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:34.415 [2024-10-08 18:43:02.656586] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:34.415 [2024-10-08 18:43:02.656647] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:34.415 [2024-10-08 18:43:02.656720] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
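The prologue traced above launches the digest-error target with --wait-for-rpc, so the framework stays paused until RPCs arrive; that ordering is what lets the test reroute the crc32c opcode to the error accel module before anything else initializes. A condensed sketch of that target-side bring-up follows, using only commands visible in this trace plus clearly assumed placeholders for the steps common_target_config performs through RPCs that are not echoed here (paths shortened relative to the spdk checkout; the job itself runs rpc.py from /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts):

  # start the target paused inside the test netns, as traced above
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &

  # route crc32c to the "error" accel module while the accel framework is still down
  ./scripts/rpc.py accel_assign_opc -o crc32c -m error

  # resume initialization (something like this has to follow --wait-for-rpc, though the
  # call is not echoed in this excerpt); common_target_config then creates the null0
  # bdev, the TCP transport and the 10.0.0.2:4420 listener, of which only the
  # resulting *NOTICE* lines show up in this log, so those RPCs are assumptions here
  ./scripts/rpc.py framework_start_init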
00:32:34.415 [2024-10-08 18:43:02.658398] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:32:35.001 18:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:35.001 18:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:32:35.001 18:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:35.001 18:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:35.001 18:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:35.001 18:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:35.001 18:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:32:35.001 18:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.001 18:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:35.001 [2024-10-08 18:43:03.273270] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:32:35.001 18:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.001 18:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:32:35.001 18:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:32:35.001 18:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.001 18:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:35.001 null0 00:32:35.001 [2024-10-08 18:43:03.473793] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:35.001 [2024-10-08 18:43:03.498148] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:35.001 18:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.001 18:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:32:35.001 18:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:32:35.001 18:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:32:35.001 18:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:32:35.001 18:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:32:35.001 18:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:32:35.001 18:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1333080 00:32:35.001 18:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1333080 /var/tmp/bperf.sock 00:32:35.001 18:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1333080 ']' 
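On the initiator side the same pattern recurs for every subtest in this section: bdevperf is launched with -z against a private RPC socket, the remote namespace is attached over NVMe/TCP with a digest option enabled, and the workload is kicked off through bdevperf.py. A minimal sketch assembled from the commands traced around this point (paths shortened relative to the spdk checkout; the clean-test variants earlier in this section additionally pass --wait-for-rpc and call framework_start_init before attaching):

  # bdevperf pinned to core 1 (-m 2), idle until told to run (-z), own RPC socket
  ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z &

  # attach the target's namespace over TCP with data digest enabled
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # run the configured 2-second workload against the resulting nvme0n1 bdev
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests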
00:32:35.001 18:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:35.001 18:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:35.001 18:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:35.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:35.001 18:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:35.001 18:43:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:35.260 [2024-10-08 18:43:03.563521] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:32:35.260 [2024-10-08 18:43:03.563627] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1333080 ] 00:32:35.260 [2024-10-08 18:43:03.679179] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:35.518 [2024-10-08 18:43:03.910351] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:32:35.777 18:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:35.777 18:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:32:35.778 18:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:35.778 18:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:36.035 18:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:32:36.035 18:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.035 18:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:36.293 18:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.293 18:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:36.293 18:43:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:36.551 nvme0n1 00:32:36.810 18:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:32:36.810 18:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.810 18:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
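The block just traced is what separates the error test from the clean runs: crc32c error injection is first disabled and then re-armed with -t corrupt -i 256 through rpc_cmd, which, unlike bperf_rpc, is not pointed at the bperf socket and so reaches the target application. From that point the data digests the target produces stop matching what the initiator computes on receive, which is what the data digest error lines below are reporting; bdev_nvme, configured above with --nvme-error-stat --bdev-retry-count -1, counts these as transient transport errors and retries rather than failing the I/O (io_failed stays 0 in the summary further down). The pass check later in this trace reads that counter back over the bperf socket; a sketch of that arm-and-verify pair, reusing the exact flags and jq filter shown in this section:

  # target side: re-arm crc32c corruption (flags as traced above; -i 256 is passed
  # straight through to the accel error module, with semantics defined by that module)
  ./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256

  # initiator side: pull the per-status-code NVMe error counters for nvme0n1 and keep
  # only the transient transport error count (57 in this run, which satisfies (( 57 > 0 )))
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'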
00:32:36.810 18:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.810 18:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:32:36.810 18:43:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:37.068 Running I/O for 2 seconds... 00:32:37.068 [2024-10-08 18:43:05.397764] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159bdf0) 00:32:37.068 [2024-10-08 18:43:05.397869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:8493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.068 [2024-10-08 18:43:05.397918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:37.068 [2024-10-08 18:43:05.436006] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159bdf0) 00:32:37.068 [2024-10-08 18:43:05.436117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:13424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.068 [2024-10-08 18:43:05.436163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:37.069 [2024-10-08 18:43:05.474342] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159bdf0) 00:32:37.069 [2024-10-08 18:43:05.474422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:20797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.069 [2024-10-08 18:43:05.474466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:37.069 [2024-10-08 18:43:05.515270] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159bdf0) 00:32:37.069 [2024-10-08 18:43:05.515350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:24692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.069 [2024-10-08 18:43:05.515393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:37.069 [2024-10-08 18:43:05.550948] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159bdf0) 00:32:37.069 [2024-10-08 18:43:05.551003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:15403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.069 [2024-10-08 18:43:05.551027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:37.069 [2024-10-08 18:43:05.588709] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159bdf0) 00:32:37.069 [2024-10-08 18:43:05.588787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:21217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.069 [2024-10-08 18:43:05.588832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:37.328 [2024-10-08 18:43:05.628916] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159bdf0) 00:32:37.328 [2024-10-08 18:43:05.628997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:25200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.328 [2024-10-08 18:43:05.629040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:37.328 [2024-10-08 18:43:05.668781] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159bdf0) 00:32:37.328 [2024-10-08 18:43:05.668860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:1081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.328 [2024-10-08 18:43:05.668904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:37.328 [2024-10-08 18:43:05.708007] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159bdf0) 00:32:37.328 [2024-10-08 18:43:05.708090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.328 [2024-10-08 18:43:05.708133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:37.328 [2024-10-08 18:43:05.739243] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159bdf0) 00:32:37.328 [2024-10-08 18:43:05.739321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:7336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.328 [2024-10-08 18:43:05.739365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:37.328 [2024-10-08 18:43:05.768862] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159bdf0) 00:32:37.328 [2024-10-08 18:43:05.768941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:16864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.328 [2024-10-08 18:43:05.768985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:37.328 [2024-10-08 18:43:05.802797] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159bdf0) 00:32:37.328 [2024-10-08 18:43:05.802874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.328 [2024-10-08 18:43:05.802917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:37.328 [2024-10-08 18:43:05.843464] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159bdf0) 00:32:37.328 [2024-10-08 18:43:05.843541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:10327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.328 [2024-10-08 18:43:05.843585] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:37.588 [2024-10-08 18:43:05.884237] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159bdf0) 00:32:37.588 [2024-10-08 18:43:05.884316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:10869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.588 [2024-10-08 18:43:05.884359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:37.588 [2024-10-08 18:43:05.923358] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159bdf0) 00:32:37.588 [2024-10-08 18:43:05.923446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:1177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.588 [2024-10-08 18:43:05.923489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:37.588 [2024-10-08 18:43:05.959762] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159bdf0) 00:32:37.588 [2024-10-08 18:43:05.959838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:17393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.588 [2024-10-08 18:43:05.959880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:37.588 [2024-10-08 18:43:05.987911] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159bdf0) 00:32:37.588 [2024-10-08 18:43:05.987990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:20251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.588 [2024-10-08 18:43:05.988033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:37.588 [2024-10-08 18:43:06.026721] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159bdf0) 00:32:37.588 [2024-10-08 18:43:06.026800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:10661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.588 [2024-10-08 18:43:06.026842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:37.588 [2024-10-08 18:43:06.081830] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159bdf0) 00:32:37.588 [2024-10-08 18:43:06.081908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:22998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.588 [2024-10-08 18:43:06.081966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:37.588 [2024-10-08 18:43:06.112500] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159bdf0) 00:32:37.588 [2024-10-08 18:43:06.112579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:17955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.588 [2024-10-08 
18:43:06.112622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:37.847 [2024-10-08 18:43:06.144086] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159bdf0) 00:32:37.847 [2024-10-08 18:43:06.144163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.847 [2024-10-08 18:43:06.144205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:37.847 [2024-10-08 18:43:06.181767] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159bdf0) 00:32:37.847 [2024-10-08 18:43:06.181844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:16969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.847 [2024-10-08 18:43:06.181887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:37.847 [2024-10-08 18:43:06.209247] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159bdf0) 00:32:37.847 [2024-10-08 18:43:06.209323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:16641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.847 [2024-10-08 18:43:06.209366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:37.847 [2024-10-08 18:43:06.258893] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159bdf0) 00:32:37.847 [2024-10-08 18:43:06.258971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.847 [2024-10-08 18:43:06.259013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:37.847 [2024-10-08 18:43:06.298222] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159bdf0) 00:32:37.847 [2024-10-08 18:43:06.298298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:13794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.847 [2024-10-08 18:43:06.298342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:37.847 [2024-10-08 18:43:06.325947] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159bdf0) 00:32:37.847 [2024-10-08 18:43:06.326024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:8482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.847 [2024-10-08 18:43:06.326066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:37.847 [2024-10-08 18:43:06.348221] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159bdf0) 00:32:37.847 [2024-10-08 18:43:06.348299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:13240 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.847 [2024-10-08 18:43:06.348343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.105 6911.00 IOPS, 27.00 MiB/s [2024-10-08T16:43:06.642Z] [2024-10-08 18:43:06.386258] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159bdf0) 00:32:38.105 [2024-10-08 18:43:06.386338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:9335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.105 [2024-10-08 18:43:06.386381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.105 [2024-10-08 18:43:06.416819] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159bdf0) 00:32:38.105 [2024-10-08 18:43:06.416896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:17563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.105 [2024-10-08 18:43:06.416940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.105 [2024-10-08 18:43:06.454361] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159bdf0) 00:32:38.105 [2024-10-08 18:43:06.454438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:22940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.105 [2024-10-08 18:43:06.454480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.105 [2024-10-08 18:43:06.487045] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159bdf0) 00:32:38.105 [2024-10-08 18:43:06.487122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:1990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.105 [2024-10-08 18:43:06.487164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.105 [2024-10-08 18:43:06.518799] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159bdf0) 00:32:38.105 [2024-10-08 18:43:06.518841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:14072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.105 [2024-10-08 18:43:06.518865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.105 [2024-10-08 18:43:06.549706] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159bdf0) 00:32:38.105 [2024-10-08 18:43:06.549783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:18947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.105 [2024-10-08 18:43:06.549825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.105 [2024-10-08 18:43:06.580532] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159bdf0) 00:32:38.105 [2024-10-08 
18:43:06.580609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.105 [2024-10-08 18:43:06.580672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.105 [2024-10-08 18:43:06.619738] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159bdf0) 00:32:38.105 [2024-10-08 18:43:06.619781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:2340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.105 [2024-10-08 18:43:06.619804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.363 [2024-10-08 18:43:06.656840] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159bdf0) 00:32:38.363 [2024-10-08 18:43:06.656919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.363 [2024-10-08 18:43:06.656977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.363 [2024-10-08 18:43:06.692394] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159bdf0) 00:32:38.363 [2024-10-08 18:43:06.692474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:9837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.363 [2024-10-08 18:43:06.692528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.363 [2024-10-08 18:43:06.712099] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159bdf0) 00:32:38.363 [2024-10-08 18:43:06.712177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:25259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.363 [2024-10-08 18:43:06.712220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.363 [2024-10-08 18:43:06.741965] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159bdf0) 00:32:38.363 [2024-10-08 18:43:06.742049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:1 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.363 [2024-10-08 18:43:06.742092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.363 [2024-10-08 18:43:06.765691] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159bdf0) 00:32:38.363 [2024-10-08 18:43:06.765743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:7488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.363 [2024-10-08 18:43:06.765763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.363 [2024-10-08 18:43:06.791828] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x159bdf0) 00:32:38.363 [2024-10-08 18:43:06.791871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:8540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.363 [2024-10-08 18:43:06.791896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.363 [2024-10-08 18:43:06.827746] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159bdf0) 00:32:38.363 [2024-10-08 18:43:06.827823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:4492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.363 [2024-10-08 18:43:06.827865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.363 [2024-10-08 18:43:06.866522] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159bdf0) 00:32:38.363 [2024-10-08 18:43:06.866599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:14007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.363 [2024-10-08 18:43:06.866641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.621 [2024-10-08 18:43:06.901749] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159bdf0) 00:32:38.621 [2024-10-08 18:43:06.901793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:1925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.621 [2024-10-08 18:43:06.901817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.621 [2024-10-08 18:43:06.937388] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159bdf0) 00:32:38.621 [2024-10-08 18:43:06.937486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:19330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.621 [2024-10-08 18:43:06.937531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.621 [2024-10-08 18:43:06.976632] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159bdf0) 00:32:38.621 [2024-10-08 18:43:06.976730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:23214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.621 [2024-10-08 18:43:06.976774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.621 [2024-10-08 18:43:07.012516] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159bdf0) 00:32:38.621 [2024-10-08 18:43:07.012595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:7121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.621 [2024-10-08 18:43:07.012638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.621 [2024-10-08 18:43:07.044442] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159bdf0) 00:32:38.621 [2024-10-08 18:43:07.044522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:17157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.621 [2024-10-08 18:43:07.044565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.621 [2024-10-08 18:43:07.082545] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159bdf0) 00:32:38.622 [2024-10-08 18:43:07.082624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:6342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.622 [2024-10-08 18:43:07.082682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.622 [2024-10-08 18:43:07.112750] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159bdf0) 00:32:38.622 [2024-10-08 18:43:07.112830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:23793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.622 [2024-10-08 18:43:07.112873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.622 [2024-10-08 18:43:07.147971] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159bdf0) 00:32:38.622 [2024-10-08 18:43:07.148052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:3083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.622 [2024-10-08 18:43:07.148096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.880 [2024-10-08 18:43:07.187601] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159bdf0) 00:32:38.880 [2024-10-08 18:43:07.187704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:12342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.880 [2024-10-08 18:43:07.187730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.880 [2024-10-08 18:43:07.224357] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159bdf0) 00:32:38.880 [2024-10-08 18:43:07.224435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:20025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.880 [2024-10-08 18:43:07.224479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.880 [2024-10-08 18:43:07.260350] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159bdf0) 00:32:38.880 [2024-10-08 18:43:07.260432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.880 [2024-10-08 18:43:07.260477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:32:38.880 [2024-10-08 18:43:07.293931] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159bdf0) 00:32:38.880 [2024-10-08 18:43:07.294009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:18512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.880 [2024-10-08 18:43:07.294052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.880 [2024-10-08 18:43:07.328133] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159bdf0) 00:32:38.880 [2024-10-08 18:43:07.328212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:24899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.880 [2024-10-08 18:43:07.328257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.880 7266.50 IOPS, 28.38 MiB/s [2024-10-08T16:43:07.417Z] [2024-10-08 18:43:07.362308] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159bdf0) 00:32:38.880 [2024-10-08 18:43:07.362383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.880 [2024-10-08 18:43:07.362425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.880 00:32:38.880 Latency(us) 00:32:38.880 [2024-10-08T16:43:07.417Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:38.880 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:32:38.880 nvme0n1 : 2.01 7293.62 28.49 0.00 0.00 17513.41 5024.43 58642.58 00:32:38.880 [2024-10-08T16:43:07.417Z] =================================================================================================================== 00:32:38.880 [2024-10-08T16:43:07.417Z] Total : 7293.62 28.49 0.00 0.00 17513.41 5024.43 58642.58 00:32:38.880 { 00:32:38.880 "results": [ 00:32:38.880 { 00:32:38.880 "job": "nvme0n1", 00:32:38.880 "core_mask": "0x2", 00:32:38.880 "workload": "randread", 00:32:38.880 "status": "finished", 00:32:38.880 "queue_depth": 128, 00:32:38.880 "io_size": 4096, 00:32:38.880 "runtime": 2.010114, 00:32:38.880 "iops": 7293.616182962757, 00:32:38.880 "mibps": 28.49068821469827, 00:32:38.880 "io_failed": 0, 00:32:38.880 "io_timeout": 0, 00:32:38.880 "avg_latency_us": 17513.414776112993, 00:32:38.880 "min_latency_us": 5024.426666666666, 00:32:38.880 "max_latency_us": 58642.583703703705 00:32:38.880 } 00:32:38.880 ], 00:32:38.880 "core_count": 1 00:32:38.880 } 00:32:38.880 18:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:32:38.880 18:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:32:38.880 18:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:32:38.880 18:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:32:38.880 | .driver_specific 00:32:38.880 | .nvme_error 00:32:38.880 | .status_code 00:32:38.880 | 
.command_transient_transport_error' 00:32:39.450 18:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 57 > 0 )) 00:32:39.450 18:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1333080 00:32:39.450 18:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1333080 ']' 00:32:39.450 18:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1333080 00:32:39.450 18:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:32:39.450 18:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:39.450 18:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1333080 00:32:39.450 18:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:39.450 18:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:39.450 18:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1333080' 00:32:39.450 killing process with pid 1333080 00:32:39.450 18:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1333080 00:32:39.450 Received shutdown signal, test time was about 2.000000 seconds 00:32:39.450 00:32:39.450 Latency(us) 00:32:39.450 [2024-10-08T16:43:07.987Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:39.450 [2024-10-08T16:43:07.987Z] =================================================================================================================== 00:32:39.450 [2024-10-08T16:43:07.987Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:39.450 18:43:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1333080 00:32:40.024 18:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:32:40.024 18:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:32:40.024 18:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:32:40.024 18:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:32:40.024 18:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:32:40.024 18:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1333616 00:32:40.024 18:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:32:40.024 18:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1333616 /var/tmp/bperf.sock 00:32:40.024 18:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1333616 ']' 00:32:40.024 18:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:40.024 18:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:40.024 18:43:08 
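The "(( 57 > 0 ))" check above is the pass criterion for the 4 KiB randread case that just finished: digest.sh reads bdevperf's iostat JSON over the bperf RPC socket and requires the transient-transport-error counter for nvme0n1 to be non-zero before tearing the bdevperf process down. A minimal standalone sketch of that check, assuming only the SPDK checkout path and the /var/tmp/bperf.sock socket that are visible in the trace:

#!/usr/bin/env bash
# Sketch of the transient-error check performed by digest.sh above.
# Paths and sockets are the ones shown in the trace; adjust for your environment.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BPERF_SOCK=/var/tmp/bperf.sock

# bdev_get_iostat exposes per-bdev NVMe error counters (the test enables them
# with the --nvme-error-stat option passed to bdev_nvme_set_options, as seen in
# the setup of the next case below); pull out the COMMAND TRANSIENT TRANSPORT
# ERROR count for nvme0n1.
errcount=$("$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')

# The case passes only if the injected digest corruption actually surfaced as
# transient transport errors (57 of them in the run above).
(( errcount > 0 ))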
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:40.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:40.024 18:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:40.024 18:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:40.024 [2024-10-08 18:43:08.349039] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:32:40.024 [2024-10-08 18:43:08.349135] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1333616 ] 00:32:40.024 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:40.024 Zero copy mechanism will not be used. 00:32:40.024 [2024-10-08 18:43:08.418880] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:40.024 [2024-10-08 18:43:08.546251] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:32:40.285 18:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:40.285 18:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:32:40.285 18:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:40.285 18:43:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:40.855 18:43:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:32:40.855 18:43:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.855 18:43:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:40.855 18:43:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.855 18:43:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:40.855 18:43:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:41.793 nvme0n1 00:32:41.793 18:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:32:41.793 18:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.793 18:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:41.793 18:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.793 18:43:10 
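The trace above is the setup half of the next case (randread, 128 KiB I/O, queue depth 16, data digest enabled): bdevperf is restarted in wait-for-RPC mode on its own socket, NVMe error statistics and unlimited bdev retries are switched on, any stale crc32c error injection is disabled, the NVMe/TCP controller is attached with --ddgst, and corruption of crc32c results is then armed. A condensed replay of those steps, using the binaries and RPC arguments from the trace; the socket that rpc_cmd targets for the accel calls is not shown in this excerpt, so it is left at the default here:

#!/usr/bin/env bash
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BPERF_SOCK=/var/tmp/bperf.sock

# Start bdevperf on core 1 (-m 2): 128 KiB random reads, queue depth 16, 2 s
# runtime, and -z so it waits for a perform_tests RPC on its own socket.
"$SPDK_DIR/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" \
    -w randread -o 131072 -t 2 -q 16 -z &

# Host-side bdev_nvme options: keep NVMe error statistics (needed for the
# counter check) and keep retrying failed I/O (-1 retry count) so the injected
# errors do not fail the job outright.
"$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options \
    --nvme-error-stat --bdev-retry-count -1

# Make sure no crc32c error injection is still armed from a previous case
# (issued via rpc_cmd in the trace; default RPC socket assumed here).
"$SPDK_DIR/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable

# Attach the NVMe/TCP controller with data digest enabled; the namespace shows
# up as bdev nvme0n1.
"$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Arm the fault: corrupt crc32c results computed through the accel framework
# (the -i 32 argument is copied verbatim from the trace).
"$SPDK_DIR/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32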
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:32:41.793 18:43:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:41.793 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:41.793 Zero copy mechanism will not be used. 00:32:41.793 Running I/O for 2 seconds... 00:32:42.052 [2024-10-08 18:43:10.336444] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.052 [2024-10-08 18:43:10.336552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.052 [2024-10-08 18:43:10.336602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:42.052 [2024-10-08 18:43:10.345503] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.052 [2024-10-08 18:43:10.345586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.052 [2024-10-08 18:43:10.345634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:42.052 [2024-10-08 18:43:10.355338] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.052 [2024-10-08 18:43:10.355374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.052 [2024-10-08 18:43:10.355393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:42.052 [2024-10-08 18:43:10.363919] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.052 [2024-10-08 18:43:10.363964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.052 [2024-10-08 18:43:10.363985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:42.052 [2024-10-08 18:43:10.371898] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.052 [2024-10-08 18:43:10.371935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.052 [2024-10-08 18:43:10.371956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:42.052 [2024-10-08 18:43:10.380052] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.052 [2024-10-08 18:43:10.380088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.052 [2024-10-08 18:43:10.380109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
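With corruption armed, the traced bperf_py perform_tests call starts the queued workload, and for the next two seconds every READ completion comes back as a data digest error that the host reports as COMMAND TRANSIENT TRANSPORT ERROR (00/22); those records, repeated below, are the events the counter check at the end of the case adds up. The trigger itself is a single RPC, sketched here with the script path and socket taken from the trace:

#!/usr/bin/env bash
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BPERF_SOCK=/var/tmp/bperf.sock

# Kick off the 128 KiB randread job that bdevperf (-z) has been holding; it
# runs for the 2 seconds given at start-up and then prints the usual per-job
# latency summary plus a JSON result block, as seen for the previous case.
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests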
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:42.052 [2024-10-08 18:43:10.388577] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.052 [2024-10-08 18:43:10.388614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.052 [2024-10-08 18:43:10.388634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:42.052 [2024-10-08 18:43:10.396549] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.052 [2024-10-08 18:43:10.396586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.052 [2024-10-08 18:43:10.396605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:42.052 [2024-10-08 18:43:10.403528] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.052 [2024-10-08 18:43:10.403565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.052 [2024-10-08 18:43:10.403585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:42.052 [2024-10-08 18:43:10.409692] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.052 [2024-10-08 18:43:10.409726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.052 [2024-10-08 18:43:10.409745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:42.052 [2024-10-08 18:43:10.415384] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.052 [2024-10-08 18:43:10.415417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.052 [2024-10-08 18:43:10.415436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:42.052 [2024-10-08 18:43:10.421164] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.052 [2024-10-08 18:43:10.421202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.053 [2024-10-08 18:43:10.421221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:42.053 [2024-10-08 18:43:10.426953] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.053 [2024-10-08 18:43:10.426986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.053 [2024-10-08 18:43:10.427005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:42.053 [2024-10-08 18:43:10.432712] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.053 [2024-10-08 18:43:10.432746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.053 [2024-10-08 18:43:10.432765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:42.053 [2024-10-08 18:43:10.438605] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.053 [2024-10-08 18:43:10.438640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.053 [2024-10-08 18:43:10.438667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:42.053 [2024-10-08 18:43:10.444407] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.053 [2024-10-08 18:43:10.444441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.053 [2024-10-08 18:43:10.444460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:42.053 [2024-10-08 18:43:10.450187] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.053 [2024-10-08 18:43:10.450220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.053 [2024-10-08 18:43:10.450239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:42.053 [2024-10-08 18:43:10.456118] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.053 [2024-10-08 18:43:10.456151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.053 [2024-10-08 18:43:10.456170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:42.053 [2024-10-08 18:43:10.462586] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.053 [2024-10-08 18:43:10.462620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.053 [2024-10-08 18:43:10.462639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:42.053 [2024-10-08 18:43:10.468705] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.053 [2024-10-08 18:43:10.468739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.053 [2024-10-08 18:43:10.468758] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:42.053 [2024-10-08 18:43:10.476003] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.053 [2024-10-08 18:43:10.476038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.053 [2024-10-08 18:43:10.476074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:42.053 [2024-10-08 18:43:10.483675] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.053 [2024-10-08 18:43:10.483710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.053 [2024-10-08 18:43:10.483730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:42.053 [2024-10-08 18:43:10.491709] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.053 [2024-10-08 18:43:10.491745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.053 [2024-10-08 18:43:10.491764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:42.053 [2024-10-08 18:43:10.500065] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.053 [2024-10-08 18:43:10.500094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.053 [2024-10-08 18:43:10.500109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:42.053 [2024-10-08 18:43:10.507753] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.053 [2024-10-08 18:43:10.507784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.053 [2024-10-08 18:43:10.507820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:42.053 [2024-10-08 18:43:10.514522] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.053 [2024-10-08 18:43:10.514558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.053 [2024-10-08 18:43:10.514589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:42.053 [2024-10-08 18:43:10.521113] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.053 [2024-10-08 18:43:10.521141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.053 
[2024-10-08 18:43:10.521172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:42.053 [2024-10-08 18:43:10.528080] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.053 [2024-10-08 18:43:10.528111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.053 [2024-10-08 18:43:10.528143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:42.053 [2024-10-08 18:43:10.533743] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.053 [2024-10-08 18:43:10.533773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.053 [2024-10-08 18:43:10.533805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:42.053 [2024-10-08 18:43:10.539294] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.053 [2024-10-08 18:43:10.539328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.053 [2024-10-08 18:43:10.539359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:42.053 [2024-10-08 18:43:10.545037] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.053 [2024-10-08 18:43:10.545074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.053 [2024-10-08 18:43:10.545105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:42.053 [2024-10-08 18:43:10.551103] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.053 [2024-10-08 18:43:10.551147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.053 [2024-10-08 18:43:10.551165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:42.053 [2024-10-08 18:43:10.557807] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.053 [2024-10-08 18:43:10.557836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.053 [2024-10-08 18:43:10.557869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:42.053 [2024-10-08 18:43:10.565193] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.053 [2024-10-08 18:43:10.565224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.053 [2024-10-08 18:43:10.565242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:42.053 [2024-10-08 18:43:10.572859] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.053 [2024-10-08 18:43:10.572891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.053 [2024-10-08 18:43:10.572908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:42.053 [2024-10-08 18:43:10.580383] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.053 [2024-10-08 18:43:10.580415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.053 [2024-10-08 18:43:10.580433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:42.053 [2024-10-08 18:43:10.587069] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.053 [2024-10-08 18:43:10.587104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.054 [2024-10-08 18:43:10.587135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:42.313 [2024-10-08 18:43:10.593973] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.313 [2024-10-08 18:43:10.594003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.313 [2024-10-08 18:43:10.594036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:42.313 [2024-10-08 18:43:10.602121] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.313 [2024-10-08 18:43:10.602167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.313 [2024-10-08 18:43:10.602183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:42.313 [2024-10-08 18:43:10.610217] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.313 [2024-10-08 18:43:10.610246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.313 [2024-10-08 18:43:10.610278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:42.313 [2024-10-08 18:43:10.618587] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.313 [2024-10-08 18:43:10.618620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:2 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.313 [2024-10-08 18:43:10.618638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:42.313 [2024-10-08 18:43:10.626348] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.313 [2024-10-08 18:43:10.626380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.313 [2024-10-08 18:43:10.626411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:42.313 [2024-10-08 18:43:10.633797] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.313 [2024-10-08 18:43:10.633829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.313 [2024-10-08 18:43:10.633847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:42.313 [2024-10-08 18:43:10.643326] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.313 [2024-10-08 18:43:10.643355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.313 [2024-10-08 18:43:10.643386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:42.313 [2024-10-08 18:43:10.650391] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.313 [2024-10-08 18:43:10.650421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.313 [2024-10-08 18:43:10.650452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:42.313 [2024-10-08 18:43:10.657317] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.313 [2024-10-08 18:43:10.657346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.313 [2024-10-08 18:43:10.657378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:42.313 [2024-10-08 18:43:10.664867] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.313 [2024-10-08 18:43:10.664898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.313 [2024-10-08 18:43:10.664941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:42.313 [2024-10-08 18:43:10.670829] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.313 [2024-10-08 18:43:10.670858] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.313 [2024-10-08 18:43:10.670890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:42.313 [2024-10-08 18:43:10.676971] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.313 [2024-10-08 18:43:10.677000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.313 [2024-10-08 18:43:10.677016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:42.313 [2024-10-08 18:43:10.683302] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.313 [2024-10-08 18:43:10.683330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.313 [2024-10-08 18:43:10.683362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:42.313 [2024-10-08 18:43:10.689289] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.313 [2024-10-08 18:43:10.689317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.313 [2024-10-08 18:43:10.689348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:42.313 [2024-10-08 18:43:10.695057] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.313 [2024-10-08 18:43:10.695085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.313 [2024-10-08 18:43:10.695116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:42.313 [2024-10-08 18:43:10.700979] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.313 [2024-10-08 18:43:10.701008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.313 [2024-10-08 18:43:10.701038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:42.313 [2024-10-08 18:43:10.706875] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.313 [2024-10-08 18:43:10.706920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.313 [2024-10-08 18:43:10.706938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:42.313 [2024-10-08 18:43:10.713218] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 
00:32:42.313 [2024-10-08 18:43:10.713263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.313 [2024-10-08 18:43:10.713281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:42.313 [2024-10-08 18:43:10.719422] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.313 [2024-10-08 18:43:10.719451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.313 [2024-10-08 18:43:10.719483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:42.313 [2024-10-08 18:43:10.726569] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.313 [2024-10-08 18:43:10.726598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.313 [2024-10-08 18:43:10.726629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:42.313 [2024-10-08 18:43:10.732854] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.313 [2024-10-08 18:43:10.732883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.313 [2024-10-08 18:43:10.732916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:42.313 [2024-10-08 18:43:10.739185] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.313 [2024-10-08 18:43:10.739213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.313 [2024-10-08 18:43:10.739245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:42.313 [2024-10-08 18:43:10.745178] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.313 [2024-10-08 18:43:10.745205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.313 [2024-10-08 18:43:10.745237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:42.313 [2024-10-08 18:43:10.751210] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.313 [2024-10-08 18:43:10.751237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.313 [2024-10-08 18:43:10.751268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:42.313 [2024-10-08 18:43:10.757341] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.313 [2024-10-08 18:43:10.757369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.313 [2024-10-08 18:43:10.757401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:42.313 [2024-10-08 18:43:10.763546] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.313 [2024-10-08 18:43:10.763574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.313 [2024-10-08 18:43:10.763606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:42.313 [2024-10-08 18:43:10.769569] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.313 [2024-10-08 18:43:10.769597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.313 [2024-10-08 18:43:10.769635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:42.313 [2024-10-08 18:43:10.773706] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.313 [2024-10-08 18:43:10.773734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.313 [2024-10-08 18:43:10.773766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:42.313 [2024-10-08 18:43:10.778516] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.313 [2024-10-08 18:43:10.778544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.313 [2024-10-08 18:43:10.778575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:42.313 [2024-10-08 18:43:10.783863] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.313 [2024-10-08 18:43:10.783892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.313 [2024-10-08 18:43:10.783923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:42.313 [2024-10-08 18:43:10.789329] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.313 [2024-10-08 18:43:10.789355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.314 [2024-10-08 18:43:10.789385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:42.314 [2024-10-08 18:43:10.794796] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.314 [2024-10-08 18:43:10.794825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.314 [2024-10-08 18:43:10.794857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:42.314 [2024-10-08 18:43:10.800244] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.314 [2024-10-08 18:43:10.800271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.314 [2024-10-08 18:43:10.800302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:42.314 [2024-10-08 18:43:10.805808] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.314 [2024-10-08 18:43:10.805836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.314 [2024-10-08 18:43:10.805867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:42.314 [2024-10-08 18:43:10.811101] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.314 [2024-10-08 18:43:10.811128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.314 [2024-10-08 18:43:10.811158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:42.314 [2024-10-08 18:43:10.816344] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.314 [2024-10-08 18:43:10.816381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.314 [2024-10-08 18:43:10.816413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:42.314 [2024-10-08 18:43:10.821708] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.314 [2024-10-08 18:43:10.821735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.314 [2024-10-08 18:43:10.821767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:42.314 [2024-10-08 18:43:10.827288] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.314 [2024-10-08 18:43:10.827315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.314 [2024-10-08 18:43:10.827346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:32:42.314 [2024-10-08 18:43:10.833399] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.314 [2024-10-08 18:43:10.833427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.314 [2024-10-08 18:43:10.833458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:42.314 [2024-10-08 18:43:10.841196] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.314 [2024-10-08 18:43:10.841224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.314 [2024-10-08 18:43:10.841255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:42.314 [2024-10-08 18:43:10.848675] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.314 [2024-10-08 18:43:10.848710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.314 [2024-10-08 18:43:10.848727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:42.573 [2024-10-08 18:43:10.854570] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.573 [2024-10-08 18:43:10.854598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.573 [2024-10-08 18:43:10.854629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:42.573 [2024-10-08 18:43:10.860380] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.573 [2024-10-08 18:43:10.860413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.573 [2024-10-08 18:43:10.860444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:42.573 [2024-10-08 18:43:10.866969] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.573 [2024-10-08 18:43:10.867013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.573 [2024-10-08 18:43:10.867029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:42.573 [2024-10-08 18:43:10.872543] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.573 [2024-10-08 18:43:10.872571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.573 [2024-10-08 18:43:10.872602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:42.573 [2024-10-08 18:43:10.877784] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.573 [2024-10-08 18:43:10.877813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.573 [2024-10-08 18:43:10.877844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:42.573 [2024-10-08 18:43:10.883153] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.573 [2024-10-08 18:43:10.883179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.573 [2024-10-08 18:43:10.883210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:42.573 [2024-10-08 18:43:10.888437] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.573 [2024-10-08 18:43:10.888465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.573 [2024-10-08 18:43:10.888495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:42.573 [2024-10-08 18:43:10.893744] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.573 [2024-10-08 18:43:10.893772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.573 [2024-10-08 18:43:10.893803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:42.573 [2024-10-08 18:43:10.899038] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.573 [2024-10-08 18:43:10.899065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.573 [2024-10-08 18:43:10.899096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:42.573 [2024-10-08 18:43:10.904291] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.573 [2024-10-08 18:43:10.904319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.573 [2024-10-08 18:43:10.904350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:42.573 [2024-10-08 18:43:10.909616] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.573 [2024-10-08 18:43:10.909665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.573 [2024-10-08 18:43:10.909682] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:42.573 [2024-10-08 18:43:10.914907] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.573 [2024-10-08 18:43:10.914949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.573 [2024-10-08 18:43:10.914971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:42.573 [2024-10-08 18:43:10.920130] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.573 [2024-10-08 18:43:10.920158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.573 [2024-10-08 18:43:10.920188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:42.573 [2024-10-08 18:43:10.925462] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.573 [2024-10-08 18:43:10.925488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.573 [2024-10-08 18:43:10.925519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:42.573 [2024-10-08 18:43:10.932098] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.573 [2024-10-08 18:43:10.932126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.573 [2024-10-08 18:43:10.932158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:42.573 [2024-10-08 18:43:10.938224] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.573 [2024-10-08 18:43:10.938251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.573 [2024-10-08 18:43:10.938282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:42.573 [2024-10-08 18:43:10.943766] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.573 [2024-10-08 18:43:10.943794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.573 [2024-10-08 18:43:10.943826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:42.573 [2024-10-08 18:43:10.949019] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.573 [2024-10-08 18:43:10.949046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:42.573 [2024-10-08 18:43:10.949076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:42.573 [2024-10-08 18:43:10.954429] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.573 [2024-10-08 18:43:10.954457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.573 [2024-10-08 18:43:10.954488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:42.573 [2024-10-08 18:43:10.959718] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.573 [2024-10-08 18:43:10.959746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.573 [2024-10-08 18:43:10.959778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:42.573 [2024-10-08 18:43:10.965815] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.573 [2024-10-08 18:43:10.965865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.573 [2024-10-08 18:43:10.965883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:42.573 [2024-10-08 18:43:10.971490] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.573 [2024-10-08 18:43:10.971518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.573 [2024-10-08 18:43:10.971549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:42.573 [2024-10-08 18:43:10.976795] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.573 [2024-10-08 18:43:10.976824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.573 [2024-10-08 18:43:10.976856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:42.573 [2024-10-08 18:43:10.982260] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.573 [2024-10-08 18:43:10.982288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.573 [2024-10-08 18:43:10.982318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:42.573 [2024-10-08 18:43:10.988168] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.573 [2024-10-08 18:43:10.988196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15968 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.573 [2024-10-08 18:43:10.988226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:42.573 [2024-10-08 18:43:10.993419] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.573 [2024-10-08 18:43:10.993446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.573 [2024-10-08 18:43:10.993476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:42.573 [2024-10-08 18:43:10.998752] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.573 [2024-10-08 18:43:10.998785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.573 [2024-10-08 18:43:10.998817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:42.573 [2024-10-08 18:43:11.004041] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.573 [2024-10-08 18:43:11.004069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.573 [2024-10-08 18:43:11.004099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:42.573 [2024-10-08 18:43:11.009229] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.573 [2024-10-08 18:43:11.009255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.573 [2024-10-08 18:43:11.009286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:42.573 [2024-10-08 18:43:11.014665] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.573 [2024-10-08 18:43:11.014693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.573 [2024-10-08 18:43:11.014724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:42.573 [2024-10-08 18:43:11.019891] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.573 [2024-10-08 18:43:11.019918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.573 [2024-10-08 18:43:11.019950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:42.573 [2024-10-08 18:43:11.025132] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.573 [2024-10-08 18:43:11.025159] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.573 [2024-10-08 18:43:11.025190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:42.573 [2024-10-08 18:43:11.030398] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.573 [2024-10-08 18:43:11.030425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.573 [2024-10-08 18:43:11.030456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:42.573 [2024-10-08 18:43:11.036148] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.573 [2024-10-08 18:43:11.036177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.573 [2024-10-08 18:43:11.036208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:42.574 [2024-10-08 18:43:11.041909] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.574 [2024-10-08 18:43:11.041937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.574 [2024-10-08 18:43:11.041969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:42.574 [2024-10-08 18:43:11.047211] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.574 [2024-10-08 18:43:11.047238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.574 [2024-10-08 18:43:11.047269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:42.574 [2024-10-08 18:43:11.053347] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.574 [2024-10-08 18:43:11.053374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.574 [2024-10-08 18:43:11.053405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:42.574 [2024-10-08 18:43:11.058883] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.574 [2024-10-08 18:43:11.058920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.574 [2024-10-08 18:43:11.058951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:42.574 [2024-10-08 18:43:11.065354] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.574 [2024-10-08 18:43:11.065383] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.574 [2024-10-08 18:43:11.065414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:42.574 [2024-10-08 18:43:11.073006] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.574 [2024-10-08 18:43:11.073034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.574 [2024-10-08 18:43:11.073065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:42.574 [2024-10-08 18:43:11.079150] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.574 [2024-10-08 18:43:11.079178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.574 [2024-10-08 18:43:11.079208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:42.574 [2024-10-08 18:43:11.085172] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.574 [2024-10-08 18:43:11.085200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.574 [2024-10-08 18:43:11.085231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:42.574 [2024-10-08 18:43:11.091259] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.574 [2024-10-08 18:43:11.091286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.574 [2024-10-08 18:43:11.091318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:42.574 [2024-10-08 18:43:11.097558] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.574 [2024-10-08 18:43:11.097586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.574 [2024-10-08 18:43:11.097616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:42.574 [2024-10-08 18:43:11.103791] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.574 [2024-10-08 18:43:11.103821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.574 [2024-10-08 18:43:11.103852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:42.833 [2024-10-08 18:43:11.110156] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1f6df90) 00:32:42.833 [2024-10-08 18:43:11.110184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.833 [2024-10-08 18:43:11.110215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:42.833 [2024-10-08 18:43:11.113609] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.833 [2024-10-08 18:43:11.113660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.833 [2024-10-08 18:43:11.113678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:42.833 [2024-10-08 18:43:11.119322] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.833 [2024-10-08 18:43:11.119350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.833 [2024-10-08 18:43:11.119380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:42.833 [2024-10-08 18:43:11.126051] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.833 [2024-10-08 18:43:11.126079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.833 [2024-10-08 18:43:11.126113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:42.833 [2024-10-08 18:43:11.131556] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.833 [2024-10-08 18:43:11.131582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.833 [2024-10-08 18:43:11.131614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:42.833 [2024-10-08 18:43:11.137904] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.833 [2024-10-08 18:43:11.137932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.833 [2024-10-08 18:43:11.137963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:42.833 [2024-10-08 18:43:11.145331] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.833 [2024-10-08 18:43:11.145374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.833 [2024-10-08 18:43:11.145391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:42.833 [2024-10-08 18:43:11.154436] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.833 [2024-10-08 18:43:11.154465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.833 [2024-10-08 18:43:11.154497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:42.833 [2024-10-08 18:43:11.162213] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.833 [2024-10-08 18:43:11.162241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.833 [2024-10-08 18:43:11.162273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:42.833 [2024-10-08 18:43:11.168305] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.833 [2024-10-08 18:43:11.168333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.833 [2024-10-08 18:43:11.168372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:42.833 [2024-10-08 18:43:11.174052] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.833 [2024-10-08 18:43:11.174080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.833 [2024-10-08 18:43:11.174109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:42.833 [2024-10-08 18:43:11.179889] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.833 [2024-10-08 18:43:11.179918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.833 [2024-10-08 18:43:11.179951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:42.833 [2024-10-08 18:43:11.185801] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.833 [2024-10-08 18:43:11.185837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.833 [2024-10-08 18:43:11.185869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:42.833 [2024-10-08 18:43:11.191434] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.833 [2024-10-08 18:43:11.191478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.833 [2024-10-08 18:43:11.191494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:32:42.833 [2024-10-08 18:43:11.196829] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.833 [2024-10-08 18:43:11.196858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.833 [2024-10-08 18:43:11.196890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:42.833 [2024-10-08 18:43:11.202084] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.833 [2024-10-08 18:43:11.202111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.833 [2024-10-08 18:43:11.202142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:42.833 [2024-10-08 18:43:11.207362] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.833 [2024-10-08 18:43:11.207389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.833 [2024-10-08 18:43:11.207420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:42.833 [2024-10-08 18:43:11.212459] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.833 [2024-10-08 18:43:11.212489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.833 [2024-10-08 18:43:11.212520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:42.833 [2024-10-08 18:43:11.218096] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.833 [2024-10-08 18:43:11.218130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.833 [2024-10-08 18:43:11.218162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:42.833 [2024-10-08 18:43:11.224010] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.833 [2024-10-08 18:43:11.224038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.833 [2024-10-08 18:43:11.224069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:42.833 [2024-10-08 18:43:11.230148] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.833 [2024-10-08 18:43:11.230176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.833 [2024-10-08 18:43:11.230207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:42.833 [2024-10-08 18:43:11.234575] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.834 [2024-10-08 18:43:11.234602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.834 [2024-10-08 18:43:11.234634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:42.834 [2024-10-08 18:43:11.239853] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.834 [2024-10-08 18:43:11.239883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.834 [2024-10-08 18:43:11.239914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:42.834 [2024-10-08 18:43:11.245190] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.834 [2024-10-08 18:43:11.245218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.834 [2024-10-08 18:43:11.245249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:42.834 [2024-10-08 18:43:11.250273] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.834 [2024-10-08 18:43:11.250300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.834 [2024-10-08 18:43:11.250330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:42.834 [2024-10-08 18:43:11.255590] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.834 [2024-10-08 18:43:11.255619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.834 [2024-10-08 18:43:11.255657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:42.834 [2024-10-08 18:43:11.260740] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.834 [2024-10-08 18:43:11.260768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.834 [2024-10-08 18:43:11.260800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:42.834 [2024-10-08 18:43:11.265886] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.834 [2024-10-08 18:43:11.265914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.834 [2024-10-08 18:43:11.265946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:42.834 [2024-10-08 18:43:11.272579] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.834 [2024-10-08 18:43:11.272607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.834 [2024-10-08 18:43:11.272639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:42.834 [2024-10-08 18:43:11.278254] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.834 [2024-10-08 18:43:11.278282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.834 [2024-10-08 18:43:11.278313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:42.834 [2024-10-08 18:43:11.283785] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.834 [2024-10-08 18:43:11.283813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.834 [2024-10-08 18:43:11.283845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:42.834 [2024-10-08 18:43:11.289690] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.834 [2024-10-08 18:43:11.289719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.834 [2024-10-08 18:43:11.289751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:42.834 [2024-10-08 18:43:11.295280] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.834 [2024-10-08 18:43:11.295309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.834 [2024-10-08 18:43:11.295341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:42.834 [2024-10-08 18:43:11.301979] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.834 [2024-10-08 18:43:11.302008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.834 [2024-10-08 18:43:11.302040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:42.834 [2024-10-08 18:43:11.309394] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.834 [2024-10-08 18:43:11.309424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.834 [2024-10-08 18:43:11.309456] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:42.834 [2024-10-08 18:43:11.317290] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.834 [2024-10-08 18:43:11.317319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.834 [2024-10-08 18:43:11.317358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:42.834 4950.00 IOPS, 618.75 MiB/s [2024-10-08T16:43:11.371Z] [2024-10-08 18:43:11.327549] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.834 [2024-10-08 18:43:11.327578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.834 [2024-10-08 18:43:11.327610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:42.834 [2024-10-08 18:43:11.335130] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.834 [2024-10-08 18:43:11.335161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.834 [2024-10-08 18:43:11.335193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:42.834 [2024-10-08 18:43:11.342890] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.834 [2024-10-08 18:43:11.342923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.834 [2024-10-08 18:43:11.342940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:42.834 [2024-10-08 18:43:11.349438] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.834 [2024-10-08 18:43:11.349467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.834 [2024-10-08 18:43:11.349498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:42.834 [2024-10-08 18:43:11.353976] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.834 [2024-10-08 18:43:11.354019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.834 [2024-10-08 18:43:11.354036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:42.834 [2024-10-08 18:43:11.362891] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:42.834 [2024-10-08 18:43:11.362923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18016 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.834 [2024-10-08 18:43:11.362940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:43.093 [2024-10-08 18:43:11.370239] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.093 [2024-10-08 18:43:11.370269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.093 [2024-10-08 18:43:11.370301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:43.093 [2024-10-08 18:43:11.378207] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.093 [2024-10-08 18:43:11.378237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.093 [2024-10-08 18:43:11.378269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:43.093 [2024-10-08 18:43:11.385763] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.093 [2024-10-08 18:43:11.385794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.093 [2024-10-08 18:43:11.385827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.093 [2024-10-08 18:43:11.394097] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.093 [2024-10-08 18:43:11.394127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.093 [2024-10-08 18:43:11.394159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:43.093 [2024-10-08 18:43:11.402550] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.093 [2024-10-08 18:43:11.402587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.093 [2024-10-08 18:43:11.402619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:43.093 [2024-10-08 18:43:11.410880] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.093 [2024-10-08 18:43:11.410914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.093 [2024-10-08 18:43:11.410932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:43.093 [2024-10-08 18:43:11.417908] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.093 [2024-10-08 18:43:11.417988] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.093 [2024-10-08 18:43:11.418033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.093 [2024-10-08 18:43:11.430018] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.093 [2024-10-08 18:43:11.430092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.093 [2024-10-08 18:43:11.430133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:43.093 [2024-10-08 18:43:11.441735] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.093 [2024-10-08 18:43:11.441808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.093 [2024-10-08 18:43:11.441850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:43.093 [2024-10-08 18:43:11.453599] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.093 [2024-10-08 18:43:11.453698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.093 [2024-10-08 18:43:11.453745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:43.093 [2024-10-08 18:43:11.465843] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.093 [2024-10-08 18:43:11.465917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.093 [2024-10-08 18:43:11.465976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.093 [2024-10-08 18:43:11.477538] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.093 [2024-10-08 18:43:11.477611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.093 [2024-10-08 18:43:11.477670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:43.093 [2024-10-08 18:43:11.489562] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.093 [2024-10-08 18:43:11.489638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.093 [2024-10-08 18:43:11.489703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:43.093 [2024-10-08 18:43:11.501644] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.093 
[2024-10-08 18:43:11.501743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.093 [2024-10-08 18:43:11.501787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:43.093 [2024-10-08 18:43:11.514240] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.094 [2024-10-08 18:43:11.514314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.094 [2024-10-08 18:43:11.514356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.094 [2024-10-08 18:43:11.526313] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.094 [2024-10-08 18:43:11.526387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.094 [2024-10-08 18:43:11.526430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:43.094 [2024-10-08 18:43:11.538880] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.094 [2024-10-08 18:43:11.538954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.094 [2024-10-08 18:43:11.538994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:43.094 [2024-10-08 18:43:11.550950] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.094 [2024-10-08 18:43:11.551024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.094 [2024-10-08 18:43:11.551066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:43.094 [2024-10-08 18:43:11.563561] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.094 [2024-10-08 18:43:11.563638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.094 [2024-10-08 18:43:11.563700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.094 [2024-10-08 18:43:11.575493] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.094 [2024-10-08 18:43:11.575579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.094 [2024-10-08 18:43:11.575623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:43.094 [2024-10-08 18:43:11.587374] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1f6df90) 00:32:43.094 [2024-10-08 18:43:11.587447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.094 [2024-10-08 18:43:11.587487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:43.094 [2024-10-08 18:43:11.599405] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.094 [2024-10-08 18:43:11.599479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.094 [2024-10-08 18:43:11.599520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:43.094 [2024-10-08 18:43:11.611402] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.094 [2024-10-08 18:43:11.611474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.094 [2024-10-08 18:43:11.611515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.094 [2024-10-08 18:43:11.623613] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.094 [2024-10-08 18:43:11.623705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.094 [2024-10-08 18:43:11.623750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:43.352 [2024-10-08 18:43:11.635994] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.352 [2024-10-08 18:43:11.636069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.352 [2024-10-08 18:43:11.636110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:43.352 [2024-10-08 18:43:11.648222] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.352 [2024-10-08 18:43:11.648295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.352 [2024-10-08 18:43:11.648337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:43.352 [2024-10-08 18:43:11.660734] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.352 [2024-10-08 18:43:11.660809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.352 [2024-10-08 18:43:11.660851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.352 [2024-10-08 18:43:11.673217] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.352 [2024-10-08 18:43:11.673291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.352 [2024-10-08 18:43:11.673332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:43.352 [2024-10-08 18:43:11.686364] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.352 [2024-10-08 18:43:11.686437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.352 [2024-10-08 18:43:11.686479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:43.352 [2024-10-08 18:43:11.698532] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.352 [2024-10-08 18:43:11.698607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.352 [2024-10-08 18:43:11.698647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:43.352 [2024-10-08 18:43:11.711085] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.352 [2024-10-08 18:43:11.711160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.352 [2024-10-08 18:43:11.711202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.352 [2024-10-08 18:43:11.722922] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.352 [2024-10-08 18:43:11.722995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.352 [2024-10-08 18:43:11.723037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:43.352 [2024-10-08 18:43:11.735985] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.352 [2024-10-08 18:43:11.736058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.352 [2024-10-08 18:43:11.736100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:43.352 [2024-10-08 18:43:11.748096] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.352 [2024-10-08 18:43:11.748170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.352 [2024-10-08 18:43:11.748211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:32:43.352 [2024-10-08 18:43:11.761013] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.352 [2024-10-08 18:43:11.761090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.352 [2024-10-08 18:43:11.761134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.352 [2024-10-08 18:43:11.773881] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.352 [2024-10-08 18:43:11.773956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.352 [2024-10-08 18:43:11.774000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:43.352 [2024-10-08 18:43:11.787249] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.352 [2024-10-08 18:43:11.787327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.352 [2024-10-08 18:43:11.787391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:43.352 [2024-10-08 18:43:11.800199] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.352 [2024-10-08 18:43:11.800275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.352 [2024-10-08 18:43:11.800316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:43.352 [2024-10-08 18:43:11.812795] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.352 [2024-10-08 18:43:11.812869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.352 [2024-10-08 18:43:11.812911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.352 [2024-10-08 18:43:11.824495] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.352 [2024-10-08 18:43:11.824569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.352 [2024-10-08 18:43:11.824612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:43.352 [2024-10-08 18:43:11.836206] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.352 [2024-10-08 18:43:11.836279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.352 [2024-10-08 18:43:11.836320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:43.352 [2024-10-08 18:43:11.847900] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.352 [2024-10-08 18:43:11.847973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.352 [2024-10-08 18:43:11.848013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:43.352 [2024-10-08 18:43:11.859722] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.352 [2024-10-08 18:43:11.859796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.352 [2024-10-08 18:43:11.859847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.352 [2024-10-08 18:43:11.871512] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.353 [2024-10-08 18:43:11.871585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.353 [2024-10-08 18:43:11.871624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:43.353 [2024-10-08 18:43:11.884422] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.353 [2024-10-08 18:43:11.884496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.353 [2024-10-08 18:43:11.884538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:43.611 [2024-10-08 18:43:11.897330] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.611 [2024-10-08 18:43:11.897420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.611 [2024-10-08 18:43:11.897464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:43.611 [2024-10-08 18:43:11.909964] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.611 [2024-10-08 18:43:11.910041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.611 [2024-10-08 18:43:11.910084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.611 [2024-10-08 18:43:11.921538] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.611 [2024-10-08 18:43:11.921611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.611 [2024-10-08 18:43:11.921671] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:43.611 [2024-10-08 18:43:11.934276] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.611 [2024-10-08 18:43:11.934355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.611 [2024-10-08 18:43:11.934378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:43.611 [2024-10-08 18:43:11.947954] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.611 [2024-10-08 18:43:11.948030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.611 [2024-10-08 18:43:11.948075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:43.611 [2024-10-08 18:43:11.960144] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.611 [2024-10-08 18:43:11.960221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.611 [2024-10-08 18:43:11.960263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.611 [2024-10-08 18:43:11.971979] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.611 [2024-10-08 18:43:11.972054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.611 [2024-10-08 18:43:11.972096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:43.611 [2024-10-08 18:43:11.984153] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.611 [2024-10-08 18:43:11.984228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.611 [2024-10-08 18:43:11.984270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:43.611 [2024-10-08 18:43:11.996060] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.611 [2024-10-08 18:43:11.996133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.611 [2024-10-08 18:43:11.996173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:43.611 [2024-10-08 18:43:12.008983] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.611 [2024-10-08 18:43:12.009060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.611 [2024-10-08 18:43:12.009105] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.611 [2024-10-08 18:43:12.023027] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.611 [2024-10-08 18:43:12.023103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.611 [2024-10-08 18:43:12.023146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:43.611 [2024-10-08 18:43:12.036324] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.611 [2024-10-08 18:43:12.036401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.611 [2024-10-08 18:43:12.036443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:43.611 [2024-10-08 18:43:12.048187] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.611 [2024-10-08 18:43:12.048261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.611 [2024-10-08 18:43:12.048303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:43.611 [2024-10-08 18:43:12.060414] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.611 [2024-10-08 18:43:12.060489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.611 [2024-10-08 18:43:12.060530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.611 [2024-10-08 18:43:12.072862] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.611 [2024-10-08 18:43:12.072935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.611 [2024-10-08 18:43:12.072977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:43.611 [2024-10-08 18:43:12.084955] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.611 [2024-10-08 18:43:12.085027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.611 [2024-10-08 18:43:12.085069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:43.611 [2024-10-08 18:43:12.096926] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.611 [2024-10-08 18:43:12.097000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:32:43.611 [2024-10-08 18:43:12.097041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:43.611 [2024-10-08 18:43:12.108726] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.611 [2024-10-08 18:43:12.108822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.611 [2024-10-08 18:43:12.108879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.611 [2024-10-08 18:43:12.120958] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.611 [2024-10-08 18:43:12.121033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.611 [2024-10-08 18:43:12.121075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:43.611 [2024-10-08 18:43:12.133261] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.612 [2024-10-08 18:43:12.133294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.612 [2024-10-08 18:43:12.133313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:43.612 [2024-10-08 18:43:12.142844] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.612 [2024-10-08 18:43:12.142917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.612 [2024-10-08 18:43:12.142959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:43.871 [2024-10-08 18:43:12.155128] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.871 [2024-10-08 18:43:12.155202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.871 [2024-10-08 18:43:12.155244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.871 [2024-10-08 18:43:12.167197] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.871 [2024-10-08 18:43:12.167268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.871 [2024-10-08 18:43:12.167310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:43.871 [2024-10-08 18:43:12.178979] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.871 [2024-10-08 18:43:12.179052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5056 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.871 [2024-10-08 18:43:12.179094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:43.871 [2024-10-08 18:43:12.189583] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.871 [2024-10-08 18:43:12.189672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.871 [2024-10-08 18:43:12.189718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:43.871 [2024-10-08 18:43:12.197127] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.871 [2024-10-08 18:43:12.197198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.871 [2024-10-08 18:43:12.197238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.871 [2024-10-08 18:43:12.206880] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.871 [2024-10-08 18:43:12.206952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.871 [2024-10-08 18:43:12.206993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:43.871 [2024-10-08 18:43:12.216240] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.871 [2024-10-08 18:43:12.216315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.871 [2024-10-08 18:43:12.216356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:43.871 [2024-10-08 18:43:12.224560] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.871 [2024-10-08 18:43:12.224634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.871 [2024-10-08 18:43:12.224701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:43.871 [2024-10-08 18:43:12.234288] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.871 [2024-10-08 18:43:12.234363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.871 [2024-10-08 18:43:12.234404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.871 [2024-10-08 18:43:12.246323] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.872 [2024-10-08 18:43:12.246399] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.872 [2024-10-08 18:43:12.246441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:43.872 [2024-10-08 18:43:12.258585] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.872 [2024-10-08 18:43:12.258676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.872 [2024-10-08 18:43:12.258722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:43.872 [2024-10-08 18:43:12.270752] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.872 [2024-10-08 18:43:12.270824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.872 [2024-10-08 18:43:12.270869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:43.872 [2024-10-08 18:43:12.284122] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.872 [2024-10-08 18:43:12.284195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.872 [2024-10-08 18:43:12.284236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.872 [2024-10-08 18:43:12.295989] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.872 [2024-10-08 18:43:12.296062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.872 [2024-10-08 18:43:12.296117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:43.872 [2024-10-08 18:43:12.308018] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.872 [2024-10-08 18:43:12.308089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.872 [2024-10-08 18:43:12.308131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:43.872 [2024-10-08 18:43:12.319837] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6df90) 00:32:43.872 [2024-10-08 18:43:12.319913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.872 [2024-10-08 18:43:12.319955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:43.872 3842.00 IOPS, 480.25 MiB/s [2024-10-08T16:43:12.409Z] [2024-10-08 18:43:12.335328] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1f6df90)
00:32:43.872 [2024-10-08 18:43:12.335402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:43.872 [2024-10-08 18:43:12.335445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:43.872
00:32:43.872 Latency(us)
00:32:43.872 [2024-10-08T16:43:12.409Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:43.872 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:32:43.872 nvme0n1 : 2.01 3833.75 479.22 0.00 0.00 4165.92 1007.31 14660.65
00:32:43.872 [2024-10-08T16:43:12.409Z] ===================================================================================================================
00:32:43.872 [2024-10-08T16:43:12.409Z] Total : 3833.75 479.22 0.00 0.00 4165.92 1007.31 14660.65
00:32:43.872 {
00:32:43.872 "results": [
00:32:43.872 {
00:32:43.872 "job": "nvme0n1",
00:32:43.872 "core_mask": "0x2",
00:32:43.872 "workload": "randread",
00:32:43.872 "status": "finished",
00:32:43.872 "queue_depth": 16,
00:32:43.872 "io_size": 131072,
00:32:43.872 "runtime": 2.008475,
00:32:43.872 "iops": 3833.7544654526446,
00:32:43.872 "mibps": 479.2193081815806,
00:32:43.872 "io_failed": 0,
00:32:43.872 "io_timeout": 0,
00:32:43.872 "avg_latency_us": 4165.918463876864,
00:32:43.872 "min_latency_us": 1007.3125925925926,
00:32:43.872 "max_latency_us": 14660.645925925926
00:32:43.872 }
00:32:43.872 ],
00:32:43.872 "core_count": 1
00:32:43.872 }
00:32:43.872 18:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:32:43.872 18:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:32:43.872 18:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:32:43.872 18:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:32:43.872 | .driver_specific
00:32:43.872 | .nvme_error
00:32:43.872 | .status_code
00:32:43.872 | .command_transient_transport_error'
00:32:44.438 18:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 248 > 0 ))
00:32:44.438 18:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1333616
00:32:44.438 18:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1333616 ']'
00:32:44.438 18:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1333616
00:32:44.438 18:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:32:44.438 18:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:32:44.438 18:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1333616
00:32:44.438 18:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:32:44.438 18:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:32:44.438 18:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error --
common/autotest_common.sh@968 -- # echo 'killing process with pid 1333616' 00:32:44.438 killing process with pid 1333616 00:32:44.438 18:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1333616 00:32:44.438 Received shutdown signal, test time was about 2.000000 seconds 00:32:44.438 00:32:44.438 Latency(us) 00:32:44.438 [2024-10-08T16:43:12.975Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:44.438 [2024-10-08T16:43:12.975Z] =================================================================================================================== 00:32:44.438 [2024-10-08T16:43:12.975Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:44.438 18:43:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1333616 00:32:45.004 18:43:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:32:45.004 18:43:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:32:45.004 18:43:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:32:45.004 18:43:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:32:45.004 18:43:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:32:45.004 18:43:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1334157 00:32:45.004 18:43:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:32:45.004 18:43:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1334157 /var/tmp/bperf.sock 00:32:45.004 18:43:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1334157 ']' 00:32:45.004 18:43:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:45.004 18:43:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:45.004 18:43:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:45.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:45.004 18:43:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:45.004 18:43:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:45.004 [2024-10-08 18:43:13.300621] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
00:32:45.004 [2024-10-08 18:43:13.300748] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1334157 ] 00:32:45.004 [2024-10-08 18:43:13.369912] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:45.004 [2024-10-08 18:43:13.479753] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:32:45.262 18:43:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:45.262 18:43:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:32:45.262 18:43:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:45.262 18:43:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:45.522 18:43:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:32:45.522 18:43:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.522 18:43:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:45.522 18:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.522 18:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:45.522 18:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:46.460 nvme0n1 00:32:46.460 18:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:32:46.460 18:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.460 18:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:46.460 18:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:46.460 18:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:32:46.460 18:43:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:46.460 Running I/O for 2 seconds... 
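The trace above condenses to the flow sketched below. This is a hedged reconstruction assembled from the commands visible in this log, not a verbatim excerpt of host/digest.sh; the paths, the /var/tmp/bperf.sock socket, the 10.0.0.2:4420 listener and the nqn.2016-06.io.spdk:cnode1 subsystem are taken from this run, and the assumption that the un-socketed accel_error_inject_error calls land on the nvmf target's default RPC socket is inferred, not shown.

# Hedged sketch of the randwrite digest-error pass (paths and sockets as in this run).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Start bdevperf idle (-z) on its own RPC socket: 2 s of randwrite, 4 KiB I/O, queue depth 128.
# (The harness then waits for the socket via waitforlisten before issuing RPCs.)
$SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &

# Host side: keep per-command NVMe error statistics and retry failed I/O indefinitely.
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Clear any crc32c injection while connecting, attach the controller with data digest
# enabled (--ddgst), then arm corruption of the next 256 crc32c operations.
$SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
$SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256

# Run the workload, then read how many completions ended in COMMAND TRANSIENT TRANSPORT ERROR.
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

With corruption armed, the writes that follow fail their data-digest check and complete with COMMAND TRANSIENT TRANSPORT ERROR (00/22); bdev_nvme keeps retrying them because of --bdev-retry-count -1, and the counter read via bdev_get_iostat appears to be what the test asserts on, as the (( 248 > 0 )) check after the preceding randread pass suggests.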
00:32:46.460 [2024-10-08 18:43:14.930190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:46.460 [2024-10-08 18:43:14.930524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.460 [2024-10-08 18:43:14.930565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:46.460 [2024-10-08 18:43:14.945060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:46.460 [2024-10-08 18:43:14.945371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.460 [2024-10-08 18:43:14.945404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:46.460 [2024-10-08 18:43:14.959848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:46.460 [2024-10-08 18:43:14.960154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.460 [2024-10-08 18:43:14.960187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:46.460 [2024-10-08 18:43:14.974732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:46.460 [2024-10-08 18:43:14.975060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.460 [2024-10-08 18:43:14.975093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:46.460 [2024-10-08 18:43:14.989555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:46.460 [2024-10-08 18:43:14.989889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.460 [2024-10-08 18:43:14.989929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:46.718 [2024-10-08 18:43:15.004505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:46.718 [2024-10-08 18:43:15.004827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.718 [2024-10-08 18:43:15.004860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:46.718 [2024-10-08 18:43:15.019144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:46.718 [2024-10-08 18:43:15.019456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.719 [2024-10-08 18:43:15.019489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 
dnr:0 00:32:46.719 [2024-10-08 18:43:15.033741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:46.719 [2024-10-08 18:43:15.034057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.719 [2024-10-08 18:43:15.034089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:46.719 [2024-10-08 18:43:15.048425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:46.719 [2024-10-08 18:43:15.048728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.719 [2024-10-08 18:43:15.048761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:46.719 [2024-10-08 18:43:15.063137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:46.719 [2024-10-08 18:43:15.063443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.719 [2024-10-08 18:43:15.063476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:46.719 [2024-10-08 18:43:15.077769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:46.719 [2024-10-08 18:43:15.078060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.719 [2024-10-08 18:43:15.078092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:46.719 [2024-10-08 18:43:15.092435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:46.719 [2024-10-08 18:43:15.092747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.719 [2024-10-08 18:43:15.092779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:46.719 [2024-10-08 18:43:15.107072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:46.719 [2024-10-08 18:43:15.107366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.719 [2024-10-08 18:43:15.107397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:46.719 [2024-10-08 18:43:15.121688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:46.719 [2024-10-08 18:43:15.122007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.719 [2024-10-08 18:43:15.122038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f 
p:0 m:0 dnr:0 00:32:46.719 [2024-10-08 18:43:15.136402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:46.719 [2024-10-08 18:43:15.136732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.719 [2024-10-08 18:43:15.136764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:46.719 [2024-10-08 18:43:15.151029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:46.719 [2024-10-08 18:43:15.151330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.719 [2024-10-08 18:43:15.151361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:46.719 [2024-10-08 18:43:15.165691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:46.719 [2024-10-08 18:43:15.165989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.719 [2024-10-08 18:43:15.166020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:46.719 [2024-10-08 18:43:15.180374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:46.719 [2024-10-08 18:43:15.180700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.719 [2024-10-08 18:43:15.180732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:46.719 [2024-10-08 18:43:15.195020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:46.719 [2024-10-08 18:43:15.195316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.719 [2024-10-08 18:43:15.195348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:46.719 [2024-10-08 18:43:15.209681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:46.719 [2024-10-08 18:43:15.209948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.719 [2024-10-08 18:43:15.209979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:46.719 [2024-10-08 18:43:15.224357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:46.719 [2024-10-08 18:43:15.224669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.719 [2024-10-08 18:43:15.224700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:007f p:0 m:0 dnr:0 00:32:46.719 [2024-10-08 18:43:15.239044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:46.719 [2024-10-08 18:43:15.239338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.719 [2024-10-08 18:43:15.239369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:46.719 [2024-10-08 18:43:15.253798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:46.719 [2024-10-08 18:43:15.254095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.719 [2024-10-08 18:43:15.254126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:46.978 [2024-10-08 18:43:15.268613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:46.978 [2024-10-08 18:43:15.268931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.978 [2024-10-08 18:43:15.268963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:46.978 [2024-10-08 18:43:15.283283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:46.978 [2024-10-08 18:43:15.283597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.978 [2024-10-08 18:43:15.283628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:46.978 [2024-10-08 18:43:15.297988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:46.978 [2024-10-08 18:43:15.298283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.978 [2024-10-08 18:43:15.298314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:46.978 [2024-10-08 18:43:15.312827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:46.978 [2024-10-08 18:43:15.313141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.978 [2024-10-08 18:43:15.313172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:46.978 [2024-10-08 18:43:15.327222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:46.978 [2024-10-08 18:43:15.327474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.978 [2024-10-08 18:43:15.327504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 
cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:46.978 [2024-10-08 18:43:15.341597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:46.978 [2024-10-08 18:43:15.341859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.978 [2024-10-08 18:43:15.341890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:46.978 [2024-10-08 18:43:15.355980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:46.978 [2024-10-08 18:43:15.356300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.978 [2024-10-08 18:43:15.356332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:46.978 [2024-10-08 18:43:15.370600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:46.978 [2024-10-08 18:43:15.370906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.978 [2024-10-08 18:43:15.370938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:46.978 [2024-10-08 18:43:15.385352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:46.978 [2024-10-08 18:43:15.385693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.978 [2024-10-08 18:43:15.385724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:46.978 [2024-10-08 18:43:15.399955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:46.978 [2024-10-08 18:43:15.400220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.978 [2024-10-08 18:43:15.400250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:46.978 [2024-10-08 18:43:15.414323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:46.978 [2024-10-08 18:43:15.414576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.978 [2024-10-08 18:43:15.414606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:46.978 [2024-10-08 18:43:15.428808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:46.978 [2024-10-08 18:43:15.429067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.978 [2024-10-08 18:43:15.429098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:46.978 [2024-10-08 18:43:15.443394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:46.978 [2024-10-08 18:43:15.443714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.978 [2024-10-08 18:43:15.443745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:46.978 [2024-10-08 18:43:15.456804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:46.978 [2024-10-08 18:43:15.457106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.978 [2024-10-08 18:43:15.457133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:46.978 [2024-10-08 18:43:15.470079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:46.978 [2024-10-08 18:43:15.470366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.978 [2024-10-08 18:43:15.470392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:46.978 [2024-10-08 18:43:15.482568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:46.978 [2024-10-08 18:43:15.482800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.978 [2024-10-08 18:43:15.482827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:46.978 [2024-10-08 18:43:15.495215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:46.978 [2024-10-08 18:43:15.495428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.978 [2024-10-08 18:43:15.495470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:46.978 [2024-10-08 18:43:15.507897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:46.978 [2024-10-08 18:43:15.508184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.978 [2024-10-08 18:43:15.508211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:47.237 [2024-10-08 18:43:15.520629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:47.237 [2024-10-08 18:43:15.520895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.237 [2024-10-08 18:43:15.520923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:47.237 [2024-10-08 18:43:15.533245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:47.237 [2024-10-08 18:43:15.533526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.237 [2024-10-08 18:43:15.533552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:47.237 [2024-10-08 18:43:15.545870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:47.237 [2024-10-08 18:43:15.546171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.237 [2024-10-08 18:43:15.546197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:47.237 [2024-10-08 18:43:15.558515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:47.237 [2024-10-08 18:43:15.558838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.237 [2024-10-08 18:43:15.558866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:47.237 [2024-10-08 18:43:15.571204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:47.237 [2024-10-08 18:43:15.571477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.237 [2024-10-08 18:43:15.571504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:47.237 [2024-10-08 18:43:15.583822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:47.237 [2024-10-08 18:43:15.584112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.237 [2024-10-08 18:43:15.584138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:47.237 [2024-10-08 18:43:15.596362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:47.237 [2024-10-08 18:43:15.596656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.237 [2024-10-08 18:43:15.596683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:47.237 [2024-10-08 18:43:15.609088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:47.237 [2024-10-08 18:43:15.609375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.237 [2024-10-08 18:43:15.609400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:47.237 [2024-10-08 18:43:15.621719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:47.237 [2024-10-08 18:43:15.622010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.237 [2024-10-08 18:43:15.622036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:47.237 [2024-10-08 18:43:15.634304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:47.238 [2024-10-08 18:43:15.634594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.238 [2024-10-08 18:43:15.634620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:47.238 [2024-10-08 18:43:15.646902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:47.238 [2024-10-08 18:43:15.647211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.238 [2024-10-08 18:43:15.647237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:47.238 [2024-10-08 18:43:15.659496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:47.238 [2024-10-08 18:43:15.659831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.238 [2024-10-08 18:43:15.659900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:47.238 [2024-10-08 18:43:15.691881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:47.238 [2024-10-08 18:43:15.692440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.238 [2024-10-08 18:43:15.692508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:47.238 [2024-10-08 18:43:15.724393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:47.238 [2024-10-08 18:43:15.724973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.238 [2024-10-08 18:43:15.725042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:47.238 [2024-10-08 18:43:15.756971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:47.238 [2024-10-08 18:43:15.757533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.238 [2024-10-08 18:43:15.757603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:47.496 [2024-10-08 18:43:15.789915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:47.496 [2024-10-08 18:43:15.790482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.496 [2024-10-08 18:43:15.790557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:47.496 [2024-10-08 18:43:15.822473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:47.496 [2024-10-08 18:43:15.823054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.496 [2024-10-08 18:43:15.823124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:47.496 [2024-10-08 18:43:15.855066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:47.496 [2024-10-08 18:43:15.855625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.496 [2024-10-08 18:43:15.855711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:47.496 [2024-10-08 18:43:15.887275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:47.496 [2024-10-08 18:43:15.887844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.496 [2024-10-08 18:43:15.887917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:47.496 15489.00 IOPS, 60.50 MiB/s [2024-10-08T16:43:16.033Z] [2024-10-08 18:43:15.919596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:47.496 [2024-10-08 18:43:15.920068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.496 [2024-10-08 18:43:15.920140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:47.496 [2024-10-08 18:43:15.951342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:47.496 [2024-10-08 18:43:15.951890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.496 [2024-10-08 18:43:15.951957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:47.496 [2024-10-08 18:43:15.983807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:47.496 [2024-10-08 18:43:15.984304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.496 [2024-10-08 18:43:15.984373] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:47.496 [2024-10-08 18:43:16.015734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:47.496 [2024-10-08 18:43:16.016285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.496 [2024-10-08 18:43:16.016363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:47.754 [2024-10-08 18:43:16.048533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:47.754 [2024-10-08 18:43:16.049107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.754 [2024-10-08 18:43:16.049175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:47.754 [2024-10-08 18:43:16.081215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:47.754 [2024-10-08 18:43:16.081769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.754 [2024-10-08 18:43:16.081859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:47.754 [2024-10-08 18:43:16.113721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:47.754 [2024-10-08 18:43:16.114274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.754 [2024-10-08 18:43:16.114342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:47.754 [2024-10-08 18:43:16.146260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:47.754 [2024-10-08 18:43:16.146830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.754 [2024-10-08 18:43:16.146898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:47.754 [2024-10-08 18:43:16.178845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:47.754 [2024-10-08 18:43:16.179405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.754 [2024-10-08 18:43:16.179474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:47.754 [2024-10-08 18:43:16.211316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:47.754 [2024-10-08 18:43:16.211898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.754 [2024-10-08 18:43:16.211967] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:47.754 [2024-10-08 18:43:16.243782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:47.754 [2024-10-08 18:43:16.244339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.754 [2024-10-08 18:43:16.244408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:47.754 [2024-10-08 18:43:16.276344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:47.754 [2024-10-08 18:43:16.276929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.754 [2024-10-08 18:43:16.277001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:48.013 [2024-10-08 18:43:16.309264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:48.013 [2024-10-08 18:43:16.309846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.013 [2024-10-08 18:43:16.309916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:48.013 [2024-10-08 18:43:16.342116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:48.013 [2024-10-08 18:43:16.342691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.014 [2024-10-08 18:43:16.342759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:48.014 [2024-10-08 18:43:16.374707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:48.014 [2024-10-08 18:43:16.375271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.014 [2024-10-08 18:43:16.375340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:48.014 [2024-10-08 18:43:16.407271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:48.014 [2024-10-08 18:43:16.407813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.014 [2024-10-08 18:43:16.407883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:48.014 [2024-10-08 18:43:16.439773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:48.014 [2024-10-08 18:43:16.440315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.014 [2024-10-08 
18:43:16.440382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:48.014 [2024-10-08 18:43:16.472400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:48.014 [2024-10-08 18:43:16.472975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.014 [2024-10-08 18:43:16.473045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:48.014 [2024-10-08 18:43:16.503199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:48.014 [2024-10-08 18:43:16.503736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.014 [2024-10-08 18:43:16.503768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:48.014 [2024-10-08 18:43:16.529186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:48.014 [2024-10-08 18:43:16.529732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.014 [2024-10-08 18:43:16.529771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:48.272 [2024-10-08 18:43:16.560844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:48.272 [2024-10-08 18:43:16.561402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.272 [2024-10-08 18:43:16.561475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:48.272 [2024-10-08 18:43:16.593468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:48.272 [2024-10-08 18:43:16.594054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.272 [2024-10-08 18:43:16.594125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:48.272 [2024-10-08 18:43:16.625681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:48.272 [2024-10-08 18:43:16.626233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.272 [2024-10-08 18:43:16.626303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:48.272 [2024-10-08 18:43:16.657334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:48.272 [2024-10-08 18:43:16.657861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.272 
[2024-10-08 18:43:16.657927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:48.272 [2024-10-08 18:43:16.682605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:48.272 [2024-10-08 18:43:16.682941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.272 [2024-10-08 18:43:16.683010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:48.272 [2024-10-08 18:43:16.701716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:48.272 [2024-10-08 18:43:16.702013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.272 [2024-10-08 18:43:16.702044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:48.272 [2024-10-08 18:43:16.716317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:48.272 [2024-10-08 18:43:16.716627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.272 [2024-10-08 18:43:16.716664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:48.272 [2024-10-08 18:43:16.730876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:48.272 [2024-10-08 18:43:16.731173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.272 [2024-10-08 18:43:16.731203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:48.272 [2024-10-08 18:43:16.745495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:48.272 [2024-10-08 18:43:16.745746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.272 [2024-10-08 18:43:16.745782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:48.272 [2024-10-08 18:43:16.760137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:48.272 [2024-10-08 18:43:16.760440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:48.272 [2024-10-08 18:43:16.760472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:48.272 [2024-10-08 18:43:16.780580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78 00:32:48.272 [2024-10-08 18:43:16.781148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:48.272 [2024-10-08 18:43:16.781219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:32:48.530 [2024-10-08 18:43:16.813489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78
00:32:48.530 [2024-10-08 18:43:16.814069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:48.530 [2024-10-08 18:43:16.814143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:32:48.530 [2024-10-08 18:43:16.846133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78
00:32:48.530 [2024-10-08 18:43:16.846702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:48.530 [2024-10-08 18:43:16.846773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:32:48.530 [2024-10-08 18:43:16.878711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78
00:32:48.530 [2024-10-08 18:43:16.879263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:48.530 [2024-10-08 18:43:16.879331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:32:48.530 [2024-10-08 18:43:16.911275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cbe0) with pdu=0x2000198fda78
00:32:48.530 [2024-10-08 18:43:16.911818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:48.530 [2024-10-08 18:43:16.911887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:32:48.530 12123.00 IOPS, 47.36 MiB/s
00:32:48.530 Latency(us)
00:32:48.530 [2024-10-08T16:43:17.067Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:48.530 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:32:48.530 nvme0n1 : 2.02 12080.52 47.19 0.00 0.00 10548.76 6092.42 33204.91
00:32:48.530 [2024-10-08T16:43:17.067Z] ===================================================================================================================
00:32:48.530 [2024-10-08T16:43:17.067Z] Total : 12080.52 47.19 0.00 0.00 10548.76 6092.42 33204.91
00:32:48.530 {
00:32:48.530 "results": [
00:32:48.531 {
00:32:48.531 "job": "nvme0n1",
00:32:48.531 "core_mask": "0x2",
00:32:48.531 "workload": "randwrite",
00:32:48.531 "status": "finished",
00:32:48.531 "queue_depth": 128,
00:32:48.531 "io_size": 4096,
00:32:48.531 "runtime": 2.020278,
00:32:48.531 "iops": 12080.515651806336,
00:32:48.531 "mibps": 47.1895142648685,
00:32:48.531 "io_failed": 0,
00:32:48.531 "io_timeout": 0,
00:32:48.531 "avg_latency_us": 10548.761728900909,
00:32:48.531 "min_latency_us": 6092.420740740741,
00:32:48.531 "max_latency_us": 33204.90666666667
00:32:48.531 }
00:32:48.531 ],
00:32:48.531 "core_count": 1
00:32:48.531 }
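The JSON summary above is internally consistent: at the 4096-byte I/O size used for this run, 12080.52 IOPS works out to 12080.52 x 4096 = 49,481,810 bytes/s, about 47.19 MiB/s, which matches the reported "mibps" value, and "io_failed": 0 is consistent with the test's --bdev-retry-count -1 setting (visible below when the next bdevperf instance is configured): the digest-corrupted writes are retried rather than reported as failed I/O. What the test actually asserts is the transient-transport-error counter, which the trace that follows reads back over the bdevperf RPC socket. A minimal standalone form of that query, using the rpc.py invocation, socket path, bdev name and jq filter exactly as they appear in the trace (nothing beyond those is implied):

  # Per-error-code NVMe statistics are available because the bdev layer is
  # configured with "bdev_nvme_set_options --nvme-error-stat" (the same call is
  # visible further below when the next bdevperf instance is set up).
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
  # digest.sh requires the printed count to be greater than zero; in the trace
  # below this check appears as "(( 95 > 0 ))".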
00:32:48.531 18:43:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:32:48.531 18:43:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:32:48.531 18:43:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:32:48.531 | .driver_specific
00:32:48.531 | .nvme_error
00:32:48.531 | .status_code
00:32:48.531 | .command_transient_transport_error'
00:32:48.531 18:43:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:32:49.097 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 95 > 0 ))
00:32:49.097 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1334157
00:32:49.097 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1334157 ']'
00:32:49.097 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1334157
00:32:49.097 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:32:49.097 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:32:49.097 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1334157
00:32:49.355 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:32:49.355 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:32:49.355 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1334157'
00:32:49.355 killing process with pid 1334157
00:32:49.355 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1334157
00:32:49.355 Received shutdown signal, test time was about 2.000000 seconds
00:32:49.355
00:32:49.355 Latency(us)
00:32:49.355 [2024-10-08T16:43:17.892Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:49.355 [2024-10-08T16:43:17.892Z] ===================================================================================================================
00:32:49.355 [2024-10-08T16:43:17.892Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:32:49.355 18:43:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1334157
00:32:49.613 18:43:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:32:49.613 18:43:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:32:49.613 18:43:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:32:49.613 18:43:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:32:49.613 18:43:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:32:49.613 18:43:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1334695
00:32:49.613 18:43:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r
/var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:32:49.613 18:43:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1334695 /var/tmp/bperf.sock 00:32:49.613 18:43:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1334695 ']' 00:32:49.613 18:43:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:49.613 18:43:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:49.613 18:43:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:49.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:49.613 18:43:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:49.613 18:43:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:49.613 [2024-10-08 18:43:18.072745] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:32:49.613 [2024-10-08 18:43:18.072849] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1334695 ] 00:32:49.613 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:49.613 Zero copy mechanism will not be used. 00:32:49.871 [2024-10-08 18:43:18.181131] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:49.871 [2024-10-08 18:43:18.357912] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:32:50.129 18:43:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:50.129 18:43:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:32:50.129 18:43:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:50.129 18:43:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:50.695 18:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:32:50.695 18:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:50.695 18:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:50.695 18:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:50.695 18:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:50.695 18:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:51.261 nvme0n1 00:32:51.261 18:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:32:51.261 18:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.261 18:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:51.261 18:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.261 18:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:32:51.261 18:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:51.521 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:51.521 Zero copy mechanism will not be used. 00:32:51.521 Running I/O for 2 seconds... 00:32:51.521 [2024-10-08 18:43:19.926868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:51.521 [2024-10-08 18:43:19.927686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.521 [2024-10-08 18:43:19.927774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:51.521 [2024-10-08 18:43:19.940908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:51.521 [2024-10-08 18:43:19.941700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.521 [2024-10-08 18:43:19.941776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:51.521 [2024-10-08 18:43:19.954896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:51.521 [2024-10-08 18:43:19.955684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.521 [2024-10-08 18:43:19.955758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:51.521 [2024-10-08 18:43:19.968900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:51.521 [2024-10-08 18:43:19.969684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.521 [2024-10-08 18:43:19.969755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:51.521 [2024-10-08 18:43:19.982748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:51.521 [2024-10-08 18:43:19.983482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.521 [2024-10-08 18:43:19.983555] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:51.521 [2024-10-08 18:43:19.996591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:51.521 [2024-10-08 18:43:19.997310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.521 [2024-10-08 18:43:19.997384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:51.521 [2024-10-08 18:43:20.006788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:51.521 [2024-10-08 18:43:20.007414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.521 [2024-10-08 18:43:20.007494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:51.521 [2024-10-08 18:43:20.019781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:51.521 [2024-10-08 18:43:20.020484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.521 [2024-10-08 18:43:20.020561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:51.521 [2024-10-08 18:43:20.034304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:51.521 [2024-10-08 18:43:20.035099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.521 [2024-10-08 18:43:20.035181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:51.521 [2024-10-08 18:43:20.053956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:51.521 [2024-10-08 18:43:20.054836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.521 [2024-10-08 18:43:20.054928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:51.779 [2024-10-08 18:43:20.070600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:51.779 [2024-10-08 18:43:20.071378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.779 [2024-10-08 18:43:20.071457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:51.779 [2024-10-08 18:43:20.084536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:51.779 [2024-10-08 18:43:20.085293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.779 
[2024-10-08 18:43:20.085374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:51.779 [2024-10-08 18:43:20.098522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:51.779 [2024-10-08 18:43:20.099279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.779 [2024-10-08 18:43:20.099356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:51.779 [2024-10-08 18:43:20.112480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:51.779 [2024-10-08 18:43:20.113226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.779 [2024-10-08 18:43:20.113302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:51.779 [2024-10-08 18:43:20.126623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:51.779 [2024-10-08 18:43:20.127421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.779 [2024-10-08 18:43:20.127498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:51.779 [2024-10-08 18:43:20.140850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:51.779 [2024-10-08 18:43:20.141621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.779 [2024-10-08 18:43:20.141714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:51.779 [2024-10-08 18:43:20.155112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:51.779 [2024-10-08 18:43:20.155910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.780 [2024-10-08 18:43:20.155988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:51.780 [2024-10-08 18:43:20.169464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:51.780 [2024-10-08 18:43:20.170256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.780 [2024-10-08 18:43:20.170332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:51.780 [2024-10-08 18:43:20.183603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:51.780 [2024-10-08 18:43:20.184412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:32:51.780 [2024-10-08 18:43:20.184488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:51.780 [2024-10-08 18:43:20.197713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:51.780 [2024-10-08 18:43:20.198487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.780 [2024-10-08 18:43:20.198563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:51.780 [2024-10-08 18:43:20.211875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:51.780 [2024-10-08 18:43:20.212667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.780 [2024-10-08 18:43:20.212743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:51.780 [2024-10-08 18:43:20.226129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:51.780 [2024-10-08 18:43:20.226932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.780 [2024-10-08 18:43:20.227045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:51.780 [2024-10-08 18:43:20.240306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:51.780 [2024-10-08 18:43:20.241110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.780 [2024-10-08 18:43:20.241187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:51.780 [2024-10-08 18:43:20.254573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:51.780 [2024-10-08 18:43:20.255364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.780 [2024-10-08 18:43:20.255440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:51.780 [2024-10-08 18:43:20.268869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:51.780 [2024-10-08 18:43:20.269646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.780 [2024-10-08 18:43:20.269738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:51.780 [2024-10-08 18:43:20.283078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:51.780 [2024-10-08 18:43:20.283881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.780 [2024-10-08 18:43:20.283957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:51.780 [2024-10-08 18:43:20.297230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:51.780 [2024-10-08 18:43:20.298020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.780 [2024-10-08 18:43:20.298095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:51.780 [2024-10-08 18:43:20.311422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:51.780 [2024-10-08 18:43:20.312214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.780 [2024-10-08 18:43:20.312291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:52.038 [2024-10-08 18:43:20.325525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.038 [2024-10-08 18:43:20.326335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.038 [2024-10-08 18:43:20.326412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:52.038 [2024-10-08 18:43:20.339631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.038 [2024-10-08 18:43:20.340450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.038 [2024-10-08 18:43:20.340525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:52.038 [2024-10-08 18:43:20.353740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.038 [2024-10-08 18:43:20.354538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.038 [2024-10-08 18:43:20.354614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:52.038 [2024-10-08 18:43:20.368500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.038 [2024-10-08 18:43:20.369274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.038 [2024-10-08 18:43:20.369350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:52.038 [2024-10-08 18:43:20.382587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.038 [2024-10-08 18:43:20.383382] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.038 [2024-10-08 18:43:20.383459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:52.038 [2024-10-08 18:43:20.396576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.038 [2024-10-08 18:43:20.397347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.038 [2024-10-08 18:43:20.397423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:52.038 [2024-10-08 18:43:20.410607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.038 [2024-10-08 18:43:20.411352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.038 [2024-10-08 18:43:20.411428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:52.038 [2024-10-08 18:43:20.424696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.038 [2024-10-08 18:43:20.425501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.038 [2024-10-08 18:43:20.425578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:52.038 [2024-10-08 18:43:20.438786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.038 [2024-10-08 18:43:20.439589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.038 [2024-10-08 18:43:20.439681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:52.038 [2024-10-08 18:43:20.453003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.038 [2024-10-08 18:43:20.453736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.038 [2024-10-08 18:43:20.453812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:52.038 [2024-10-08 18:43:20.467047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.038 [2024-10-08 18:43:20.467851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.038 [2024-10-08 18:43:20.467925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:52.038 [2024-10-08 18:43:20.481128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.038 
[2024-10-08 18:43:20.481961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.038 [2024-10-08 18:43:20.482039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:52.038 [2024-10-08 18:43:20.495327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.038 [2024-10-08 18:43:20.496110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.039 [2024-10-08 18:43:20.496187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:52.039 [2024-10-08 18:43:20.509435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.039 [2024-10-08 18:43:20.510223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.039 [2024-10-08 18:43:20.510300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:52.039 [2024-10-08 18:43:20.523895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.039 [2024-10-08 18:43:20.524681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.039 [2024-10-08 18:43:20.524757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:52.039 [2024-10-08 18:43:20.538263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.039 [2024-10-08 18:43:20.539054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.039 [2024-10-08 18:43:20.539129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:52.039 [2024-10-08 18:43:20.552548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.039 [2024-10-08 18:43:20.553333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.039 [2024-10-08 18:43:20.553409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:52.039 [2024-10-08 18:43:20.567207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.039 [2024-10-08 18:43:20.568011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.039 [2024-10-08 18:43:20.568090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:52.297 [2024-10-08 18:43:20.579747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.297 [2024-10-08 18:43:20.580324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.297 [2024-10-08 18:43:20.580403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:52.297 [2024-10-08 18:43:20.591378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.297 [2024-10-08 18:43:20.591914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.297 [2024-10-08 18:43:20.591983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:52.297 [2024-10-08 18:43:20.603011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.297 [2024-10-08 18:43:20.603691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.297 [2024-10-08 18:43:20.603760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:52.297 [2024-10-08 18:43:20.614791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.297 [2024-10-08 18:43:20.615351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.297 [2024-10-08 18:43:20.615427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:52.297 [2024-10-08 18:43:20.626901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.297 [2024-10-08 18:43:20.627618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.297 [2024-10-08 18:43:20.627724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:52.297 [2024-10-08 18:43:20.638836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.297 [2024-10-08 18:43:20.639500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.297 [2024-10-08 18:43:20.639575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:52.297 [2024-10-08 18:43:20.650192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.297 [2024-10-08 18:43:20.650800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.297 [2024-10-08 18:43:20.650835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:52.297 [2024-10-08 18:43:20.661817] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.297 [2024-10-08 18:43:20.662471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.297 [2024-10-08 18:43:20.662547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:52.297 [2024-10-08 18:43:20.673523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.297 [2024-10-08 18:43:20.674002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.297 [2024-10-08 18:43:20.674078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:52.297 [2024-10-08 18:43:20.684824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.297 [2024-10-08 18:43:20.685384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.297 [2024-10-08 18:43:20.685459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:52.297 [2024-10-08 18:43:20.696415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.297 [2024-10-08 18:43:20.696897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.297 [2024-10-08 18:43:20.696955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:52.297 [2024-10-08 18:43:20.707844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.297 [2024-10-08 18:43:20.708551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.297 [2024-10-08 18:43:20.708625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:52.297 [2024-10-08 18:43:20.720679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.298 [2024-10-08 18:43:20.721406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.298 [2024-10-08 18:43:20.721482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:52.298 [2024-10-08 18:43:20.734675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.298 [2024-10-08 18:43:20.735432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.298 [2024-10-08 18:43:20.735505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
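The digest errors in this second run are driven by the setup traced above: crc32c error injection is disabled while the controller attaches with TCP data digest enabled, then switched to corrupt mode just before the queued bdevperf job is released. Condensed into a standalone sketch, with every RPC name, flag, address and NQN taken verbatim from that trace; the only assumptions are the shell shorthands and that digest.sh's rpc_cmd addresses the NVMe-oF target application on SPDK's default RPC socket while bperf_rpc addresses the bdevperf instance on /var/tmp/bperf.sock:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  BPERF_RPC="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"   # what bperf_rpc expands to in the trace
  TGT_RPC="$SPDK/scripts/rpc.py"                            # rpc_cmd; default socket assumed

  # Keep per-error-code NVMe statistics and retry transient errors indefinitely.
  $BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Leave crc32c generation intact while attaching with data digest (--ddgst) enabled.
  $TGT_RPC accel_error_inject_error -o crc32c -t disable
  $BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Switch the injection to corrupt mode (arguments exactly as in the trace) so that
  # data digest verification fails and each WRITE completes with a transient transport error.
  $TGT_RPC accel_error_inject_error -o crc32c -t corrupt -i 32
  # Release the queued job: randwrite, 131072-byte I/O, queue depth 16, 2 seconds.
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests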
00:32:52.298 [2024-10-08 18:43:20.748743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.298 [2024-10-08 18:43:20.749489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.298 [2024-10-08 18:43:20.749563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:52.298 [2024-10-08 18:43:20.762918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.298 [2024-10-08 18:43:20.763718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.298 [2024-10-08 18:43:20.763792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:52.298 [2024-10-08 18:43:20.777160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.298 [2024-10-08 18:43:20.777956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.298 [2024-10-08 18:43:20.778031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:52.298 [2024-10-08 18:43:20.791500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.298 [2024-10-08 18:43:20.792265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.298 [2024-10-08 18:43:20.792341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:52.298 [2024-10-08 18:43:20.805762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.298 [2024-10-08 18:43:20.806536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.298 [2024-10-08 18:43:20.806623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:52.298 [2024-10-08 18:43:20.820012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.298 [2024-10-08 18:43:20.820808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.298 [2024-10-08 18:43:20.820885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:52.557 [2024-10-08 18:43:20.834587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.557 [2024-10-08 18:43:20.835392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.557 [2024-10-08 18:43:20.835467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:52.557 [2024-10-08 18:43:20.849266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.557 [2024-10-08 18:43:20.850063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.557 [2024-10-08 18:43:20.850138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:52.557 [2024-10-08 18:43:20.863492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.557 [2024-10-08 18:43:20.864267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.557 [2024-10-08 18:43:20.864343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:52.557 [2024-10-08 18:43:20.878050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.557 [2024-10-08 18:43:20.878844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.557 [2024-10-08 18:43:20.878920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:52.557 [2024-10-08 18:43:20.892233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.557 [2024-10-08 18:43:20.893028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.557 [2024-10-08 18:43:20.893104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:52.557 [2024-10-08 18:43:20.906563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.557 [2024-10-08 18:43:20.907350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.557 [2024-10-08 18:43:20.907426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:52.557 2241.00 IOPS, 280.12 MiB/s [2024-10-08T16:43:21.094Z] [2024-10-08 18:43:20.923309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.557 [2024-10-08 18:43:20.924072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.557 [2024-10-08 18:43:20.924149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:52.557 [2024-10-08 18:43:20.937766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.557 [2024-10-08 18:43:20.938580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.557 [2024-10-08 18:43:20.938679] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:52.557 [2024-10-08 18:43:20.951908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.557 [2024-10-08 18:43:20.952734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.557 [2024-10-08 18:43:20.952811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:52.557 [2024-10-08 18:43:20.966031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.557 [2024-10-08 18:43:20.966861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.557 [2024-10-08 18:43:20.966937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:52.557 [2024-10-08 18:43:20.980176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.557 [2024-10-08 18:43:20.980976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.558 [2024-10-08 18:43:20.981050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:52.558 [2024-10-08 18:43:20.994108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.558 [2024-10-08 18:43:20.994893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.558 [2024-10-08 18:43:20.994967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:52.558 [2024-10-08 18:43:21.008309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.558 [2024-10-08 18:43:21.009025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.558 [2024-10-08 18:43:21.009100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:52.558 [2024-10-08 18:43:21.022292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.558 [2024-10-08 18:43:21.023115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.558 [2024-10-08 18:43:21.023191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:52.558 [2024-10-08 18:43:21.036405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.558 [2024-10-08 18:43:21.037208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:52.558 [2024-10-08 18:43:21.037285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:52.558 [2024-10-08 18:43:21.050347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.558 [2024-10-08 18:43:21.051173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.558 [2024-10-08 18:43:21.051250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:52.558 [2024-10-08 18:43:21.064593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.558 [2024-10-08 18:43:21.065394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.558 [2024-10-08 18:43:21.065470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:52.558 [2024-10-08 18:43:21.078740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.558 [2024-10-08 18:43:21.079531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.558 [2024-10-08 18:43:21.079606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:52.558 [2024-10-08 18:43:21.093120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.558 [2024-10-08 18:43:21.093921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.558 [2024-10-08 18:43:21.093999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:52.816 [2024-10-08 18:43:21.107599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.816 [2024-10-08 18:43:21.108407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.816 [2024-10-08 18:43:21.108484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:52.816 [2024-10-08 18:43:21.121553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.816 [2024-10-08 18:43:21.122327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.816 [2024-10-08 18:43:21.122403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:52.816 [2024-10-08 18:43:21.135677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.816 [2024-10-08 18:43:21.136414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.816 [2024-10-08 18:43:21.136488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:52.816 [2024-10-08 18:43:21.149380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.816 [2024-10-08 18:43:21.150130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.816 [2024-10-08 18:43:21.150204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:52.816 [2024-10-08 18:43:21.163195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.816 [2024-10-08 18:43:21.163942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.816 [2024-10-08 18:43:21.164014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:52.816 [2024-10-08 18:43:21.176713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.816 [2024-10-08 18:43:21.177438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.816 [2024-10-08 18:43:21.177524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:52.816 [2024-10-08 18:43:21.190295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.816 [2024-10-08 18:43:21.191043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.816 [2024-10-08 18:43:21.191115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:52.816 [2024-10-08 18:43:21.203753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.816 [2024-10-08 18:43:21.204480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.816 [2024-10-08 18:43:21.204554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:52.816 [2024-10-08 18:43:21.217682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.816 [2024-10-08 18:43:21.218465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.816 [2024-10-08 18:43:21.218538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:52.816 [2024-10-08 18:43:21.231431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.816 [2024-10-08 18:43:21.232214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.816 [2024-10-08 18:43:21.232286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:52.816 [2024-10-08 18:43:21.245094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.816 [2024-10-08 18:43:21.245833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.816 [2024-10-08 18:43:21.245904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:52.816 [2024-10-08 18:43:21.258596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.816 [2024-10-08 18:43:21.259338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.816 [2024-10-08 18:43:21.259410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:52.816 [2024-10-08 18:43:21.272099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.816 [2024-10-08 18:43:21.272837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.816 [2024-10-08 18:43:21.272910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:52.816 [2024-10-08 18:43:21.286051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.816 [2024-10-08 18:43:21.286795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.816 [2024-10-08 18:43:21.286867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:52.816 [2024-10-08 18:43:21.299845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.816 [2024-10-08 18:43:21.300560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.816 [2024-10-08 18:43:21.300632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:52.816 [2024-10-08 18:43:21.313734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.816 [2024-10-08 18:43:21.314464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.816 [2024-10-08 18:43:21.314546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:52.816 [2024-10-08 18:43:21.327346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.816 [2024-10-08 18:43:21.328136] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.816 [2024-10-08 18:43:21.328208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:52.816 [2024-10-08 18:43:21.341279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:52.816 [2024-10-08 18:43:21.342064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.816 [2024-10-08 18:43:21.342135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:53.074 [2024-10-08 18:43:21.355424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:53.074 [2024-10-08 18:43:21.356228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.074 [2024-10-08 18:43:21.356300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:53.074 [2024-10-08 18:43:21.369270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:53.074 [2024-10-08 18:43:21.370071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.074 [2024-10-08 18:43:21.370142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:53.074 [2024-10-08 18:43:21.383153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:53.074 [2024-10-08 18:43:21.383941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.074 [2024-10-08 18:43:21.384012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:53.074 [2024-10-08 18:43:21.397046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:53.074 [2024-10-08 18:43:21.397830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.074 [2024-10-08 18:43:21.397901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:53.074 [2024-10-08 18:43:21.410929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:53.075 [2024-10-08 18:43:21.411731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.075 [2024-10-08 18:43:21.411803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:53.075 [2024-10-08 18:43:21.424911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:53.075 
[2024-10-08 18:43:21.425699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.075 [2024-10-08 18:43:21.425772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:53.075 [2024-10-08 18:43:21.438973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:53.075 [2024-10-08 18:43:21.439800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.075 [2024-10-08 18:43:21.439871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:53.075 [2024-10-08 18:43:21.453086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:53.075 [2024-10-08 18:43:21.453868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.075 [2024-10-08 18:43:21.453941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:53.075 [2024-10-08 18:43:21.467177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:53.075 [2024-10-08 18:43:21.467964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.075 [2024-10-08 18:43:21.468035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:53.075 [2024-10-08 18:43:21.481128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:53.075 [2024-10-08 18:43:21.481921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.075 [2024-10-08 18:43:21.481992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:53.075 [2024-10-08 18:43:21.495137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:53.075 [2024-10-08 18:43:21.495931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.075 [2024-10-08 18:43:21.496000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:53.075 [2024-10-08 18:43:21.508980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:53.075 [2024-10-08 18:43:21.509769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.075 [2024-10-08 18:43:21.509841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:53.075 [2024-10-08 18:43:21.522922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:53.075 [2024-10-08 18:43:21.523702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.075 [2024-10-08 18:43:21.523772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:53.075 [2024-10-08 18:43:21.536806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:53.075 [2024-10-08 18:43:21.537537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.075 [2024-10-08 18:43:21.537620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:53.075 [2024-10-08 18:43:21.550567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:53.075 [2024-10-08 18:43:21.551359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.075 [2024-10-08 18:43:21.551432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:53.075 [2024-10-08 18:43:21.564395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:53.075 [2024-10-08 18:43:21.565177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.075 [2024-10-08 18:43:21.565248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:53.075 [2024-10-08 18:43:21.578434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:53.075 [2024-10-08 18:43:21.579215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.075 [2024-10-08 18:43:21.579286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:53.075 [2024-10-08 18:43:21.592292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:53.075 [2024-10-08 18:43:21.593070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.075 [2024-10-08 18:43:21.593142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:53.075 [2024-10-08 18:43:21.606199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:53.075 [2024-10-08 18:43:21.606958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.075 [2024-10-08 18:43:21.607029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:53.333 [2024-10-08 18:43:21.620261] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:53.333 [2024-10-08 18:43:21.621053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.333 [2024-10-08 18:43:21.621124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:53.333 [2024-10-08 18:43:21.634178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:53.333 [2024-10-08 18:43:21.634968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.333 [2024-10-08 18:43:21.635039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:53.333 [2024-10-08 18:43:21.648039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:53.333 [2024-10-08 18:43:21.648816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.333 [2024-10-08 18:43:21.648887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:53.333 [2024-10-08 18:43:21.661898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:53.333 [2024-10-08 18:43:21.662685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.333 [2024-10-08 18:43:21.662756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:53.333 [2024-10-08 18:43:21.675767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:53.333 [2024-10-08 18:43:21.676536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.333 [2024-10-08 18:43:21.676607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:53.333 [2024-10-08 18:43:21.689783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:53.333 [2024-10-08 18:43:21.690560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.333 [2024-10-08 18:43:21.690630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:53.333 [2024-10-08 18:43:21.703693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:53.333 [2024-10-08 18:43:21.704458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.333 [2024-10-08 18:43:21.704530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
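Once the error-injection loop below finishes, the test verifies that these errors were actually counted: get_transient_errcount (host/digest.sh) issues bdev_get_iostat against the bdevperf RPC socket and extracts the transient-transport-error counter with jq before asserting that it is greater than zero, as can be seen in the trace further down. A minimal standalone sketch of that check, using the socket path, bdev name, and jq filter exactly as they appear in the trace (the rpc.py path is the workspace copy used by this job):

    # Pull per-bdev I/O statistics from the running bdevperf app and extract the
    # count of completions that failed with a transient transport error.
    errcount=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    if (( errcount > 0 )); then
        echo "transient transport errors recorded: $errcount"
    else
        echo "no transient transport errors recorded" >&2
        exit 1
    fi

In this run the extracted value is 144, which satisfies the (( 144 > 0 )) check logged by host/digest.sh@71.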
00:32:53.333 [2024-10-08 18:43:21.717437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:53.333 [2024-10-08 18:43:21.718235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.333 [2024-10-08 18:43:21.718308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:53.333 [2024-10-08 18:43:21.731322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:53.333 [2024-10-08 18:43:21.732038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.333 [2024-10-08 18:43:21.732112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:53.333 [2024-10-08 18:43:21.745090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:53.333 [2024-10-08 18:43:21.745840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.333 [2024-10-08 18:43:21.745912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:53.333 [2024-10-08 18:43:21.758764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:53.333 [2024-10-08 18:43:21.759461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.333 [2024-10-08 18:43:21.759533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:53.333 [2024-10-08 18:43:21.771885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:53.333 [2024-10-08 18:43:21.772613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.333 [2024-10-08 18:43:21.772722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:53.333 [2024-10-08 18:43:21.785583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:53.333 [2024-10-08 18:43:21.786302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.333 [2024-10-08 18:43:21.786375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:53.334 [2024-10-08 18:43:21.799476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:53.334 [2024-10-08 18:43:21.800204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.334 [2024-10-08 18:43:21.800275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:53.334 [2024-10-08 18:43:21.813259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:53.334 [2024-10-08 18:43:21.813978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.334 [2024-10-08 18:43:21.814049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:53.334 [2024-10-08 18:43:21.826985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:53.334 [2024-10-08 18:43:21.827708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.334 [2024-10-08 18:43:21.827780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:53.334 [2024-10-08 18:43:21.840852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:53.334 [2024-10-08 18:43:21.841573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.334 [2024-10-08 18:43:21.841645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:53.334 [2024-10-08 18:43:21.854517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:53.334 [2024-10-08 18:43:21.855223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.334 [2024-10-08 18:43:21.855296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:53.334 [2024-10-08 18:43:21.868421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:53.334 [2024-10-08 18:43:21.869166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.334 [2024-10-08 18:43:21.869238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:53.591 [2024-10-08 18:43:21.882383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:53.591 [2024-10-08 18:43:21.883156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.591 [2024-10-08 18:43:21.883226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:53.591 [2024-10-08 18:43:21.896212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:53.591 [2024-10-08 18:43:21.897014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.591 [2024-10-08 18:43:21.897085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:53.591 [2024-10-08 18:43:21.910219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x197cf20) with pdu=0x2000198fef90 00:32:53.591 [2024-10-08 18:43:21.911004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:53.591 [2024-10-08 18:43:21.911077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:53.591 2235.00 IOPS, 279.38 MiB/s 00:32:53.592 Latency(us) 00:32:53.592 [2024-10-08T16:43:22.129Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:53.592 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:32:53.592 nvme0n1 : 2.01 2233.41 279.18 0.00 0.00 7140.70 3495.25 19029.71 00:32:53.592 [2024-10-08T16:43:22.129Z] =================================================================================================================== 00:32:53.592 [2024-10-08T16:43:22.129Z] Total : 2233.41 279.18 0.00 0.00 7140.70 3495.25 19029.71 00:32:53.592 { 00:32:53.592 "results": [ 00:32:53.592 { 00:32:53.592 "job": "nvme0n1", 00:32:53.592 "core_mask": "0x2", 00:32:53.592 "workload": "randwrite", 00:32:53.592 "status": "finished", 00:32:53.592 "queue_depth": 16, 00:32:53.592 "io_size": 131072, 00:32:53.592 "runtime": 2.010828, 00:32:53.592 "iops": 2233.4083273159117, 00:32:53.592 "mibps": 279.17604091448896, 00:32:53.592 "io_failed": 0, 00:32:53.592 "io_timeout": 0, 00:32:53.592 "avg_latency_us": 7140.695697897853, 00:32:53.592 "min_latency_us": 3495.2533333333336, 00:32:53.592 "max_latency_us": 19029.712592592594 00:32:53.592 } 00:32:53.592 ], 00:32:53.592 "core_count": 1 00:32:53.592 } 00:32:53.592 18:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:32:53.592 18:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:32:53.592 18:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:32:53.592 | .driver_specific 00:32:53.592 | .nvme_error 00:32:53.592 | .status_code 00:32:53.592 | .command_transient_transport_error' 00:32:53.592 18:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:32:53.850 18:43:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 144 > 0 )) 00:32:53.850 18:43:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1334695 00:32:53.850 18:43:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1334695 ']' 00:32:53.850 18:43:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1334695 00:32:53.850 18:43:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:32:53.850 18:43:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:53.850 18:43:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1334695 00:32:53.850 18:43:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:53.850 18:43:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:53.850 18:43:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1334695' 00:32:53.850 killing process with pid 1334695 00:32:53.850 18:43:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1334695 00:32:53.850 Received shutdown signal, test time was about 2.000000 seconds 00:32:53.850 00:32:53.850 Latency(us) 00:32:53.850 [2024-10-08T16:43:22.387Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:53.850 [2024-10-08T16:43:22.387Z] =================================================================================================================== 00:32:53.850 [2024-10-08T16:43:22.387Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:53.850 18:43:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1334695 00:32:54.416 18:43:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1332923 00:32:54.416 18:43:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1332923 ']' 00:32:54.416 18:43:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1332923 00:32:54.416 18:43:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:32:54.416 18:43:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:54.416 18:43:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1332923 00:32:54.416 18:43:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:54.416 18:43:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:54.416 18:43:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1332923' 00:32:54.416 killing process with pid 1332923 00:32:54.416 18:43:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1332923 00:32:54.416 18:43:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1332923 00:32:54.676 00:32:54.676 real 0m20.963s 00:32:54.676 user 0m44.036s 00:32:54.676 sys 0m5.694s 00:32:54.676 18:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:54.676 18:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:54.676 ************************************ 00:32:54.676 END TEST nvmf_digest_error 00:32:54.676 ************************************ 00:32:54.676 18:43:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:32:54.676 18:43:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:32:54.676 18:43:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:54.676 18:43:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:32:54.676 18:43:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:54.676 18:43:23 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:32:54.676 18:43:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:54.676 18:43:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:54.676 rmmod nvme_tcp 00:32:54.676 rmmod nvme_fabrics 00:32:54.676 rmmod nvme_keyring 00:32:54.676 18:43:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:54.936 18:43:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:32:54.936 18:43:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:32:54.936 18:43:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@515 -- # '[' -n 1332923 ']' 00:32:54.936 18:43:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # killprocess 1332923 00:32:54.936 18:43:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 1332923 ']' 00:32:54.936 18:43:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 1332923 00:32:54.936 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1332923) - No such process 00:32:54.936 18:43:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 1332923 is not found' 00:32:54.936 Process with pid 1332923 is not found 00:32:54.936 18:43:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:54.936 18:43:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:54.936 18:43:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:54.936 18:43:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:32:54.936 18:43:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-save 00:32:54.936 18:43:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:54.936 18:43:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-restore 00:32:54.936 18:43:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:54.936 18:43:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:54.936 18:43:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:54.936 18:43:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:54.936 18:43:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:56.844 18:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:56.844 00:32:56.844 real 0m48.292s 00:32:56.844 user 1m30.464s 00:32:56.844 sys 0m13.820s 00:32:56.844 18:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:56.844 18:43:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:56.844 ************************************ 00:32:56.844 END TEST nvmf_digest 00:32:56.844 ************************************ 00:32:56.844 18:43:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:32:56.844 18:43:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:32:56.844 18:43:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:32:56.844 18:43:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:32:56.844 18:43:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:56.844 18:43:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:56.844 18:43:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.844 ************************************ 00:32:56.844 START TEST nvmf_bdevperf 00:32:56.844 ************************************ 00:32:56.844 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:32:57.104 * Looking for test storage... 00:32:57.104 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:57.104 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:57.104 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # lcov --version 00:32:57.104 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:57.104 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:57.104 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:57.104 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:57.104 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:57.104 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:32:57.104 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:32:57.104 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:32:57.104 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:32:57.104 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:32:57.104 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:32:57.104 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:32:57.104 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:57.104 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:32:57.104 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:32:57.104 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:57.104 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:57.104 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:32:57.104 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:32:57.104 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:57.104 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:32:57.104 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:32:57.104 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:32:57.104 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:32:57.104 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:57.104 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:32:57.104 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:32:57.104 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:57.104 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:57.104 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:32:57.104 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:57.104 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:57.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:57.104 --rc genhtml_branch_coverage=1 00:32:57.104 --rc genhtml_function_coverage=1 00:32:57.104 --rc genhtml_legend=1 00:32:57.104 --rc geninfo_all_blocks=1 00:32:57.104 --rc geninfo_unexecuted_blocks=1 00:32:57.104 00:32:57.104 ' 00:32:57.104 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:57.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:57.104 --rc genhtml_branch_coverage=1 00:32:57.104 --rc genhtml_function_coverage=1 00:32:57.104 --rc genhtml_legend=1 00:32:57.104 --rc geninfo_all_blocks=1 00:32:57.104 --rc geninfo_unexecuted_blocks=1 00:32:57.104 00:32:57.104 ' 00:32:57.104 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:57.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:57.104 --rc genhtml_branch_coverage=1 00:32:57.104 --rc genhtml_function_coverage=1 00:32:57.104 --rc genhtml_legend=1 00:32:57.104 --rc geninfo_all_blocks=1 00:32:57.104 --rc geninfo_unexecuted_blocks=1 00:32:57.104 00:32:57.104 ' 00:32:57.104 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:57.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:57.104 --rc genhtml_branch_coverage=1 00:32:57.104 --rc genhtml_function_coverage=1 00:32:57.104 --rc genhtml_legend=1 00:32:57.104 --rc geninfo_all_blocks=1 00:32:57.104 --rc geninfo_unexecuted_blocks=1 00:32:57.104 00:32:57.104 ' 00:32:57.104 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:57.365 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:32:57.365 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:57.365 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:57.365 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:57.365 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:57.365 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:57.365 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:57.365 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:57.365 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:57.365 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:57.365 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:57.365 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:32:57.365 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:32:57.365 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:57.365 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:57.365 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:57.365 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:57.365 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:57.365 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:32:57.365 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:57.365 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:57.365 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:57.365 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.365 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.365 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.365 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:32:57.365 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.365 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:32:57.365 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:57.365 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:57.365 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:57.365 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:57.365 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:57.365 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:57.365 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:57.365 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:57.365 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:57.365 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:57.365 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:57.365 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:57.365 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:32:57.365 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:57.365 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:57.365 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:57.365 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:57.365 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:57.365 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:57.365 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:57.365 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:57.365 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:57.365 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:57.365 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:32:57.365 18:43:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:00.749 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:00.749 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:33:00.749 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:00.749 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:00.749 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:00.749 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:00.749 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:00.749 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:33:00.749 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:00.749 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:33:00.749 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:33:00.749 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:33:00.749 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:33:00.749 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:33:00.749 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:33:00.749 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:00.749 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:00.749 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:00.749 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:00.749 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:00.749 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:00.749 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:00.749 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:00.749 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:00.749 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:00.749 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:00.749 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:00.749 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:00.749 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:33:00.750 Found 0000:84:00.0 (0x8086 - 0x159b) 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:33:00.750 Found 0000:84:00.1 (0x8086 - 0x159b) 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 
00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:33:00.750 Found net devices under 0000:84:00.0: cvl_0_0 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:33:00.750 Found net devices under 0000:84:00.1: cvl_0_1 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # is_hw=yes 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:00.750 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:00.750 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.298 ms 00:33:00.750 00:33:00.750 --- 10.0.0.2 ping statistics --- 00:33:00.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:00.750 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:00.750 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:00.750 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:33:00.750 00:33:00.750 --- 10.0.0.1 ping statistics --- 00:33:00.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:00.750 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # return 0 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=1337324 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 1337324 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 1337324 ']' 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:00.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:00.750 18:43:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:00.750 [2024-10-08 18:43:28.909504] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
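Note: the nvmf_tcp_init block above is ordinary ip/iptables plumbing. One E810 port (cvl_0_0) is moved into a private network namespace to act as the target at 10.0.0.2, while the other port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. Condensed from the xtrace output, and only as a reading aid; the interface names and addresses are whatever this runner assigned, not a general recipe:

# target port lives in its own namespace, initiator port stays in the root namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open TCP/4420 in the host firewall (the ipts wrapper also tags the rule with an SPDK_NVMF comment)
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# connectivity checks, as logged above
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1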
00:33:00.750 [2024-10-08 18:43:28.909603] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:00.750 [2024-10-08 18:43:29.031538] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:00.750 [2024-10-08 18:43:29.260150] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:00.750 [2024-10-08 18:43:29.260270] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:00.750 [2024-10-08 18:43:29.260306] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:00.750 [2024-10-08 18:43:29.260336] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:00.750 [2024-10-08 18:43:29.260361] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:00.750 [2024-10-08 18:43:29.264697] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:33:00.750 [2024-10-08 18:43:29.264849] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:33:00.750 [2024-10-08 18:43:29.264855] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:33:01.007 18:43:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:01.007 18:43:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:33:01.007 18:43:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:33:01.007 18:43:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:01.007 18:43:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:01.008 18:43:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:01.008 18:43:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:01.008 18:43:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:01.008 18:43:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:01.008 [2024-10-08 18:43:29.430067] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:01.008 18:43:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:01.008 18:43:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:01.008 18:43:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:01.008 18:43:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:01.008 Malloc0 00:33:01.008 18:43:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:01.008 18:43:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:01.008 18:43:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:01.008 18:43:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:01.008 18:43:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:33:01.008 18:43:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:01.008 18:43:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:01.008 18:43:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:01.008 18:43:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:01.008 18:43:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:01.008 18:43:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:01.008 18:43:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:01.008 [2024-10-08 18:43:29.493491] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:01.008 18:43:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:01.008 18:43:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:33:01.008 18:43:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:33:01.008 18:43:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=() 00:33:01.008 18:43:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config 00:33:01.008 18:43:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:33:01.008 18:43:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:33:01.008 { 00:33:01.008 "params": { 00:33:01.008 "name": "Nvme$subsystem", 00:33:01.008 "trtype": "$TEST_TRANSPORT", 00:33:01.008 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:01.008 "adrfam": "ipv4", 00:33:01.008 "trsvcid": "$NVMF_PORT", 00:33:01.008 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:01.008 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:01.008 "hdgst": ${hdgst:-false}, 00:33:01.008 "ddgst": ${ddgst:-false} 00:33:01.008 }, 00:33:01.008 "method": "bdev_nvme_attach_controller" 00:33:01.008 } 00:33:01.008 EOF 00:33:01.008 )") 00:33:01.008 18:43:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat 00:33:01.008 18:43:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq . 00:33:01.008 18:43:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=, 00:33:01.008 18:43:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:33:01.008 "params": { 00:33:01.008 "name": "Nvme1", 00:33:01.008 "trtype": "tcp", 00:33:01.008 "traddr": "10.0.0.2", 00:33:01.008 "adrfam": "ipv4", 00:33:01.008 "trsvcid": "4420", 00:33:01.008 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:01.008 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:01.008 "hdgst": false, 00:33:01.008 "ddgst": false 00:33:01.008 }, 00:33:01.008 "method": "bdev_nvme_attach_controller" 00:33:01.008 }' 00:33:01.265 [2024-10-08 18:43:29.546578] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
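Note: everything tgt_init does to the target happens over JSON-RPC (the rpc_cmd lines above). Run by hand against the same application, the sequence would look roughly like the sketch below; scripts/rpc.py from the SPDK tree and the default /var/tmp/spdk.sock socket are assumed, and the commands and flag values are copied from the trace:

RPC="./scripts/rpc.py"   # talks to the nvmf_tgt started above over its Unix-domain RPC socket
# TCP transport, with the same options the test passes (-o -u 8192)
$RPC nvmf_create_transport -t tcp -o -u 8192
# 64 MB malloc bdev with 512-byte blocks (MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE)
$RPC bdev_malloc_create 64 512 -b Malloc0
# subsystem that allows any host (-a) with a fixed serial number
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
# attach the bdev as a namespace of that subsystem
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
# listen on the namespaced target address set up earlier
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420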
00:33:01.265 [2024-10-08 18:43:29.546680] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1337464 ] 00:33:01.265 [2024-10-08 18:43:29.615312] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:01.265 [2024-10-08 18:43:29.732812] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:33:01.829 Running I/O for 1 seconds... 00:33:02.764 8516.00 IOPS, 33.27 MiB/s 00:33:02.764 Latency(us) 00:33:02.764 [2024-10-08T16:43:31.301Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:02.764 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:02.764 Verification LBA range: start 0x0 length 0x4000 00:33:02.764 Nvme1n1 : 1.05 8234.19 32.16 0.00 0.00 14914.49 3252.53 45438.29 00:33:02.764 [2024-10-08T16:43:31.301Z] =================================================================================================================== 00:33:02.764 [2024-10-08T16:43:31.301Z] Total : 8234.19 32.16 0.00 0.00 14914.49 3252.53 45438.29 00:33:03.023 18:43:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1337727 00:33:03.023 18:43:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:33:03.023 18:43:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:33:03.023 18:43:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:33:03.023 18:43:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=() 00:33:03.023 18:43:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config 00:33:03.023 18:43:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:33:03.023 18:43:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:33:03.023 { 00:33:03.023 "params": { 00:33:03.023 "name": "Nvme$subsystem", 00:33:03.023 "trtype": "$TEST_TRANSPORT", 00:33:03.023 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:03.023 "adrfam": "ipv4", 00:33:03.023 "trsvcid": "$NVMF_PORT", 00:33:03.023 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:03.023 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:03.023 "hdgst": ${hdgst:-false}, 00:33:03.023 "ddgst": ${ddgst:-false} 00:33:03.023 }, 00:33:03.023 "method": "bdev_nvme_attach_controller" 00:33:03.023 } 00:33:03.023 EOF 00:33:03.023 )") 00:33:03.023 18:43:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat 00:33:03.023 18:43:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq . 
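Note: the MiB/s column in the bdevperf summary is simply IOPS multiplied by the 4 KiB I/O size selected with -o 4096. As a quick check against the 1-second verify run:

awk 'BEGIN { printf "%.2f MiB/s\n", 8234.19 * 4096 / 1048576 }'   # prints 32.16, matching the Total row for Nvme1n1

The second bdevperf instance launched right after (bdevperfpid=1337727 in the trace, -t 15) repeats the same 4 KiB verify workload for 15 seconds; the JSON config it is fed, printed below, is the same Nvme1 attach-controller stanza used for the first run.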
00:33:03.023 18:43:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=, 00:33:03.023 18:43:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:33:03.023 "params": { 00:33:03.023 "name": "Nvme1", 00:33:03.023 "trtype": "tcp", 00:33:03.023 "traddr": "10.0.0.2", 00:33:03.023 "adrfam": "ipv4", 00:33:03.023 "trsvcid": "4420", 00:33:03.023 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:03.023 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:03.023 "hdgst": false, 00:33:03.023 "ddgst": false 00:33:03.023 }, 00:33:03.023 "method": "bdev_nvme_attach_controller" 00:33:03.023 }' 00:33:03.023 [2024-10-08 18:43:31.474805] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:33:03.023 [2024-10-08 18:43:31.474923] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1337727 ] 00:33:03.282 [2024-10-08 18:43:31.578433] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:03.282 [2024-10-08 18:43:31.690233] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:33:03.540 Running I/O for 15 seconds... 00:33:05.407 8701.00 IOPS, 33.99 MiB/s [2024-10-08T16:43:34.513Z] 8758.00 IOPS, 34.21 MiB/s [2024-10-08T16:43:34.513Z] 18:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1337324 00:33:05.976 18:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:33:05.976 [2024-10-08 18:43:34.435762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:47352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.976 [2024-10-08 18:43:34.435817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.976 [2024-10-08 18:43:34.435851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:47360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.976 [2024-10-08 18:43:34.435879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.976 [2024-10-08 18:43:34.435899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:47368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.976 [2024-10-08 18:43:34.435916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.976 [2024-10-08 18:43:34.435934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:47376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.976 [2024-10-08 18:43:34.435950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.976 [2024-10-08 18:43:34.436002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:47384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.976 [2024-10-08 18:43:34.436041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.976 [2024-10-08 18:43:34.436085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:47392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.976 [2024-10-08 
18:43:34.436124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.976 [2024-10-08 18:43:34.436168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:47400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.976 [2024-10-08 18:43:34.436205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.976 [2024-10-08 18:43:34.436244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:47408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.976 [2024-10-08 18:43:34.436285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.976 [2024-10-08 18:43:34.436328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:47416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.976 [2024-10-08 18:43:34.436366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.976 [2024-10-08 18:43:34.436407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:47424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.976 [2024-10-08 18:43:34.436443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.976 [2024-10-08 18:43:34.436485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:47432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.976 [2024-10-08 18:43:34.436522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.976 [2024-10-08 18:43:34.436566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:47440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.976 [2024-10-08 18:43:34.436605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.976 [2024-10-08 18:43:34.436647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:47448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.976 [2024-10-08 18:43:34.436858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.976 [2024-10-08 18:43:34.436882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:47456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.976 [2024-10-08 18:43:34.436900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.976 [2024-10-08 18:43:34.436921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:47464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.976 [2024-10-08 18:43:34.436939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.976 [2024-10-08 18:43:34.436956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:47472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.976 [2024-10-08 18:43:34.436970] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.976 [2024-10-08 18:43:34.436986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:47480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.976 [2024-10-08 18:43:34.437015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.976 [2024-10-08 18:43:34.437031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:47488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.976 [2024-10-08 18:43:34.437045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.976 [2024-10-08 18:43:34.437060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:47496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.976 [2024-10-08 18:43:34.437106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.976 [2024-10-08 18:43:34.437147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:47504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.976 [2024-10-08 18:43:34.437182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.976 [2024-10-08 18:43:34.437220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:47512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.976 [2024-10-08 18:43:34.437255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.976 [2024-10-08 18:43:34.437293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:47520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.977 [2024-10-08 18:43:34.437328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.977 [2024-10-08 18:43:34.437367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:47528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.977 [2024-10-08 18:43:34.437402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.977 [2024-10-08 18:43:34.437440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:47536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.977 [2024-10-08 18:43:34.437474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.977 [2024-10-08 18:43:34.437512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:47544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.977 [2024-10-08 18:43:34.437547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.977 [2024-10-08 18:43:34.437585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:47552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.977 [2024-10-08 18:43:34.437632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.977 [2024-10-08 18:43:34.437706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:47560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.977 [2024-10-08 18:43:34.437723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.977 [2024-10-08 18:43:34.437739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:47568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.977 [2024-10-08 18:43:34.437754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.977 [2024-10-08 18:43:34.437769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:47576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.977 [2024-10-08 18:43:34.437784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.977 [2024-10-08 18:43:34.437799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:47584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.977 [2024-10-08 18:43:34.437813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.977 [2024-10-08 18:43:34.437829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:47592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.977 [2024-10-08 18:43:34.437843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.977 [2024-10-08 18:43:34.437859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:47600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.977 [2024-10-08 18:43:34.437873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.977 [2024-10-08 18:43:34.437888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:47608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.977 [2024-10-08 18:43:34.437903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.977 [2024-10-08 18:43:34.437919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:47616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.977 [2024-10-08 18:43:34.437933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.977 [2024-10-08 18:43:34.437975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:47624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.977 [2024-10-08 18:43:34.438012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.977 [2024-10-08 18:43:34.438051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:47632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.977 [2024-10-08 18:43:34.438086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:33:05.977 [2024-10-08 18:43:34.438125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:47640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.977 [2024-10-08 18:43:34.438160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.977 [2024-10-08 18:43:34.438199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:47648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.977 [2024-10-08 18:43:34.438235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.977 [2024-10-08 18:43:34.438283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:47656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.977 [2024-10-08 18:43:34.438320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.977 [2024-10-08 18:43:34.438358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:47664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.977 [2024-10-08 18:43:34.438394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.977 [2024-10-08 18:43:34.438432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:47672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.977 [2024-10-08 18:43:34.438466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.977 [2024-10-08 18:43:34.438504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:47680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.977 [2024-10-08 18:43:34.438540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.977 [2024-10-08 18:43:34.438579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:47688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.977 [2024-10-08 18:43:34.438615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.977 [2024-10-08 18:43:34.438670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:47696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.977 [2024-10-08 18:43:34.438715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.977 [2024-10-08 18:43:34.438731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:47704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.977 [2024-10-08 18:43:34.438746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.977 [2024-10-08 18:43:34.438761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:47712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.977 [2024-10-08 18:43:34.438775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.977 [2024-10-08 18:43:34.438790] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:47720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.977 [2024-10-08 18:43:34.438804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.977 [2024-10-08 18:43:34.438819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:47728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.977 [2024-10-08 18:43:34.438834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.977 [2024-10-08 18:43:34.438849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:47736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.977 [2024-10-08 18:43:34.438863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.977 [2024-10-08 18:43:34.438880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:47744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.977 [2024-10-08 18:43:34.438894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.977 [2024-10-08 18:43:34.438910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:47752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.977 [2024-10-08 18:43:34.438928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.977 [2024-10-08 18:43:34.438945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:47760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.977 [2024-10-08 18:43:34.438990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.977 [2024-10-08 18:43:34.439030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:47768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.977 [2024-10-08 18:43:34.439065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.977 [2024-10-08 18:43:34.439104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:47776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.977 [2024-10-08 18:43:34.439139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.977 [2024-10-08 18:43:34.439177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:47784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.977 [2024-10-08 18:43:34.439211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.977 [2024-10-08 18:43:34.439250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:47792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.977 [2024-10-08 18:43:34.439284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.977 [2024-10-08 18:43:34.439323] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:47800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.977 [2024-10-08 18:43:34.439358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.977 [2024-10-08 18:43:34.439396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:47808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.977 [2024-10-08 18:43:34.439431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.977 [2024-10-08 18:43:34.439471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:47816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.977 [2024-10-08 18:43:34.439506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.977 [2024-10-08 18:43:34.439544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:47824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.978 [2024-10-08 18:43:34.439579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.978 [2024-10-08 18:43:34.439617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:47832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.978 [2024-10-08 18:43:34.439916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.978 [2024-10-08 18:43:34.439942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:47840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.978 [2024-10-08 18:43:34.439983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.978 [2024-10-08 18:43:34.440022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:47848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.978 [2024-10-08 18:43:34.440057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.978 [2024-10-08 18:43:34.440096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:47856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.978 [2024-10-08 18:43:34.440141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.978 [2024-10-08 18:43:34.440180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:47864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.978 [2024-10-08 18:43:34.440215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.978 [2024-10-08 18:43:34.440253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:47872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.978 [2024-10-08 18:43:34.440287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.978 [2024-10-08 18:43:34.440326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:47880 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.978 [2024-10-08 18:43:34.440361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.978 [2024-10-08 18:43:34.440399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:47888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.978 [2024-10-08 18:43:34.440433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.978 [2024-10-08 18:43:34.440471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:47896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.978 [2024-10-08 18:43:34.440505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.978 [2024-10-08 18:43:34.440543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:47904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.978 [2024-10-08 18:43:34.440577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.978 [2024-10-08 18:43:34.440614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:47912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.978 [2024-10-08 18:43:34.440667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.978 [2024-10-08 18:43:34.440712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:47920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.978 [2024-10-08 18:43:34.440748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.978 [2024-10-08 18:43:34.440786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:47928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.978 [2024-10-08 18:43:34.440820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.978 [2024-10-08 18:43:34.440858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:47936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.978 [2024-10-08 18:43:34.440894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.978 [2024-10-08 18:43:34.440934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:47944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.978 [2024-10-08 18:43:34.440968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.978 [2024-10-08 18:43:34.441005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:47952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.978 [2024-10-08 18:43:34.441039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.978 [2024-10-08 18:43:34.441088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:47960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.978 
[2024-10-08 18:43:34.441124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.978 [2024-10-08 18:43:34.441161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:47968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.978 [2024-10-08 18:43:34.441195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.978 [2024-10-08 18:43:34.441233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:47976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.978 [2024-10-08 18:43:34.441266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.978 [2024-10-08 18:43:34.441304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.978 [2024-10-08 18:43:34.441339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.978 [2024-10-08 18:43:34.441377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:47992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.978 [2024-10-08 18:43:34.441411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.978 [2024-10-08 18:43:34.441449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:48000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.978 [2024-10-08 18:43:34.441483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.978 [2024-10-08 18:43:34.441520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:48008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.978 [2024-10-08 18:43:34.441554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.978 [2024-10-08 18:43:34.441592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:48016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.978 [2024-10-08 18:43:34.441625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.978 [2024-10-08 18:43:34.441681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:48024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.978 [2024-10-08 18:43:34.441720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.978 [2024-10-08 18:43:34.441759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:48032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.978 [2024-10-08 18:43:34.441794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.978 [2024-10-08 18:43:34.441831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:48040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.978 [2024-10-08 18:43:34.441866] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.978 [2024-10-08 18:43:34.441904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:48048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.978 [2024-10-08 18:43:34.441938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.978 [2024-10-08 18:43:34.441976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:48056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.978 [2024-10-08 18:43:34.442019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.978 [2024-10-08 18:43:34.442059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:48064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.978 [2024-10-08 18:43:34.442114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.978 [2024-10-08 18:43:34.442154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:48072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.978 [2024-10-08 18:43:34.442189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.978 [2024-10-08 18:43:34.442226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:48080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.978 [2024-10-08 18:43:34.442261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.978 [2024-10-08 18:43:34.442299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:48088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.978 [2024-10-08 18:43:34.442334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.978 [2024-10-08 18:43:34.442371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:48096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.978 [2024-10-08 18:43:34.442405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.978 [2024-10-08 18:43:34.442444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:48104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.978 [2024-10-08 18:43:34.442479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.978 [2024-10-08 18:43:34.442516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:48112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.978 [2024-10-08 18:43:34.442550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.978 [2024-10-08 18:43:34.442588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:48120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.978 [2024-10-08 18:43:34.442622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.978 [2024-10-08 18:43:34.442677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:48128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.978 [2024-10-08 18:43:34.442716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.978 [2024-10-08 18:43:34.442754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:48136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.978 [2024-10-08 18:43:34.442788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.978 [2024-10-08 18:43:34.442826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:48144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.978 [2024-10-08 18:43:34.442862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.979 [2024-10-08 18:43:34.442900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:48152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.979 [2024-10-08 18:43:34.442934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.979 [2024-10-08 18:43:34.442989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:48160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.979 [2024-10-08 18:43:34.443026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.979 [2024-10-08 18:43:34.443064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:48168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.979 [2024-10-08 18:43:34.443099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.979 [2024-10-08 18:43:34.443137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:48176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.979 [2024-10-08 18:43:34.443171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.979 [2024-10-08 18:43:34.443208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:48184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.979 [2024-10-08 18:43:34.443242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.979 [2024-10-08 18:43:34.443281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:48192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.979 [2024-10-08 18:43:34.443326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.979 [2024-10-08 18:43:34.443367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:48200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.979 [2024-10-08 18:43:34.443401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:33:05.979 [2024-10-08 18:43:34.443440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:48208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.979 [2024-10-08 18:43:34.443475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.979 [2024-10-08 18:43:34.443513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:48216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.979 [2024-10-08 18:43:34.443547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.979 [2024-10-08 18:43:34.443585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:48224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.979 [2024-10-08 18:43:34.443619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.979 [2024-10-08 18:43:34.443672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:48232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.979 [2024-10-08 18:43:34.443712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.979 [2024-10-08 18:43:34.443751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:47224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.979 [2024-10-08 18:43:34.443785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.979 [2024-10-08 18:43:34.443822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:47232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.979 [2024-10-08 18:43:34.443856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.979 [2024-10-08 18:43:34.443895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:47240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.979 [2024-10-08 18:43:34.443928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.979 [2024-10-08 18:43:34.443975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:47248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.979 [2024-10-08 18:43:34.444011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.979 [2024-10-08 18:43:34.444049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:47256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.979 [2024-10-08 18:43:34.444083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.979 [2024-10-08 18:43:34.444121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:47264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.979 [2024-10-08 18:43:34.444154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.979 [2024-10-08 
18:43:34.444192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:47272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.979 [2024-10-08 18:43:34.444226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.979 [2024-10-08 18:43:34.444264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:47280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.979 [2024-10-08 18:43:34.444298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.979 [2024-10-08 18:43:34.444336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:47288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.979 [2024-10-08 18:43:34.444369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.979 [2024-10-08 18:43:34.444407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:47296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.979 [2024-10-08 18:43:34.444441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.979 [2024-10-08 18:43:34.444478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:47304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.979 [2024-10-08 18:43:34.444522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.979 [2024-10-08 18:43:34.444562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:47312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.979 [2024-10-08 18:43:34.444595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.979 [2024-10-08 18:43:34.444667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:47320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.979 [2024-10-08 18:43:34.444709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.979 [2024-10-08 18:43:34.444747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:47328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.979 [2024-10-08 18:43:34.444782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.979 [2024-10-08 18:43:34.444819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:47336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.979 [2024-10-08 18:43:34.444854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.979 [2024-10-08 18:43:34.444892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:48240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.979 [2024-10-08 18:43:34.444937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.979 [2024-10-08 18:43:34.444974] nvme_tcp.c: 
337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2510d30 is same with the state(6) to be set 00:33:05.979 [2024-10-08 18:43:34.445014] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:05.979 [2024-10-08 18:43:34.445041] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:05.979 [2024-10-08 18:43:34.445070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:47344 len:8 PRP1 0x0 PRP2 0x0 00:33:05.979 [2024-10-08 18:43:34.445102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.979 [2024-10-08 18:43:34.445228] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2510d30 was disconnected and freed. reset controller. 00:33:05.979 [2024-10-08 18:43:34.445387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:05.979 [2024-10-08 18:43:34.445448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.979 [2024-10-08 18:43:34.445485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:05.979 [2024-10-08 18:43:34.445519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.979 [2024-10-08 18:43:34.445553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:05.979 [2024-10-08 18:43:34.445585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.979 [2024-10-08 18:43:34.445618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:05.979 [2024-10-08 18:43:34.445683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:05.979 [2024-10-08 18:43:34.445735] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:05.979 [2024-10-08 18:43:34.453733] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:05.979 [2024-10-08 18:43:34.453830] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:05.979 [2024-10-08 18:43:34.455197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.979 [2024-10-08 18:43:34.455273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:05.979 [2024-10-08 18:43:34.455315] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:05.979 [2024-10-08 18:43:34.455877] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:05.979 [2024-10-08 18:43:34.456425] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:05.979 [2024-10-08 18:43:34.456478] 
nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:05.979 [2024-10-08 18:43:34.456515] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:05.979 [2024-10-08 18:43:34.464584] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:05.979 [2024-10-08 18:43:34.472935] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:05.979 [2024-10-08 18:43:34.473763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.980 [2024-10-08 18:43:34.473849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:05.980 [2024-10-08 18:43:34.473891] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:05.980 [2024-10-08 18:43:34.474424] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:05.980 [2024-10-08 18:43:34.474996] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:05.980 [2024-10-08 18:43:34.475050] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:05.980 [2024-10-08 18:43:34.475084] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:05.980 [2024-10-08 18:43:34.483121] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:05.980 [2024-10-08 18:43:34.491708] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:05.980 [2024-10-08 18:43:34.492487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.980 [2024-10-08 18:43:34.492557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:05.980 [2024-10-08 18:43:34.492597] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:05.980 [2024-10-08 18:43:34.493180] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:05.980 [2024-10-08 18:43:34.493751] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:05.980 [2024-10-08 18:43:34.493806] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:05.980 [2024-10-08 18:43:34.493840] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:05.980 [2024-10-08 18:43:34.501873] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:05.980 [2024-10-08 18:43:34.510603] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:06.256 [2024-10-08 18:43:34.511450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.256 [2024-10-08 18:43:34.511522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:06.256 [2024-10-08 18:43:34.511562] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:06.256 [2024-10-08 18:43:34.512158] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:06.256 [2024-10-08 18:43:34.512766] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:06.256 [2024-10-08 18:43:34.512830] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:06.256 [2024-10-08 18:43:34.512866] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:06.256 [2024-10-08 18:43:34.521041] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:06.256 [2024-10-08 18:43:34.529583] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:06.256 [2024-10-08 18:43:34.530379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.256 [2024-10-08 18:43:34.530454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:06.256 [2024-10-08 18:43:34.530495] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:06.256 [2024-10-08 18:43:34.531056] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:06.256 [2024-10-08 18:43:34.531613] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:06.256 [2024-10-08 18:43:34.531680] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:06.256 [2024-10-08 18:43:34.531718] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:06.256 [2024-10-08 18:43:34.539765] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:06.256 [2024-10-08 18:43:34.548300] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:06.256 [2024-10-08 18:43:34.549121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.256 [2024-10-08 18:43:34.549192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:06.256 [2024-10-08 18:43:34.549233] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:06.256 [2024-10-08 18:43:34.549793] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:06.256 [2024-10-08 18:43:34.550334] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:06.256 [2024-10-08 18:43:34.550384] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:06.256 [2024-10-08 18:43:34.550417] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:06.256 [2024-10-08 18:43:34.558462] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:06.256 [2024-10-08 18:43:34.567029] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:06.256 [2024-10-08 18:43:34.567779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.256 [2024-10-08 18:43:34.567851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:06.256 [2024-10-08 18:43:34.567891] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:06.256 [2024-10-08 18:43:34.568422] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:06.256 [2024-10-08 18:43:34.568988] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:06.256 [2024-10-08 18:43:34.569042] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:06.256 [2024-10-08 18:43:34.569075] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:06.256 [2024-10-08 18:43:34.577129] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:06.256 [2024-10-08 18:43:34.586188] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:06.256 [2024-10-08 18:43:34.587006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.256 [2024-10-08 18:43:34.587076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:06.256 [2024-10-08 18:43:34.587116] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:06.256 [2024-10-08 18:43:34.587648] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:06.256 [2024-10-08 18:43:34.588218] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:06.256 [2024-10-08 18:43:34.588270] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:06.257 [2024-10-08 18:43:34.588304] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:06.257 [2024-10-08 18:43:34.596415] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:06.257 [2024-10-08 18:43:34.604978] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:06.257 [2024-10-08 18:43:34.605783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.257 [2024-10-08 18:43:34.605854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:06.257 [2024-10-08 18:43:34.605894] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:06.257 [2024-10-08 18:43:34.606426] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:06.257 [2024-10-08 18:43:34.606991] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:06.257 [2024-10-08 18:43:34.607045] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:06.257 [2024-10-08 18:43:34.607078] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:06.257 [2024-10-08 18:43:34.615226] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:06.257 [2024-10-08 18:43:34.623805] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:06.257 [2024-10-08 18:43:34.624610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.257 [2024-10-08 18:43:34.624700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:06.257 [2024-10-08 18:43:34.624744] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:06.257 [2024-10-08 18:43:34.625279] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:06.257 [2024-10-08 18:43:34.625845] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:06.257 [2024-10-08 18:43:34.625898] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:06.257 [2024-10-08 18:43:34.625931] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:06.257 [2024-10-08 18:43:34.633971] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:06.257 [2024-10-08 18:43:34.642522] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:06.257 [2024-10-08 18:43:34.643346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.257 [2024-10-08 18:43:34.643417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:06.257 [2024-10-08 18:43:34.643457] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:06.257 [2024-10-08 18:43:34.644016] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:06.257 [2024-10-08 18:43:34.644562] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:06.257 [2024-10-08 18:43:34.644614] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:06.257 [2024-10-08 18:43:34.644647] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:06.257 [2024-10-08 18:43:34.652712] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:06.257 [2024-10-08 18:43:34.661252] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:06.257 [2024-10-08 18:43:34.662066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.257 [2024-10-08 18:43:34.662138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:06.257 [2024-10-08 18:43:34.662191] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:06.257 [2024-10-08 18:43:34.662751] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:06.257 [2024-10-08 18:43:34.663296] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:06.257 [2024-10-08 18:43:34.663348] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:06.257 [2024-10-08 18:43:34.663382] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:06.257 [2024-10-08 18:43:34.671421] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:06.257 [2024-10-08 18:43:34.679981] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:06.257 [2024-10-08 18:43:34.680806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.257 [2024-10-08 18:43:34.680879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:06.257 [2024-10-08 18:43:34.680919] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:06.257 [2024-10-08 18:43:34.681452] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:06.257 [2024-10-08 18:43:34.682015] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:06.257 [2024-10-08 18:43:34.682068] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:06.257 [2024-10-08 18:43:34.682102] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:06.257 [2024-10-08 18:43:34.690159] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:06.257 [2024-10-08 18:43:34.698781] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:06.257 [2024-10-08 18:43:34.699538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.257 [2024-10-08 18:43:34.699609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:06.257 [2024-10-08 18:43:34.699668] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:06.257 [2024-10-08 18:43:34.700206] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:06.257 [2024-10-08 18:43:34.700770] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:06.257 [2024-10-08 18:43:34.700824] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:06.257 [2024-10-08 18:43:34.700859] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:06.257 [2024-10-08 18:43:34.708912] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:06.257 [2024-10-08 18:43:34.717475] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:06.257 [2024-10-08 18:43:34.718270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.257 [2024-10-08 18:43:34.718340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:06.257 [2024-10-08 18:43:34.718381] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:06.257 [2024-10-08 18:43:34.718937] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:06.257 [2024-10-08 18:43:34.719480] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:06.257 [2024-10-08 18:43:34.719544] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:06.257 [2024-10-08 18:43:34.719581] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:06.257 [2024-10-08 18:43:34.727642] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:06.257 [2024-10-08 18:43:34.736233] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:06.257 [2024-10-08 18:43:34.736973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.257 [2024-10-08 18:43:34.737044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:06.257 [2024-10-08 18:43:34.737084] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:06.257 [2024-10-08 18:43:34.737617] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:06.257 [2024-10-08 18:43:34.738181] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:06.257 [2024-10-08 18:43:34.738234] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:06.257 [2024-10-08 18:43:34.738268] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:06.257 [2024-10-08 18:43:34.743722] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:06.257 [2024-10-08 18:43:34.750681] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:06.257 [2024-10-08 18:43:34.751448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.257 [2024-10-08 18:43:34.751518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:06.257 [2024-10-08 18:43:34.751558] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:06.257 [2024-10-08 18:43:34.752111] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:06.257 [2024-10-08 18:43:34.752673] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:06.257 [2024-10-08 18:43:34.752726] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:06.257 [2024-10-08 18:43:34.752760] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:06.257 [2024-10-08 18:43:34.760822] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:06.257 [2024-10-08 18:43:34.769377] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:06.257 [2024-10-08 18:43:34.770203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.257 [2024-10-08 18:43:34.770273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:06.257 [2024-10-08 18:43:34.770313] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:06.257 [2024-10-08 18:43:34.770627] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:06.257 [2024-10-08 18:43:34.770876] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:06.257 [2024-10-08 18:43:34.770899] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:06.257 [2024-10-08 18:43:34.770915] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:06.258 [2024-10-08 18:43:34.775261] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:06.258 [2024-10-08 18:43:34.784345] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:06.258 [2024-10-08 18:43:34.784761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.258 [2024-10-08 18:43:34.784794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:06.258 [2024-10-08 18:43:34.784812] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:06.258 [2024-10-08 18:43:34.785050] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:06.258 [2024-10-08 18:43:34.785290] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:06.258 [2024-10-08 18:43:34.785313] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:06.258 [2024-10-08 18:43:34.785328] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:06.258 [2024-10-08 18:43:34.790891] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:06.518 [2024-10-08 18:43:34.800242] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:06.518 [2024-10-08 18:43:34.800716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.518 [2024-10-08 18:43:34.800749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:06.518 [2024-10-08 18:43:34.800767] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:06.518 [2024-10-08 18:43:34.801139] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:06.518 [2024-10-08 18:43:34.801704] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:06.518 [2024-10-08 18:43:34.801727] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:06.518 [2024-10-08 18:43:34.801742] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:06.519 [2024-10-08 18:43:34.806376] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:06.519 [2024-10-08 18:43:34.815120] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:06.519 [2024-10-08 18:43:34.815586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.519 [2024-10-08 18:43:34.815624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:06.519 [2024-10-08 18:43:34.815646] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:06.519 [2024-10-08 18:43:34.815919] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:06.519 [2024-10-08 18:43:34.816159] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:06.519 [2024-10-08 18:43:34.816182] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:06.519 [2024-10-08 18:43:34.816197] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:06.519 [2024-10-08 18:43:34.822782] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:06.519 [2024-10-08 18:43:34.831835] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:06.519 [2024-10-08 18:43:34.832243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.519 [2024-10-08 18:43:34.832274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:06.519 [2024-10-08 18:43:34.832292] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:06.519 [2024-10-08 18:43:34.832648] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:06.519 [2024-10-08 18:43:34.833040] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:06.519 [2024-10-08 18:43:34.833093] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:06.519 [2024-10-08 18:43:34.833127] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:06.519 [2024-10-08 18:43:34.837836] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:06.519 [2024-10-08 18:43:34.848800] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:06.519 [2024-10-08 18:43:34.849408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.519 [2024-10-08 18:43:34.849478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:06.519 [2024-10-08 18:43:34.849518] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:06.519 [2024-10-08 18:43:34.849884] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:06.519 [2024-10-08 18:43:34.850405] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:06.519 [2024-10-08 18:43:34.850458] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:06.519 [2024-10-08 18:43:34.850493] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:06.519 [2024-10-08 18:43:34.856851] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:06.519 [2024-10-08 18:43:34.866162] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:06.519 [2024-10-08 18:43:34.866888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.519 [2024-10-08 18:43:34.866943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:06.519 [2024-10-08 18:43:34.866985] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:06.519 [2024-10-08 18:43:34.867518] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:06.519 [2024-10-08 18:43:34.867879] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:06.519 [2024-10-08 18:43:34.867903] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:06.519 [2024-10-08 18:43:34.867956] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:06.519 [2024-10-08 18:43:34.874717] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:06.519 [2024-10-08 18:43:34.883520] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:06.519 [2024-10-08 18:43:34.884145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.519 [2024-10-08 18:43:34.884216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:06.519 [2024-10-08 18:43:34.884257] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:06.519 [2024-10-08 18:43:34.884756] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:06.519 [2024-10-08 18:43:34.885161] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:06.519 [2024-10-08 18:43:34.885213] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:06.519 [2024-10-08 18:43:34.885261] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:06.519 [2024-10-08 18:43:34.892729] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:06.519 7429.00 IOPS, 29.02 MiB/s [2024-10-08T16:43:35.056Z] [2024-10-08 18:43:34.904314] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:33:06.519 [2024-10-08 18:43:34.906235] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:06.519 [2024-10-08 18:43:34.907041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.519 [2024-10-08 18:43:34.907114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:06.519 [2024-10-08 18:43:34.907155] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:06.519 [2024-10-08 18:43:34.907714] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:06.519 [2024-10-08 18:43:34.908257] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:06.519 [2024-10-08 18:43:34.908308] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:06.519 [2024-10-08 18:43:34.908342] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:06.519 [2024-10-08 18:43:34.916383] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:06.519 [2024-10-08 18:43:34.924949] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:06.519 [2024-10-08 18:43:34.925722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.519 [2024-10-08 18:43:34.925793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:06.519 [2024-10-08 18:43:34.925834] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:06.519 [2024-10-08 18:43:34.926368] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:06.519 [2024-10-08 18:43:34.926939] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:06.519 [2024-10-08 18:43:34.926993] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:06.519 [2024-10-08 18:43:34.927026] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:06.519 [2024-10-08 18:43:34.935071] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:06.519 [2024-10-08 18:43:34.943608] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:06.519 [2024-10-08 18:43:34.944415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.519 [2024-10-08 18:43:34.944484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:06.519 [2024-10-08 18:43:34.944524] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:06.519 [2024-10-08 18:43:34.945079] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:06.519 [2024-10-08 18:43:34.945621] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:06.519 [2024-10-08 18:43:34.945691] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:06.519 [2024-10-08 18:43:34.945728] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:06.519 [2024-10-08 18:43:34.953782] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:06.519 [2024-10-08 18:43:34.962343] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:06.519 [2024-10-08 18:43:34.963153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.519 [2024-10-08 18:43:34.963223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:06.519 [2024-10-08 18:43:34.963265] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:06.519 [2024-10-08 18:43:34.963822] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:06.519 [2024-10-08 18:43:34.964366] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:06.519 [2024-10-08 18:43:34.964418] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:06.519 [2024-10-08 18:43:34.964452] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:06.519 [2024-10-08 18:43:34.972501] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:06.519 [2024-10-08 18:43:34.981047] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:06.519 [2024-10-08 18:43:34.981828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.519 [2024-10-08 18:43:34.981899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:06.519 [2024-10-08 18:43:34.981939] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:06.519 [2024-10-08 18:43:34.982484] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:06.519 [2024-10-08 18:43:34.983063] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:06.519 [2024-10-08 18:43:34.983117] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:06.519 [2024-10-08 18:43:34.983150] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:06.519 [2024-10-08 18:43:34.991199] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:06.519 [2024-10-08 18:43:34.999835] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:06.519 [2024-10-08 18:43:35.000607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.519 [2024-10-08 18:43:35.000694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:06.520 [2024-10-08 18:43:35.000739] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:06.520 [2024-10-08 18:43:35.001272] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:06.520 [2024-10-08 18:43:35.001839] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:06.520 [2024-10-08 18:43:35.001892] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:06.520 [2024-10-08 18:43:35.001926] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:06.520 [2024-10-08 18:43:35.009963] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:06.520 [2024-10-08 18:43:35.018506] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:06.520 [2024-10-08 18:43:35.019318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.520 [2024-10-08 18:43:35.019390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:06.520 [2024-10-08 18:43:35.019442] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:06.520 [2024-10-08 18:43:35.020002] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:06.520 [2024-10-08 18:43:35.020550] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:06.520 [2024-10-08 18:43:35.020603] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:06.520 [2024-10-08 18:43:35.020636] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:06.520 [2024-10-08 18:43:35.028698] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:06.520 [2024-10-08 18:43:35.037234] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:06.520 [2024-10-08 18:43:35.038016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.520 [2024-10-08 18:43:35.038086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:06.520 [2024-10-08 18:43:35.038126] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:06.520 [2024-10-08 18:43:35.038681] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:06.520 [2024-10-08 18:43:35.039223] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:06.520 [2024-10-08 18:43:35.039275] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:06.520 [2024-10-08 18:43:35.039309] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:06.520 [2024-10-08 18:43:35.047346] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:06.779 [2024-10-08 18:43:35.056165] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:06.779 [2024-10-08 18:43:35.057003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.779 [2024-10-08 18:43:35.057076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:06.779 [2024-10-08 18:43:35.057118] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:06.779 [2024-10-08 18:43:35.057675] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:06.779 [2024-10-08 18:43:35.058221] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:06.779 [2024-10-08 18:43:35.058273] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:06.779 [2024-10-08 18:43:35.058307] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:06.779 [2024-10-08 18:43:35.066447] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:06.779 [2024-10-08 18:43:35.075012] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:06.779 [2024-10-08 18:43:35.075804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.779 [2024-10-08 18:43:35.075876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:06.779 [2024-10-08 18:43:35.075917] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:06.779 [2024-10-08 18:43:35.076452] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:06.779 [2024-10-08 18:43:35.077018] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:06.779 [2024-10-08 18:43:35.077087] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:06.779 [2024-10-08 18:43:35.077123] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:06.779 [2024-10-08 18:43:35.085172] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:06.779 [2024-10-08 18:43:35.093753] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:06.779 [2024-10-08 18:43:35.094550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.779 [2024-10-08 18:43:35.094621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:06.779 [2024-10-08 18:43:35.094680] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:06.779 [2024-10-08 18:43:35.095219] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:06.779 [2024-10-08 18:43:35.095809] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:06.779 [2024-10-08 18:43:35.095864] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:06.779 [2024-10-08 18:43:35.095899] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:06.779 [2024-10-08 18:43:35.103938] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:06.779 [2024-10-08 18:43:35.112485] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:06.779 [2024-10-08 18:43:35.113268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.779 [2024-10-08 18:43:35.113337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:06.779 [2024-10-08 18:43:35.113377] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:06.779 [2024-10-08 18:43:35.113936] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:06.779 [2024-10-08 18:43:35.114478] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:06.779 [2024-10-08 18:43:35.114530] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:06.779 [2024-10-08 18:43:35.114563] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:06.779 [2024-10-08 18:43:35.122613] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:06.779 [2024-10-08 18:43:35.131168] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:06.779 [2024-10-08 18:43:35.131950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.780 [2024-10-08 18:43:35.132019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:06.780 [2024-10-08 18:43:35.132060] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:06.780 [2024-10-08 18:43:35.132592] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:06.780 [2024-10-08 18:43:35.133154] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:06.780 [2024-10-08 18:43:35.133207] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:06.780 [2024-10-08 18:43:35.133242] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:06.780 [2024-10-08 18:43:35.141362] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:06.780 [2024-10-08 18:43:35.149917] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:06.780 [2024-10-08 18:43:35.150736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.780 [2024-10-08 18:43:35.150808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:06.780 [2024-10-08 18:43:35.150849] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:06.780 [2024-10-08 18:43:35.151384] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:06.780 [2024-10-08 18:43:35.151945] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:06.780 [2024-10-08 18:43:35.151998] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:06.780 [2024-10-08 18:43:35.152032] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:06.780 [2024-10-08 18:43:35.160082] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:06.780 [2024-10-08 18:43:35.168615] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:06.780 [2024-10-08 18:43:35.169431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.780 [2024-10-08 18:43:35.169501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:06.780 [2024-10-08 18:43:35.169541] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:06.780 [2024-10-08 18:43:35.170098] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:06.780 [2024-10-08 18:43:35.170640] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:06.780 [2024-10-08 18:43:35.170711] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:06.780 [2024-10-08 18:43:35.170745] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:06.780 [2024-10-08 18:43:35.178795] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:06.780 [2024-10-08 18:43:35.187332] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:06.780 [2024-10-08 18:43:35.188112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.780 [2024-10-08 18:43:35.188183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:06.780 [2024-10-08 18:43:35.188223] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:06.780 [2024-10-08 18:43:35.188782] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:06.780 [2024-10-08 18:43:35.189325] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:06.780 [2024-10-08 18:43:35.189377] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:06.780 [2024-10-08 18:43:35.189410] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:06.780 [2024-10-08 18:43:35.197499] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:06.780 [2024-10-08 18:43:35.206081] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:06.780 [2024-10-08 18:43:35.206836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.780 [2024-10-08 18:43:35.206907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:06.780 [2024-10-08 18:43:35.206946] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:06.780 [2024-10-08 18:43:35.207493] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:06.780 [2024-10-08 18:43:35.208055] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:06.780 [2024-10-08 18:43:35.208110] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:06.780 [2024-10-08 18:43:35.208144] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:06.780 [2024-10-08 18:43:35.216209] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:06.780 [2024-10-08 18:43:35.224772] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:06.780 [2024-10-08 18:43:35.225540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.780 [2024-10-08 18:43:35.225609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:06.780 [2024-10-08 18:43:35.225674] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:06.780 [2024-10-08 18:43:35.226222] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:06.780 [2024-10-08 18:43:35.226782] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:06.780 [2024-10-08 18:43:35.226835] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:06.780 [2024-10-08 18:43:35.226869] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:06.780 [2024-10-08 18:43:35.234911] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:06.780 [2024-10-08 18:43:35.243454] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:06.780 [2024-10-08 18:43:35.244246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.780 [2024-10-08 18:43:35.244317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:06.780 [2024-10-08 18:43:35.244358] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:06.780 [2024-10-08 18:43:35.244917] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:06.780 [2024-10-08 18:43:35.245459] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:06.780 [2024-10-08 18:43:35.245511] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:06.780 [2024-10-08 18:43:35.245544] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:06.780 [2024-10-08 18:43:35.253587] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:06.780 [2024-10-08 18:43:35.262140] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:06.780 [2024-10-08 18:43:35.262922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.780 [2024-10-08 18:43:35.262992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:06.780 [2024-10-08 18:43:35.263032] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:06.780 [2024-10-08 18:43:35.263566] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:06.780 [2024-10-08 18:43:35.264129] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:06.780 [2024-10-08 18:43:35.264182] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:06.780 [2024-10-08 18:43:35.264230] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:06.780 [2024-10-08 18:43:35.272272] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:06.780 [2024-10-08 18:43:35.280833] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:06.780 [2024-10-08 18:43:35.281624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.780 [2024-10-08 18:43:35.281713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:06.780 [2024-10-08 18:43:35.281755] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:06.780 [2024-10-08 18:43:35.282288] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:06.780 [2024-10-08 18:43:35.282849] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:06.780 [2024-10-08 18:43:35.282903] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:06.780 [2024-10-08 18:43:35.282937] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:06.780 [2024-10-08 18:43:35.290980] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:06.780 [2024-10-08 18:43:35.299610] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:06.780 [2024-10-08 18:43:35.300424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.780 [2024-10-08 18:43:35.300495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:06.780 [2024-10-08 18:43:35.300534] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:06.780 [2024-10-08 18:43:35.301090] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:06.780 [2024-10-08 18:43:35.301635] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:06.780 [2024-10-08 18:43:35.301702] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:06.780 [2024-10-08 18:43:35.301738] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:06.780 [2024-10-08 18:43:35.309774] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:07.040 [2024-10-08 18:43:35.318565] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:07.040 [2024-10-08 18:43:35.319387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.040 [2024-10-08 18:43:35.319461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:07.040 [2024-10-08 18:43:35.319503] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:07.040 [2024-10-08 18:43:35.320108] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:07.040 [2024-10-08 18:43:35.320690] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:07.040 [2024-10-08 18:43:35.320745] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:07.040 [2024-10-08 18:43:35.320779] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:07.040 [2024-10-08 18:43:35.328824] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:07.040 [2024-10-08 18:43:35.337371] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:07.040 [2024-10-08 18:43:35.338122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.040 [2024-10-08 18:43:35.338205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:07.040 [2024-10-08 18:43:35.338249] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:07.040 [2024-10-08 18:43:35.338813] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:07.040 [2024-10-08 18:43:35.339356] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:07.040 [2024-10-08 18:43:35.339407] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:07.040 [2024-10-08 18:43:35.339442] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:07.040 [2024-10-08 18:43:35.347488] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:07.040 [2024-10-08 18:43:35.356062] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:07.040 [2024-10-08 18:43:35.356875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.040 [2024-10-08 18:43:35.356946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:07.040 [2024-10-08 18:43:35.356986] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:07.040 [2024-10-08 18:43:35.357521] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:07.040 [2024-10-08 18:43:35.358086] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:07.040 [2024-10-08 18:43:35.358139] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:07.040 [2024-10-08 18:43:35.358174] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:07.040 [2024-10-08 18:43:35.366226] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:07.040 [2024-10-08 18:43:35.374785] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:07.040 [2024-10-08 18:43:35.375582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.040 [2024-10-08 18:43:35.375673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:07.040 [2024-10-08 18:43:35.375720] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:07.040 [2024-10-08 18:43:35.376254] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:07.040 [2024-10-08 18:43:35.376811] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:07.040 [2024-10-08 18:43:35.376865] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:07.040 [2024-10-08 18:43:35.376899] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:07.040 [2024-10-08 18:43:35.384984] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:07.040 [2024-10-08 18:43:35.393567] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:07.040 [2024-10-08 18:43:35.394358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.040 [2024-10-08 18:43:35.394428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:07.040 [2024-10-08 18:43:35.394468] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:07.040 [2024-10-08 18:43:35.395023] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:07.040 [2024-10-08 18:43:35.395588] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:07.040 [2024-10-08 18:43:35.395641] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:07.040 [2024-10-08 18:43:35.395709] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:07.040 [2024-10-08 18:43:35.403861] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:07.040 [2024-10-08 18:43:35.412424] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:07.040 [2024-10-08 18:43:35.413273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.040 [2024-10-08 18:43:35.413345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:07.040 [2024-10-08 18:43:35.413387] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:07.040 [2024-10-08 18:43:35.413946] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:07.040 [2024-10-08 18:43:35.414491] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:07.040 [2024-10-08 18:43:35.414543] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:07.040 [2024-10-08 18:43:35.414577] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:07.040 [2024-10-08 18:43:35.422620] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:07.040 [2024-10-08 18:43:35.431171] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:07.040 [2024-10-08 18:43:35.431991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.040 [2024-10-08 18:43:35.432063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:07.040 [2024-10-08 18:43:35.432103] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:07.040 [2024-10-08 18:43:35.432637] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:07.040 [2024-10-08 18:43:35.433204] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:07.040 [2024-10-08 18:43:35.433255] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:07.040 [2024-10-08 18:43:35.433289] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:07.040 [2024-10-08 18:43:35.441329] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:07.040 [2024-10-08 18:43:35.449898] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:07.040 [2024-10-08 18:43:35.450698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.040 [2024-10-08 18:43:35.450771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:07.040 [2024-10-08 18:43:35.450811] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:07.041 [2024-10-08 18:43:35.451344] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:07.041 [2024-10-08 18:43:35.451905] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:07.041 [2024-10-08 18:43:35.451958] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:07.041 [2024-10-08 18:43:35.451992] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:07.041 [2024-10-08 18:43:35.460051] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:07.041 [2024-10-08 18:43:35.468873] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:07.041 [2024-10-08 18:43:35.469686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.041 [2024-10-08 18:43:35.469757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:07.041 [2024-10-08 18:43:35.469797] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:07.041 [2024-10-08 18:43:35.470331] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:07.041 [2024-10-08 18:43:35.470901] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:07.041 [2024-10-08 18:43:35.470955] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:07.041 [2024-10-08 18:43:35.470989] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:07.041 [2024-10-08 18:43:35.478856] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:07.041 [2024-10-08 18:43:35.487874] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:07.041 [2024-10-08 18:43:35.488647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.041 [2024-10-08 18:43:35.488734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:07.041 [2024-10-08 18:43:35.488775] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:07.041 [2024-10-08 18:43:35.489308] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:07.041 [2024-10-08 18:43:35.489868] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:07.041 [2024-10-08 18:43:35.489922] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:07.041 [2024-10-08 18:43:35.489957] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:07.041 [2024-10-08 18:43:35.498048] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:07.041 [2024-10-08 18:43:35.506590] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:07.041 [2024-10-08 18:43:35.507404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.041 [2024-10-08 18:43:35.507475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:07.041 [2024-10-08 18:43:35.507516] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:07.041 [2024-10-08 18:43:35.508075] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:07.041 [2024-10-08 18:43:35.508617] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:07.041 [2024-10-08 18:43:35.508687] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:07.041 [2024-10-08 18:43:35.508744] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:07.041 [2024-10-08 18:43:35.516814] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:07.041 [2024-10-08 18:43:35.525350] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:07.041 [2024-10-08 18:43:35.526152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.041 [2024-10-08 18:43:35.526223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:07.041 [2024-10-08 18:43:35.526276] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:07.041 [2024-10-08 18:43:35.526839] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:07.041 [2024-10-08 18:43:35.527380] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:07.041 [2024-10-08 18:43:35.527431] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:07.041 [2024-10-08 18:43:35.527465] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:07.041 [2024-10-08 18:43:35.535507] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:07.041 [2024-10-08 18:43:35.544066] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:07.041 [2024-10-08 18:43:35.544869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.041 [2024-10-08 18:43:35.544939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:07.041 [2024-10-08 18:43:35.544980] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:07.041 [2024-10-08 18:43:35.545512] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:07.041 [2024-10-08 18:43:35.546077] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:07.041 [2024-10-08 18:43:35.546130] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:07.041 [2024-10-08 18:43:35.546164] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:07.041 [2024-10-08 18:43:35.554207] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:07.041 [2024-10-08 18:43:35.562764] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:07.041 [2024-10-08 18:43:35.563562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.041 [2024-10-08 18:43:35.563630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:07.041 [2024-10-08 18:43:35.563701] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:07.041 [2024-10-08 18:43:35.564238] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:07.041 [2024-10-08 18:43:35.564796] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:07.041 [2024-10-08 18:43:35.564849] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:07.041 [2024-10-08 18:43:35.564883] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:07.041 [2024-10-08 18:43:35.573003] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:07.301 [2024-10-08 18:43:35.581774] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:07.301 [2024-10-08 18:43:35.582575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.301 [2024-10-08 18:43:35.582646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:07.301 [2024-10-08 18:43:35.582713] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:07.301 [2024-10-08 18:43:35.583248] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:07.301 [2024-10-08 18:43:35.583813] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:07.301 [2024-10-08 18:43:35.583879] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:07.301 [2024-10-08 18:43:35.583915] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:07.301 [2024-10-08 18:43:35.592014] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:07.301 [2024-10-08 18:43:35.600602] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:07.301 [2024-10-08 18:43:35.601419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.301 [2024-10-08 18:43:35.601491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:07.301 [2024-10-08 18:43:35.601531] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:07.301 [2024-10-08 18:43:35.602089] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:07.301 [2024-10-08 18:43:35.602633] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:07.301 [2024-10-08 18:43:35.602703] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:07.301 [2024-10-08 18:43:35.602737] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:07.301 [2024-10-08 18:43:35.610779] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:07.301 [2024-10-08 18:43:35.619375] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:07.301 [2024-10-08 18:43:35.620191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.301 [2024-10-08 18:43:35.620264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:07.301 [2024-10-08 18:43:35.620304] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:07.301 [2024-10-08 18:43:35.620863] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:07.301 [2024-10-08 18:43:35.621411] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:07.301 [2024-10-08 18:43:35.621462] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:07.301 [2024-10-08 18:43:35.621495] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:07.301 [2024-10-08 18:43:35.629540] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:07.301 [2024-10-08 18:43:35.638097] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:07.301 [2024-10-08 18:43:35.638906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.301 [2024-10-08 18:43:35.638977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:07.301 [2024-10-08 18:43:35.639017] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:07.301 [2024-10-08 18:43:35.639551] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:07.301 [2024-10-08 18:43:35.640117] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:07.301 [2024-10-08 18:43:35.640172] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:07.301 [2024-10-08 18:43:35.640206] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:07.301 [2024-10-08 18:43:35.648249] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:07.301 [2024-10-08 18:43:35.656815] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:07.301 [2024-10-08 18:43:35.657612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.301 [2024-10-08 18:43:35.657699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:07.301 [2024-10-08 18:43:35.657742] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:07.301 [2024-10-08 18:43:35.658275] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:07.301 [2024-10-08 18:43:35.658836] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:07.301 [2024-10-08 18:43:35.658889] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:07.301 [2024-10-08 18:43:35.658924] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:07.301 [2024-10-08 18:43:35.666965] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:07.301 [2024-10-08 18:43:35.675507] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:07.301 [2024-10-08 18:43:35.676287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.301 [2024-10-08 18:43:35.676358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:07.301 [2024-10-08 18:43:35.676398] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:07.301 [2024-10-08 18:43:35.676957] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:07.301 [2024-10-08 18:43:35.677499] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:07.301 [2024-10-08 18:43:35.677550] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:07.301 [2024-10-08 18:43:35.677583] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:07.301 [2024-10-08 18:43:35.685622] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:07.301 [2024-10-08 18:43:35.694188] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:07.302 [2024-10-08 18:43:35.694995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.302 [2024-10-08 18:43:35.695066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:07.302 [2024-10-08 18:43:35.695107] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:07.302 [2024-10-08 18:43:35.695639] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:07.302 [2024-10-08 18:43:35.696204] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:07.302 [2024-10-08 18:43:35.696257] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:07.302 [2024-10-08 18:43:35.696290] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:07.302 [2024-10-08 18:43:35.704384] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:07.302 [2024-10-08 18:43:35.712960] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:07.302 [2024-10-08 18:43:35.713754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.302 [2024-10-08 18:43:35.713826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:07.302 [2024-10-08 18:43:35.713866] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:07.302 [2024-10-08 18:43:35.714414] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:07.302 [2024-10-08 18:43:35.714973] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:07.302 [2024-10-08 18:43:35.715025] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:07.302 [2024-10-08 18:43:35.715059] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:07.302 [2024-10-08 18:43:35.723108] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:07.302 [2024-10-08 18:43:35.731678] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:07.302 [2024-10-08 18:43:35.732446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.302 [2024-10-08 18:43:35.732517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:07.302 [2024-10-08 18:43:35.732556] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:07.302 [2024-10-08 18:43:35.733116] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:07.302 [2024-10-08 18:43:35.733676] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:07.302 [2024-10-08 18:43:35.733730] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:07.302 [2024-10-08 18:43:35.733764] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:07.302 [2024-10-08 18:43:35.741811] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:07.302 [2024-10-08 18:43:35.750345] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:07.302 [2024-10-08 18:43:35.751150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.302 [2024-10-08 18:43:35.751220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:07.302 [2024-10-08 18:43:35.751259] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:07.302 [2024-10-08 18:43:35.751817] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:07.302 [2024-10-08 18:43:35.752357] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:07.302 [2024-10-08 18:43:35.752408] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:07.302 [2024-10-08 18:43:35.752443] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:07.302 [2024-10-08 18:43:35.760491] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:07.302 [2024-10-08 18:43:35.769051] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:07.302 [2024-10-08 18:43:35.769864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.302 [2024-10-08 18:43:35.769934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:07.302 [2024-10-08 18:43:35.769975] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:07.302 [2024-10-08 18:43:35.770508] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:07.302 [2024-10-08 18:43:35.771072] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:07.302 [2024-10-08 18:43:35.771125] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:07.302 [2024-10-08 18:43:35.771172] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:07.302 [2024-10-08 18:43:35.779222] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:07.302 [2024-10-08 18:43:35.787781] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:07.302 [2024-10-08 18:43:35.788583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.302 [2024-10-08 18:43:35.788673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:07.302 [2024-10-08 18:43:35.788717] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:07.302 [2024-10-08 18:43:35.789250] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:07.302 [2024-10-08 18:43:35.789810] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:07.302 [2024-10-08 18:43:35.789863] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:07.302 [2024-10-08 18:43:35.789896] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:07.302 [2024-10-08 18:43:35.797949] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:07.302 [2024-10-08 18:43:35.806517] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:07.302 [2024-10-08 18:43:35.807301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.302 [2024-10-08 18:43:35.807371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:07.302 [2024-10-08 18:43:35.807412] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:07.302 [2024-10-08 18:43:35.807970] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:07.302 [2024-10-08 18:43:35.808511] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:07.302 [2024-10-08 18:43:35.808562] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:07.302 [2024-10-08 18:43:35.808595] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:07.302 [2024-10-08 18:43:35.816637] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:07.302 [2024-10-08 18:43:35.825188] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:07.302 [2024-10-08 18:43:35.826022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.302 [2024-10-08 18:43:35.826092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:07.302 [2024-10-08 18:43:35.826132] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:07.302 [2024-10-08 18:43:35.826686] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:07.302 [2024-10-08 18:43:35.827227] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:07.302 [2024-10-08 18:43:35.827280] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:07.302 [2024-10-08 18:43:35.827313] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:07.302 [2024-10-08 18:43:35.835462] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:07.562 [2024-10-08 18:43:35.844294] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:07.562 [2024-10-08 18:43:35.845157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.562 [2024-10-08 18:43:35.845230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:07.562 [2024-10-08 18:43:35.845271] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:07.562 [2024-10-08 18:43:35.845830] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:07.562 [2024-10-08 18:43:35.846377] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:07.562 [2024-10-08 18:43:35.846429] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:07.562 [2024-10-08 18:43:35.846465] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:07.562 [2024-10-08 18:43:35.854511] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:07.562 [2024-10-08 18:43:35.862542] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:07.562 [2024-10-08 18:43:35.863340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.562 [2024-10-08 18:43:35.863412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:07.562 [2024-10-08 18:43:35.863454] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:07.562 [2024-10-08 18:43:35.864011] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:07.563 [2024-10-08 18:43:35.864554] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:07.563 [2024-10-08 18:43:35.864606] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:07.563 [2024-10-08 18:43:35.864641] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:07.563 [2024-10-08 18:43:35.872715] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:07.563 [2024-10-08 18:43:35.881268] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:07.563 [2024-10-08 18:43:35.882077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.563 [2024-10-08 18:43:35.882148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:07.563 [2024-10-08 18:43:35.882189] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:07.563 [2024-10-08 18:43:35.882754] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:07.563 [2024-10-08 18:43:35.883300] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:07.563 [2024-10-08 18:43:35.883351] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:07.563 [2024-10-08 18:43:35.883385] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:07.563 [2024-10-08 18:43:35.891427] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:07.563 [2024-10-08 18:43:35.900059] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:07.563 [2024-10-08 18:43:35.900820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.563 [2024-10-08 18:43:35.900891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:07.563 [2024-10-08 18:43:35.900933] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:07.563 [2024-10-08 18:43:35.901479] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:07.563 [2024-10-08 18:43:35.902046] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:07.563 [2024-10-08 18:43:35.902101] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:07.563 [2024-10-08 18:43:35.902136] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:07.563 5571.75 IOPS, 21.76 MiB/s [2024-10-08T16:43:36.100Z] [2024-10-08 18:43:35.910358] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:07.563 [2024-10-08 18:43:35.914176] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
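Editor's note: the entry above also carries an interleaved throughput sample ("5571.75 IOPS, 21.76 MiB/s") from the performance tool running alongside, plus a "Unable to perform failover, already in progress" notice, showing that I/O submission continues while the reconnect loop spins. The timestamps show one full cycle (disconnect, failed connect, failed reinitialization, "Resetting controller failed.") repeating roughly every 18-19 ms. The loop below is only a schematic stand-in for that cycle, not SPDK's bdev_nvme reconnect state machine; the address, port, delay, and attempt count are taken from the log or chosen for illustration.

/* Schematic of the retry cycle visible in the log: each iteration attempts a
 * fresh TCP connect and, on ECONNREFUSED, tears the socket down and schedules
 * the next attempt. Simplified stand-in, not SPDK bdev_nvme reconnect code. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <errno.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

static bool try_connect(const char *ip, uint16_t port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return false;

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, ip, &addr.sin_addr);

    bool ok = connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0;
    if (!ok)
        fprintf(stderr, "connect() failed, errno = %d\n", errno);
    close(fd);
    return ok;
}

int main(void)
{
    /* Address, port, pacing and attempt count mirror the log; illustrative only. */
    for (int attempt = 0; attempt < 10; attempt++) {
        if (try_connect("10.0.0.2", 4420)) {
            printf("attempt %d: connected\n", attempt);
            return 0;
        }
        printf("attempt %d: reset/reconnect failed, retrying\n", attempt);
        usleep(19000);   /* ~19 ms between attempts, as in the timestamps above */
    }
    fprintf(stderr, "giving up after repeated failures\n");
    return 1;
}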
00:33:07.563 [2024-10-08 18:43:35.929173] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:07.563 [2024-10-08 18:43:35.929923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.563 [2024-10-08 18:43:35.929994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:07.563 [2024-10-08 18:43:35.930034] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:07.563 [2024-10-08 18:43:35.930303] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:07.563 [2024-10-08 18:43:35.930686] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:07.563 [2024-10-08 18:43:35.930741] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:07.563 [2024-10-08 18:43:35.930774] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:07.563 [2024-10-08 18:43:35.938588] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:07.563 [2024-10-08 18:43:35.948114] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:07.563 [2024-10-08 18:43:35.948877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.563 [2024-10-08 18:43:35.948947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:07.563 [2024-10-08 18:43:35.948987] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:07.563 [2024-10-08 18:43:35.949519] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:07.563 [2024-10-08 18:43:35.950082] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:07.563 [2024-10-08 18:43:35.950135] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:07.563 [2024-10-08 18:43:35.950170] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:07.563 [2024-10-08 18:43:35.958218] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:07.563 [2024-10-08 18:43:35.967270] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:07.563 [2024-10-08 18:43:35.968048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.563 [2024-10-08 18:43:35.968118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:07.563 [2024-10-08 18:43:35.968158] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:07.563 [2024-10-08 18:43:35.968706] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:07.563 [2024-10-08 18:43:35.969249] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:07.563 [2024-10-08 18:43:35.969321] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:07.563 [2024-10-08 18:43:35.969358] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:07.563 [2024-10-08 18:43:35.977419] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:07.563 [2024-10-08 18:43:35.985997] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:07.563 [2024-10-08 18:43:35.986811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.563 [2024-10-08 18:43:35.986882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:07.563 [2024-10-08 18:43:35.986923] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:07.563 [2024-10-08 18:43:35.987457] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:07.563 [2024-10-08 18:43:35.988018] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:07.563 [2024-10-08 18:43:35.988070] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:07.563 [2024-10-08 18:43:35.988104] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:07.563 [2024-10-08 18:43:35.996170] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:07.563 [2024-10-08 18:43:36.004791] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:07.563 [2024-10-08 18:43:36.005574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.563 [2024-10-08 18:43:36.005644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:07.563 [2024-10-08 18:43:36.005707] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:07.563 [2024-10-08 18:43:36.006242] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:07.563 [2024-10-08 18:43:36.006805] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:07.563 [2024-10-08 18:43:36.006858] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:07.563 [2024-10-08 18:43:36.006892] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:07.563 [2024-10-08 18:43:36.014948] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:07.563 [2024-10-08 18:43:36.023508] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:07.563 [2024-10-08 18:43:36.024289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.563 [2024-10-08 18:43:36.024360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:07.563 [2024-10-08 18:43:36.024401] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:07.563 [2024-10-08 18:43:36.024955] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:07.563 [2024-10-08 18:43:36.025501] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:07.563 [2024-10-08 18:43:36.025554] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:07.563 [2024-10-08 18:43:36.025588] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:07.563 [2024-10-08 18:43:36.033629] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:07.563 [2024-10-08 18:43:36.042205] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:07.563 [2024-10-08 18:43:36.042964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.563 [2024-10-08 18:43:36.043034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:07.563 [2024-10-08 18:43:36.043074] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:07.563 [2024-10-08 18:43:36.043607] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:07.563 [2024-10-08 18:43:36.044171] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:07.563 [2024-10-08 18:43:36.044224] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:07.563 [2024-10-08 18:43:36.044259] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:07.563 [2024-10-08 18:43:36.052325] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:07.563 [2024-10-08 18:43:36.060905] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:07.563 [2024-10-08 18:43:36.061713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.563 [2024-10-08 18:43:36.061785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:07.563 [2024-10-08 18:43:36.061827] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:07.563 [2024-10-08 18:43:36.062359] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:07.563 [2024-10-08 18:43:36.062919] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:07.564 [2024-10-08 18:43:36.062984] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:07.564 [2024-10-08 18:43:36.063017] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:07.564 [2024-10-08 18:43:36.071067] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:07.564 [2024-10-08 18:43:36.079620] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:07.564 [2024-10-08 18:43:36.080432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.564 [2024-10-08 18:43:36.080502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:07.564 [2024-10-08 18:43:36.080543] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:07.564 [2024-10-08 18:43:36.081099] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:07.564 [2024-10-08 18:43:36.081641] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:07.564 [2024-10-08 18:43:36.081710] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:07.564 [2024-10-08 18:43:36.081745] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:07.564 [2024-10-08 18:43:36.089781] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:07.564 [2024-10-08 18:43:36.098525] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:07.825 [2024-10-08 18:43:36.099349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.825 [2024-10-08 18:43:36.099424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:07.825 [2024-10-08 18:43:36.099465] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:07.825 [2024-10-08 18:43:36.100075] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:07.825 [2024-10-08 18:43:36.100689] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:07.825 [2024-10-08 18:43:36.100745] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:07.825 [2024-10-08 18:43:36.100780] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:07.825 [2024-10-08 18:43:36.108939] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:07.825 [2024-10-08 18:43:36.117494] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:07.825 [2024-10-08 18:43:36.118335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.825 [2024-10-08 18:43:36.118406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:07.825 [2024-10-08 18:43:36.118447] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:07.825 [2024-10-08 18:43:36.119007] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:07.825 [2024-10-08 18:43:36.119553] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:07.825 [2024-10-08 18:43:36.119606] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:07.825 [2024-10-08 18:43:36.119640] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:07.825 [2024-10-08 18:43:36.127719] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:07.825 [2024-10-08 18:43:36.136267] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:07.825 [2024-10-08 18:43:36.137126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.825 [2024-10-08 18:43:36.137198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:07.825 [2024-10-08 18:43:36.137239] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:07.825 [2024-10-08 18:43:36.137800] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:07.825 [2024-10-08 18:43:36.138346] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:07.825 [2024-10-08 18:43:36.138398] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:07.825 [2024-10-08 18:43:36.138433] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:07.825 [2024-10-08 18:43:36.146480] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:07.825 [2024-10-08 18:43:36.155052] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:07.825 [2024-10-08 18:43:36.155872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.825 [2024-10-08 18:43:36.155943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:07.825 [2024-10-08 18:43:36.155984] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:07.825 [2024-10-08 18:43:36.156517] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:07.825 [2024-10-08 18:43:36.157086] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:07.825 [2024-10-08 18:43:36.157139] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:07.825 [2024-10-08 18:43:36.157188] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:07.825 [2024-10-08 18:43:36.165237] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:07.825 [2024-10-08 18:43:36.173805] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:07.825 [2024-10-08 18:43:36.174638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.825 [2024-10-08 18:43:36.174734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:07.825 [2024-10-08 18:43:36.174775] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:07.825 [2024-10-08 18:43:36.175310] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:07.825 [2024-10-08 18:43:36.175877] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:07.825 [2024-10-08 18:43:36.175931] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:07.825 [2024-10-08 18:43:36.175965] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:07.825 [2024-10-08 18:43:36.184013] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:07.825 [2024-10-08 18:43:36.192580] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:07.825 [2024-10-08 18:43:36.193378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.825 [2024-10-08 18:43:36.193456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:07.825 [2024-10-08 18:43:36.193497] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:07.825 [2024-10-08 18:43:36.194053] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:07.825 [2024-10-08 18:43:36.194605] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:07.825 [2024-10-08 18:43:36.194675] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:07.825 [2024-10-08 18:43:36.194713] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:07.825 [2024-10-08 18:43:36.202800] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:07.825 [2024-10-08 18:43:36.211347] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:07.825 [2024-10-08 18:43:36.212175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.825 [2024-10-08 18:43:36.212245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:07.825 [2024-10-08 18:43:36.212285] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:07.825 [2024-10-08 18:43:36.212845] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:07.825 [2024-10-08 18:43:36.213390] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:07.825 [2024-10-08 18:43:36.213443] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:07.825 [2024-10-08 18:43:36.213477] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:07.825 [2024-10-08 18:43:36.221511] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:07.825 [2024-10-08 18:43:36.230081] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:07.825 [2024-10-08 18:43:36.230894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.825 [2024-10-08 18:43:36.230975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:07.825 [2024-10-08 18:43:36.231016] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:07.825 [2024-10-08 18:43:36.231549] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:07.825 [2024-10-08 18:43:36.232116] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:07.825 [2024-10-08 18:43:36.232170] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:07.825 [2024-10-08 18:43:36.232203] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:07.825 [2024-10-08 18:43:36.240252] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:07.825 [2024-10-08 18:43:36.248843] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:07.825 [2024-10-08 18:43:36.249675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.825 [2024-10-08 18:43:36.249745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:07.825 [2024-10-08 18:43:36.249787] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:07.825 [2024-10-08 18:43:36.250321] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:07.825 [2024-10-08 18:43:36.250895] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:07.825 [2024-10-08 18:43:36.250948] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:07.825 [2024-10-08 18:43:36.250982] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:07.825 [2024-10-08 18:43:36.259036] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:07.825 [2024-10-08 18:43:36.267579] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:07.825 [2024-10-08 18:43:36.268390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.825 [2024-10-08 18:43:36.268460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:07.825 [2024-10-08 18:43:36.268500] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:07.825 [2024-10-08 18:43:36.269058] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:07.825 [2024-10-08 18:43:36.269600] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:07.825 [2024-10-08 18:43:36.269667] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:07.825 [2024-10-08 18:43:36.269705] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:07.825 [2024-10-08 18:43:36.277754] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:07.825 [2024-10-08 18:43:36.286297] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:07.826 [2024-10-08 18:43:36.287126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.826 [2024-10-08 18:43:36.287199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:07.826 [2024-10-08 18:43:36.287240] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:07.826 [2024-10-08 18:43:36.287800] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:07.826 [2024-10-08 18:43:36.288360] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:07.826 [2024-10-08 18:43:36.288412] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:07.826 [2024-10-08 18:43:36.288446] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:07.826 [2024-10-08 18:43:36.296504] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:07.826 [2024-10-08 18:43:36.305105] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:07.826 [2024-10-08 18:43:36.305955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.826 [2024-10-08 18:43:36.306026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:07.826 [2024-10-08 18:43:36.306065] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:07.826 [2024-10-08 18:43:36.306598] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:07.826 [2024-10-08 18:43:36.307167] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:07.826 [2024-10-08 18:43:36.307220] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:07.826 [2024-10-08 18:43:36.307255] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:07.826 [2024-10-08 18:43:36.315312] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:07.826 [2024-10-08 18:43:36.323879] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:07.826 [2024-10-08 18:43:36.324719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.826 [2024-10-08 18:43:36.324790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:07.826 [2024-10-08 18:43:36.324830] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:07.826 [2024-10-08 18:43:36.325363] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:07.826 [2024-10-08 18:43:36.325932] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:07.826 [2024-10-08 18:43:36.325986] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:07.826 [2024-10-08 18:43:36.326019] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:07.826 [2024-10-08 18:43:36.334073] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:07.826 [2024-10-08 18:43:36.342634] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:07.826 [2024-10-08 18:43:36.343479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.826 [2024-10-08 18:43:36.343559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:07.826 [2024-10-08 18:43:36.343599] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:07.826 [2024-10-08 18:43:36.344157] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:07.826 [2024-10-08 18:43:36.344724] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:07.826 [2024-10-08 18:43:36.344777] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:07.826 [2024-10-08 18:43:36.344811] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:07.826 [2024-10-08 18:43:36.352883] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:08.087 [2024-10-08 18:43:36.361641] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.087 [2024-10-08 18:43:36.362526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.087 [2024-10-08 18:43:36.362599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:08.087 [2024-10-08 18:43:36.362640] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:08.087 [2024-10-08 18:43:36.363200] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:08.087 [2024-10-08 18:43:36.363769] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.087 [2024-10-08 18:43:36.363822] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.087 [2024-10-08 18:43:36.363855] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.087 [2024-10-08 18:43:36.372013] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:08.087 [2024-10-08 18:43:36.380577] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.087 [2024-10-08 18:43:36.381397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.087 [2024-10-08 18:43:36.381469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:08.087 [2024-10-08 18:43:36.381509] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:08.087 [2024-10-08 18:43:36.382069] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:08.087 [2024-10-08 18:43:36.382615] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.087 [2024-10-08 18:43:36.382687] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.087 [2024-10-08 18:43:36.382723] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.087 [2024-10-08 18:43:36.390780] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:08.087 [2024-10-08 18:43:36.399355] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.087 [2024-10-08 18:43:36.400175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.087 [2024-10-08 18:43:36.400246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:08.087 [2024-10-08 18:43:36.400287] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:08.087 [2024-10-08 18:43:36.400843] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:08.087 [2024-10-08 18:43:36.401392] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.087 [2024-10-08 18:43:36.401445] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.087 [2024-10-08 18:43:36.401481] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.087 [2024-10-08 18:43:36.409560] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:08.087 [2024-10-08 18:43:36.418121] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.087 [2024-10-08 18:43:36.418899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.087 [2024-10-08 18:43:36.418970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:08.087 [2024-10-08 18:43:36.419024] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:08.087 [2024-10-08 18:43:36.419561] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:08.087 [2024-10-08 18:43:36.420124] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.087 [2024-10-08 18:43:36.420177] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.087 [2024-10-08 18:43:36.420211] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.087 [2024-10-08 18:43:36.428404] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:08.087 [2024-10-08 18:43:36.436967] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.087 [2024-10-08 18:43:36.437737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.087 [2024-10-08 18:43:36.437808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:08.087 [2024-10-08 18:43:36.437849] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:08.087 [2024-10-08 18:43:36.438384] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:08.087 [2024-10-08 18:43:36.438947] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.087 [2024-10-08 18:43:36.439000] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.087 [2024-10-08 18:43:36.439034] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.087 [2024-10-08 18:43:36.447087] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:08.087 [2024-10-08 18:43:36.455680] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.087 [2024-10-08 18:43:36.456448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.087 [2024-10-08 18:43:36.456518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:08.087 [2024-10-08 18:43:36.456559] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:08.087 [2024-10-08 18:43:36.457118] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:08.087 [2024-10-08 18:43:36.457677] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.087 [2024-10-08 18:43:36.457731] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.087 [2024-10-08 18:43:36.457765] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.087 [2024-10-08 18:43:36.465809] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:08.087 [2024-10-08 18:43:36.474189] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.087 [2024-10-08 18:43:36.475010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.087 [2024-10-08 18:43:36.475080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:08.087 [2024-10-08 18:43:36.475120] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:08.087 [2024-10-08 18:43:36.475674] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:08.087 [2024-10-08 18:43:36.476216] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.087 [2024-10-08 18:43:36.476281] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.087 [2024-10-08 18:43:36.476317] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.087 [2024-10-08 18:43:36.484500] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:08.087 [2024-10-08 18:43:36.493122] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.087 [2024-10-08 18:43:36.493956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.087 [2024-10-08 18:43:36.494027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:08.087 [2024-10-08 18:43:36.494068] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:08.087 [2024-10-08 18:43:36.494601] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:08.087 [2024-10-08 18:43:36.495173] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.087 [2024-10-08 18:43:36.495227] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.087 [2024-10-08 18:43:36.495262] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.087 [2024-10-08 18:43:36.503393] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:08.087 [2024-10-08 18:43:36.511981] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.087 [2024-10-08 18:43:36.512758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.087 [2024-10-08 18:43:36.512831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:08.088 [2024-10-08 18:43:36.512873] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:08.088 [2024-10-08 18:43:36.513408] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:08.088 [2024-10-08 18:43:36.513968] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.088 [2024-10-08 18:43:36.514021] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.088 [2024-10-08 18:43:36.514054] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.088 [2024-10-08 18:43:36.522103] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:08.088 [2024-10-08 18:43:36.530646] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.088 [2024-10-08 18:43:36.531426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.088 [2024-10-08 18:43:36.531496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:08.088 [2024-10-08 18:43:36.531538] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:08.088 [2024-10-08 18:43:36.532104] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:08.088 [2024-10-08 18:43:36.532646] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.088 [2024-10-08 18:43:36.532713] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.088 [2024-10-08 18:43:36.532747] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.088 [2024-10-08 18:43:36.540797] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:08.088 [2024-10-08 18:43:36.549365] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.088 [2024-10-08 18:43:36.550179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.088 [2024-10-08 18:43:36.550251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:08.088 [2024-10-08 18:43:36.550291] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:08.088 [2024-10-08 18:43:36.550853] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:08.088 [2024-10-08 18:43:36.551400] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.088 [2024-10-08 18:43:36.551452] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.088 [2024-10-08 18:43:36.551486] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.088 [2024-10-08 18:43:36.559523] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:08.088 [2024-10-08 18:43:36.568082] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.088 [2024-10-08 18:43:36.568885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.088 [2024-10-08 18:43:36.568955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:08.088 [2024-10-08 18:43:36.568994] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:08.088 [2024-10-08 18:43:36.569527] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:08.088 [2024-10-08 18:43:36.570091] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.088 [2024-10-08 18:43:36.570146] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.088 [2024-10-08 18:43:36.570181] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.088 [2024-10-08 18:43:36.578232] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:08.088 [2024-10-08 18:43:36.586790] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.088 [2024-10-08 18:43:36.587610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.088 [2024-10-08 18:43:36.587700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:08.088 [2024-10-08 18:43:36.587743] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:08.088 [2024-10-08 18:43:36.588275] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:08.088 [2024-10-08 18:43:36.588841] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.088 [2024-10-08 18:43:36.588895] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.088 [2024-10-08 18:43:36.588928] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.088 [2024-10-08 18:43:36.596981] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:08.088 [2024-10-08 18:43:36.605562] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.088 [2024-10-08 18:43:36.606391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.088 [2024-10-08 18:43:36.606461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:08.088 [2024-10-08 18:43:36.606514] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:08.088 [2024-10-08 18:43:36.607076] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:08.088 [2024-10-08 18:43:36.607622] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.088 [2024-10-08 18:43:36.607693] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.088 [2024-10-08 18:43:36.607729] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.088 [2024-10-08 18:43:36.615823] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:08.347 [2024-10-08 18:43:36.624697] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.347 [2024-10-08 18:43:36.625487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.347 [2024-10-08 18:43:36.625561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:08.347 [2024-10-08 18:43:36.625603] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:08.347 [2024-10-08 18:43:36.626158] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:08.347 [2024-10-08 18:43:36.626755] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.347 [2024-10-08 18:43:36.626812] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.347 [2024-10-08 18:43:36.626847] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.347 [2024-10-08 18:43:36.634977] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:08.347 [2024-10-08 18:43:36.643549] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.347 [2024-10-08 18:43:36.644393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.347 [2024-10-08 18:43:36.644475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:08.347 [2024-10-08 18:43:36.644516] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:08.348 [2024-10-08 18:43:36.645071] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:08.348 [2024-10-08 18:43:36.645617] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.348 [2024-10-08 18:43:36.645687] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.348 [2024-10-08 18:43:36.645723] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.348 [2024-10-08 18:43:36.653783] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:08.348 [2024-10-08 18:43:36.662364] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.348 [2024-10-08 18:43:36.663186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.348 [2024-10-08 18:43:36.663257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:08.348 [2024-10-08 18:43:36.663297] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:08.348 [2024-10-08 18:43:36.663859] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:08.348 [2024-10-08 18:43:36.664405] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.348 [2024-10-08 18:43:36.664471] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.348 [2024-10-08 18:43:36.664508] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.348 [2024-10-08 18:43:36.672563] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:08.348 [2024-10-08 18:43:36.681159] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.348 [2024-10-08 18:43:36.681981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.348 [2024-10-08 18:43:36.682061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:08.348 [2024-10-08 18:43:36.682102] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:08.348 [2024-10-08 18:43:36.682636] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:08.348 [2024-10-08 18:43:36.683208] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.348 [2024-10-08 18:43:36.683260] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.348 [2024-10-08 18:43:36.683294] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.348 [2024-10-08 18:43:36.691343] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:08.348 [2024-10-08 18:43:36.699922] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.348 [2024-10-08 18:43:36.700732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.348 [2024-10-08 18:43:36.700803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:08.348 [2024-10-08 18:43:36.700843] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:08.348 [2024-10-08 18:43:36.701377] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:08.348 [2024-10-08 18:43:36.701946] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.348 [2024-10-08 18:43:36.701999] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.348 [2024-10-08 18:43:36.702033] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.348 [2024-10-08 18:43:36.710119] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:08.348 [2024-10-08 18:43:36.718689] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.348 [2024-10-08 18:43:36.719516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.348 [2024-10-08 18:43:36.719586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:08.348 [2024-10-08 18:43:36.719626] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:08.348 [2024-10-08 18:43:36.720185] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:08.348 [2024-10-08 18:43:36.720751] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.348 [2024-10-08 18:43:36.720803] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.348 [2024-10-08 18:43:36.720837] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.348 [2024-10-08 18:43:36.728886] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:08.348 [2024-10-08 18:43:36.737433] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.348 [2024-10-08 18:43:36.738266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.348 [2024-10-08 18:43:36.738338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:08.348 [2024-10-08 18:43:36.738378] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:08.348 [2024-10-08 18:43:36.738933] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:08.348 [2024-10-08 18:43:36.739479] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.348 [2024-10-08 18:43:36.739531] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.348 [2024-10-08 18:43:36.739565] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.348 [2024-10-08 18:43:36.747601] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:08.348 [2024-10-08 18:43:36.756170] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.348 [2024-10-08 18:43:36.757032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.348 [2024-10-08 18:43:36.757103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:08.348 [2024-10-08 18:43:36.757143] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:08.348 [2024-10-08 18:43:36.757702] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:08.348 [2024-10-08 18:43:36.758247] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.348 [2024-10-08 18:43:36.758299] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.348 [2024-10-08 18:43:36.758334] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.348 [2024-10-08 18:43:36.766394] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:08.348 [2024-10-08 18:43:36.774988] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.348 [2024-10-08 18:43:36.775794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.348 [2024-10-08 18:43:36.775870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:08.348 [2024-10-08 18:43:36.775912] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:08.348 [2024-10-08 18:43:36.776446] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:08.348 [2024-10-08 18:43:36.777013] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.348 [2024-10-08 18:43:36.777066] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.348 [2024-10-08 18:43:36.777099] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.348 [2024-10-08 18:43:36.785158] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:08.348 [2024-10-08 18:43:36.793733] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.348 [2024-10-08 18:43:36.794536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.348 [2024-10-08 18:43:36.794606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:08.348 [2024-10-08 18:43:36.794646] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:08.348 [2024-10-08 18:43:36.795222] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:08.348 [2024-10-08 18:43:36.795795] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.348 [2024-10-08 18:43:36.795848] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.348 [2024-10-08 18:43:36.795882] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.348 [2024-10-08 18:43:36.803964] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:08.348 [2024-10-08 18:43:36.812514] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.348 [2024-10-08 18:43:36.813353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.348 [2024-10-08 18:43:36.813424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:08.348 [2024-10-08 18:43:36.813464] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:08.348 [2024-10-08 18:43:36.814025] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:08.348 [2024-10-08 18:43:36.814572] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.348 [2024-10-08 18:43:36.814623] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.348 [2024-10-08 18:43:36.814676] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.348 [2024-10-08 18:43:36.822733] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:08.348 [2024-10-08 18:43:36.831276] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.348 [2024-10-08 18:43:36.832093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.348 [2024-10-08 18:43:36.832162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:08.348 [2024-10-08 18:43:36.832202] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:08.348 [2024-10-08 18:43:36.832762] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:08.348 [2024-10-08 18:43:36.833303] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.348 [2024-10-08 18:43:36.833355] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.349 [2024-10-08 18:43:36.833389] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.349 [2024-10-08 18:43:36.841446] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:08.349 [2024-10-08 18:43:36.850009] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.349 [2024-10-08 18:43:36.850817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.349 [2024-10-08 18:43:36.850887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:08.349 [2024-10-08 18:43:36.850927] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:08.349 [2024-10-08 18:43:36.851459] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:08.349 [2024-10-08 18:43:36.852023] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.349 [2024-10-08 18:43:36.852076] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.349 [2024-10-08 18:43:36.852128] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.349 [2024-10-08 18:43:36.860186] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:08.349 [2024-10-08 18:43:36.868751] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.349 [2024-10-08 18:43:36.869549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.349 [2024-10-08 18:43:36.869619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:08.349 [2024-10-08 18:43:36.869680] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:08.349 [2024-10-08 18:43:36.870219] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:08.349 [2024-10-08 18:43:36.870782] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.349 [2024-10-08 18:43:36.870835] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.349 [2024-10-08 18:43:36.870868] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.349 [2024-10-08 18:43:36.878941] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:08.608 [2024-10-08 18:43:36.887791] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.608 [2024-10-08 18:43:36.888622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.608 [2024-10-08 18:43:36.888720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:08.608 [2024-10-08 18:43:36.888763] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:08.608 [2024-10-08 18:43:36.889317] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:08.608 [2024-10-08 18:43:36.889889] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.608 [2024-10-08 18:43:36.889943] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.608 [2024-10-08 18:43:36.889987] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.608 [2024-10-08 18:43:36.898068] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:08.608 4457.40 IOPS, 17.41 MiB/s [2024-10-08T16:43:37.145Z] [2024-10-08 18:43:36.909680] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:33:08.608 [2024-10-08 18:43:36.910638] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.608 [2024-10-08 18:43:36.911452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.608 [2024-10-08 18:43:36.911523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:08.608 [2024-10-08 18:43:36.911563] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:08.608 [2024-10-08 18:43:36.912122] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:08.608 [2024-10-08 18:43:36.912690] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.608 [2024-10-08 18:43:36.912743] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.608 [2024-10-08 18:43:36.912777] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.608 [2024-10-08 18:43:36.919793] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:08.608 [2024-10-08 18:43:36.927811] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.608 [2024-10-08 18:43:36.928537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.608 [2024-10-08 18:43:36.928607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:08.608 [2024-10-08 18:43:36.928647] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:08.608 [2024-10-08 18:43:36.928952] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:08.608 [2024-10-08 18:43:36.929496] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.608 [2024-10-08 18:43:36.929547] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.608 [2024-10-08 18:43:36.929580] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.608 [2024-10-08 18:43:36.937115] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:08.608 [2024-10-08 18:43:36.946621] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.608 [2024-10-08 18:43:36.947449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.608 [2024-10-08 18:43:36.947520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:08.608 [2024-10-08 18:43:36.947561] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:08.608 [2024-10-08 18:43:36.948122] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:08.609 [2024-10-08 18:43:36.948682] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.609 [2024-10-08 18:43:36.948737] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.609 [2024-10-08 18:43:36.948771] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.609 [2024-10-08 18:43:36.956815] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:08.609 [2024-10-08 18:43:36.965354] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.609 [2024-10-08 18:43:36.966249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.609 [2024-10-08 18:43:36.966320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:08.609 [2024-10-08 18:43:36.966359] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:08.609 [2024-10-08 18:43:36.966918] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:08.609 [2024-10-08 18:43:36.967463] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.609 [2024-10-08 18:43:36.967515] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.609 [2024-10-08 18:43:36.967549] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.609 [2024-10-08 18:43:36.975596] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:08.609 [2024-10-08 18:43:36.984170] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.609 [2024-10-08 18:43:36.984997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.609 [2024-10-08 18:43:36.985068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:08.609 [2024-10-08 18:43:36.985109] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:08.609 [2024-10-08 18:43:36.985685] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:08.609 [2024-10-08 18:43:36.986231] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.609 [2024-10-08 18:43:36.986283] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.609 [2024-10-08 18:43:36.986316] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.609 [2024-10-08 18:43:36.994366] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:08.609 [2024-10-08 18:43:37.002932] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.609 [2024-10-08 18:43:37.003746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.609 [2024-10-08 18:43:37.003818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:08.609 [2024-10-08 18:43:37.003859] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:08.609 [2024-10-08 18:43:37.004393] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:08.609 [2024-10-08 18:43:37.004981] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.609 [2024-10-08 18:43:37.005036] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.609 [2024-10-08 18:43:37.005070] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.609 [2024-10-08 18:43:37.013132] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:08.609 [2024-10-08 18:43:37.021688] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.609 [2024-10-08 18:43:37.022549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.609 [2024-10-08 18:43:37.022620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:08.609 [2024-10-08 18:43:37.022682] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:08.609 [2024-10-08 18:43:37.023220] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:08.609 [2024-10-08 18:43:37.023786] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.609 [2024-10-08 18:43:37.023838] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.609 [2024-10-08 18:43:37.023872] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.609 [2024-10-08 18:43:37.031908] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:08.609 [2024-10-08 18:43:37.040448] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.609 [2024-10-08 18:43:37.041278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.609 [2024-10-08 18:43:37.041347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:08.609 [2024-10-08 18:43:37.041387] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:08.609 [2024-10-08 18:43:37.041944] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:08.609 [2024-10-08 18:43:37.042485] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.609 [2024-10-08 18:43:37.042537] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.609 [2024-10-08 18:43:37.042591] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.609 [2024-10-08 18:43:37.050638] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:08.609 [2024-10-08 18:43:37.059192] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.609 [2024-10-08 18:43:37.060009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.609 [2024-10-08 18:43:37.060079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:08.609 [2024-10-08 18:43:37.060119] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:08.609 [2024-10-08 18:43:37.060674] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:08.609 [2024-10-08 18:43:37.061216] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.609 [2024-10-08 18:43:37.061267] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.609 [2024-10-08 18:43:37.061302] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.609 [2024-10-08 18:43:37.069355] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:08.609 [2024-10-08 18:43:37.077959] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.609 [2024-10-08 18:43:37.078741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.609 [2024-10-08 18:43:37.078813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:08.609 [2024-10-08 18:43:37.078854] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:08.609 [2024-10-08 18:43:37.079388] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:08.609 [2024-10-08 18:43:37.079950] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.609 [2024-10-08 18:43:37.080004] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.609 [2024-10-08 18:43:37.080038] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.609 [2024-10-08 18:43:37.088096] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:08.609 [2024-10-08 18:43:37.096681] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.609 [2024-10-08 18:43:37.097429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.609 [2024-10-08 18:43:37.097499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:08.609 [2024-10-08 18:43:37.097540] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:08.609 [2024-10-08 18:43:37.098100] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:08.609 [2024-10-08 18:43:37.098642] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.609 [2024-10-08 18:43:37.098719] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.609 [2024-10-08 18:43:37.098753] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.609 [2024-10-08 18:43:37.106845] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:08.609 [2024-10-08 18:43:37.113822] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.609 [2024-10-08 18:43:37.114225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.609 [2024-10-08 18:43:37.114257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:08.609 [2024-10-08 18:43:37.114275] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:08.609 [2024-10-08 18:43:37.114511] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:08.609 [2024-10-08 18:43:37.114764] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.609 [2024-10-08 18:43:37.114788] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.609 [2024-10-08 18:43:37.114803] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.609 [2024-10-08 18:43:37.118350] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:08.609 [2024-10-08 18:43:37.132827] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.609 [2024-10-08 18:43:37.133605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.609 [2024-10-08 18:43:37.133691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:08.609 [2024-10-08 18:43:37.133735] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:08.609 [2024-10-08 18:43:37.134269] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:08.609 [2024-10-08 18:43:37.134827] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.609 [2024-10-08 18:43:37.134881] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.609 [2024-10-08 18:43:37.134914] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.609 [2024-10-08 18:43:37.143125] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:08.869 [2024-10-08 18:43:37.151909] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.869 [2024-10-08 18:43:37.152717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.869 [2024-10-08 18:43:37.152791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:08.869 [2024-10-08 18:43:37.152833] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:08.869 [2024-10-08 18:43:37.153367] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:08.869 [2024-10-08 18:43:37.153936] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.869 [2024-10-08 18:43:37.153991] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.869 [2024-10-08 18:43:37.154024] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.869 [2024-10-08 18:43:37.162077] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:08.869 [2024-10-08 18:43:37.170630] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.869 [2024-10-08 18:43:37.171408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.869 [2024-10-08 18:43:37.171482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:08.869 [2024-10-08 18:43:37.171523] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:08.869 [2024-10-08 18:43:37.172077] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:08.869 [2024-10-08 18:43:37.172634] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.869 [2024-10-08 18:43:37.172704] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.869 [2024-10-08 18:43:37.172773] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.870 [2024-10-08 18:43:37.180825] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:08.870 [2024-10-08 18:43:37.189390] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.870 [2024-10-08 18:43:37.190315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.870 [2024-10-08 18:43:37.190386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:08.870 [2024-10-08 18:43:37.190426] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:08.870 [2024-10-08 18:43:37.190984] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:08.870 [2024-10-08 18:43:37.191530] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.870 [2024-10-08 18:43:37.191582] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.870 [2024-10-08 18:43:37.191615] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.870 [2024-10-08 18:43:37.199694] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:08.870 [2024-10-08 18:43:37.208296] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.870 [2024-10-08 18:43:37.209088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.870 [2024-10-08 18:43:37.209161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:08.870 [2024-10-08 18:43:37.209202] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:08.870 [2024-10-08 18:43:37.209759] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:08.870 [2024-10-08 18:43:37.210303] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.870 [2024-10-08 18:43:37.210366] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.870 [2024-10-08 18:43:37.210401] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.870 [2024-10-08 18:43:37.218452] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:08.870 [2024-10-08 18:43:37.226798] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.870 [2024-10-08 18:43:37.227558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.870 [2024-10-08 18:43:37.227627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:08.870 [2024-10-08 18:43:37.227687] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:08.870 [2024-10-08 18:43:37.228224] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:08.870 [2024-10-08 18:43:37.228788] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.870 [2024-10-08 18:43:37.228841] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.870 [2024-10-08 18:43:37.228876] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.870 [2024-10-08 18:43:37.236943] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:08.870 [2024-10-08 18:43:37.245496] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.870 [2024-10-08 18:43:37.246336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.870 [2024-10-08 18:43:37.246406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:08.870 [2024-10-08 18:43:37.246447] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:08.870 [2024-10-08 18:43:37.247011] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:08.870 [2024-10-08 18:43:37.247556] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.870 [2024-10-08 18:43:37.247607] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.870 [2024-10-08 18:43:37.247641] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.870 [2024-10-08 18:43:37.255710] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:08.870 [2024-10-08 18:43:37.264266] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.870 [2024-10-08 18:43:37.265042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.870 [2024-10-08 18:43:37.265112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:08.870 [2024-10-08 18:43:37.265152] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:08.870 [2024-10-08 18:43:37.265708] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:08.870 [2024-10-08 18:43:37.266249] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.870 [2024-10-08 18:43:37.266300] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.870 [2024-10-08 18:43:37.266333] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.870 [2024-10-08 18:43:37.274381] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:08.870 [2024-10-08 18:43:37.282947] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.870 [2024-10-08 18:43:37.283796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.870 [2024-10-08 18:43:37.283869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:08.870 [2024-10-08 18:43:37.283910] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:08.870 [2024-10-08 18:43:37.284444] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:08.870 [2024-10-08 18:43:37.285013] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.870 [2024-10-08 18:43:37.285066] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.870 [2024-10-08 18:43:37.285100] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.870 [2024-10-08 18:43:37.293162] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:08.870 [2024-10-08 18:43:37.301714] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.870 [2024-10-08 18:43:37.302531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.870 [2024-10-08 18:43:37.302621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:08.870 [2024-10-08 18:43:37.302685] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:08.870 [2024-10-08 18:43:37.303222] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:08.870 [2024-10-08 18:43:37.303792] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.870 [2024-10-08 18:43:37.303846] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.870 [2024-10-08 18:43:37.303880] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.870 [2024-10-08 18:43:37.311958] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:08.870 [2024-10-08 18:43:37.320501] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.870 [2024-10-08 18:43:37.321330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.870 [2024-10-08 18:43:37.321408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:08.870 [2024-10-08 18:43:37.321448] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:08.870 [2024-10-08 18:43:37.322006] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:08.870 [2024-10-08 18:43:37.322551] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.870 [2024-10-08 18:43:37.322602] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.870 [2024-10-08 18:43:37.322636] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.870 [2024-10-08 18:43:37.330699] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:08.870 [2024-10-08 18:43:37.339240] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.870 [2024-10-08 18:43:37.340052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.870 [2024-10-08 18:43:37.340122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:08.870 [2024-10-08 18:43:37.340164] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:08.870 [2024-10-08 18:43:37.340722] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:08.870 [2024-10-08 18:43:37.341263] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.870 [2024-10-08 18:43:37.341313] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.870 [2024-10-08 18:43:37.341347] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.870 [2024-10-08 18:43:37.349389] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:08.870 [2024-10-08 18:43:37.357945] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.870 [2024-10-08 18:43:37.358741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.870 [2024-10-08 18:43:37.358812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:08.870 [2024-10-08 18:43:37.358852] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:08.870 [2024-10-08 18:43:37.359386] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:08.870 [2024-10-08 18:43:37.359964] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.870 [2024-10-08 18:43:37.360018] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.870 [2024-10-08 18:43:37.360052] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.870 [2024-10-08 18:43:37.368094] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:08.870 [2024-10-08 18:43:37.376635] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.870 [2024-10-08 18:43:37.377469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.870 [2024-10-08 18:43:37.377539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:08.870 [2024-10-08 18:43:37.377579] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:08.870 [2024-10-08 18:43:37.378133] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:08.871 [2024-10-08 18:43:37.378697] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.871 [2024-10-08 18:43:37.378749] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.871 [2024-10-08 18:43:37.378782] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.871 [2024-10-08 18:43:37.386835] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:08.871 [2024-10-08 18:43:37.395389] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:08.871 [2024-10-08 18:43:37.396171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.871 [2024-10-08 18:43:37.396240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:08.871 [2024-10-08 18:43:37.396280] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:08.871 [2024-10-08 18:43:37.396838] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:08.871 [2024-10-08 18:43:37.397380] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:08.871 [2024-10-08 18:43:37.397432] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:08.871 [2024-10-08 18:43:37.397465] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:08.871 [2024-10-08 18:43:37.405729] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:09.130 [2024-10-08 18:43:37.414460] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.130 [2024-10-08 18:43:37.415300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.130 [2024-10-08 18:43:37.415372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:09.130 [2024-10-08 18:43:37.415412] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:09.130 [2024-10-08 18:43:37.415972] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:09.130 [2024-10-08 18:43:37.416519] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.130 [2024-10-08 18:43:37.416570] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.130 [2024-10-08 18:43:37.416605] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.130 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1337324 Killed "${NVMF_APP[@]}" "$@" 00:33:09.130 18:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:33:09.130 18:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:33:09.130 18:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:33:09.130 18:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:09.130 [2024-10-08 18:43:37.424673] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:09.130 18:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:09.130 18:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=1338388 00:33:09.130 18:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:09.130 18:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 1338388 00:33:09.130 18:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 1338388 ']' 00:33:09.130 18:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:09.130 18:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:09.130 18:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:09.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:09.130 18:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:09.130 18:43:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:09.130 [2024-10-08 18:43:37.433227] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.130 [2024-10-08 18:43:37.434023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.130 [2024-10-08 18:43:37.434095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:09.130 [2024-10-08 18:43:37.434137] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:09.130 [2024-10-08 18:43:37.434704] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:09.130 [2024-10-08 18:43:37.435247] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.130 [2024-10-08 18:43:37.435299] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.130 [2024-10-08 18:43:37.435334] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.130 [2024-10-08 18:43:37.443376] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:09.130 [2024-10-08 18:43:37.452100] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.130 [2024-10-08 18:43:37.452859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.130 [2024-10-08 18:43:37.452931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:09.130 [2024-10-08 18:43:37.452971] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:09.130 [2024-10-08 18:43:37.453505] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:09.130 [2024-10-08 18:43:37.454072] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.130 [2024-10-08 18:43:37.454125] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.130 [2024-10-08 18:43:37.454159] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.130 [2024-10-08 18:43:37.462207] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:09.130 [2024-10-08 18:43:37.471240] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.130 [2024-10-08 18:43:37.472027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.130 [2024-10-08 18:43:37.472098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:09.130 [2024-10-08 18:43:37.472139] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:09.130 [2024-10-08 18:43:37.472487] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:09.130 [2024-10-08 18:43:37.473063] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.130 [2024-10-08 18:43:37.473115] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.130 [2024-10-08 18:43:37.473150] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.130 [2024-10-08 18:43:37.481195] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:09.130 [2024-10-08 18:43:37.487803] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
00:33:09.130 [2024-10-08 18:43:37.487906] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:09.130 [2024-10-08 18:43:37.490220] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.130 [2024-10-08 18:43:37.491014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.130 [2024-10-08 18:43:37.491083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:09.130 [2024-10-08 18:43:37.491124] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:09.130 [2024-10-08 18:43:37.491675] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:09.130 [2024-10-08 18:43:37.492217] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.130 [2024-10-08 18:43:37.492270] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.130 [2024-10-08 18:43:37.492304] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.130 [2024-10-08 18:43:37.500353] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:09.130 [2024-10-08 18:43:37.509281] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.130 [2024-10-08 18:43:37.510054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.130 [2024-10-08 18:43:37.510125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:09.130 [2024-10-08 18:43:37.510165] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:09.130 [2024-10-08 18:43:37.510721] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:09.130 [2024-10-08 18:43:37.511266] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.130 [2024-10-08 18:43:37.511317] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.130 [2024-10-08 18:43:37.511350] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.131 [2024-10-08 18:43:37.519381] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:09.131 [2024-10-08 18:43:37.528429] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.131 [2024-10-08 18:43:37.529196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.131 [2024-10-08 18:43:37.529268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:09.131 [2024-10-08 18:43:37.529309] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:09.131 [2024-10-08 18:43:37.529865] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:09.131 [2024-10-08 18:43:37.530405] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.131 [2024-10-08 18:43:37.530456] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.131 [2024-10-08 18:43:37.530490] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.131 [2024-10-08 18:43:37.537342] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:09.131 [2024-10-08 18:43:37.545359] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.131 [2024-10-08 18:43:37.546002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.131 [2024-10-08 18:43:37.546074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:09.131 [2024-10-08 18:43:37.546114] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:09.131 [2024-10-08 18:43:37.546669] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:09.131 [2024-10-08 18:43:37.547013] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.131 [2024-10-08 18:43:37.547065] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.131 [2024-10-08 18:43:37.547098] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.131 [2024-10-08 18:43:37.553606] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:09.131 [2024-10-08 18:43:37.562412] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.131 [2024-10-08 18:43:37.563024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.131 [2024-10-08 18:43:37.563095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:09.131 [2024-10-08 18:43:37.563134] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:09.131 [2024-10-08 18:43:37.563704] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:09.131 [2024-10-08 18:43:37.564016] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.131 [2024-10-08 18:43:37.564069] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.131 [2024-10-08 18:43:37.564102] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.131 [2024-10-08 18:43:37.570705] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:09.131 [2024-10-08 18:43:37.579367] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.131 [2024-10-08 18:43:37.580021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.131 [2024-10-08 18:43:37.580091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:09.131 [2024-10-08 18:43:37.580143] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:09.131 [2024-10-08 18:43:37.580706] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:09.131 [2024-10-08 18:43:37.580947] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.131 [2024-10-08 18:43:37.581005] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.131 [2024-10-08 18:43:37.581041] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.131 [2024-10-08 18:43:37.587572] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:09.131 [2024-10-08 18:43:37.593758] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:09.131 [2024-10-08 18:43:37.596352] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.131 [2024-10-08 18:43:37.596998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.131 [2024-10-08 18:43:37.597067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:09.131 [2024-10-08 18:43:37.597107] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:09.131 [2024-10-08 18:43:37.597640] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:09.131 [2024-10-08 18:43:37.597983] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.131 [2024-10-08 18:43:37.598036] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.131 [2024-10-08 18:43:37.598071] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.131 [2024-10-08 18:43:37.604676] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:09.131 [2024-10-08 18:43:37.613308] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.131 [2024-10-08 18:43:37.614044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.131 [2024-10-08 18:43:37.614127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:09.131 [2024-10-08 18:43:37.614182] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:09.131 [2024-10-08 18:43:37.614736] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:09.131 [2024-10-08 18:43:37.615074] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.131 [2024-10-08 18:43:37.615128] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.131 [2024-10-08 18:43:37.615166] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.131 [2024-10-08 18:43:37.621739] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:09.131 [2024-10-08 18:43:37.630546] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.131 [2024-10-08 18:43:37.631089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.131 [2024-10-08 18:43:37.631161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:09.131 [2024-10-08 18:43:37.631203] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:09.131 [2024-10-08 18:43:37.631733] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:09.131 [2024-10-08 18:43:37.632064] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.131 [2024-10-08 18:43:37.632133] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.131 [2024-10-08 18:43:37.632170] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.131 [2024-10-08 18:43:37.638754] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:09.131 [2024-10-08 18:43:37.647503] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.131 [2024-10-08 18:43:37.648054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.131 [2024-10-08 18:43:37.648127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:09.131 [2024-10-08 18:43:37.648168] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:09.131 [2024-10-08 18:43:37.648722] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:09.131 [2024-10-08 18:43:37.649023] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.131 [2024-10-08 18:43:37.649078] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.131 [2024-10-08 18:43:37.649112] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.131 [2024-10-08 18:43:37.655710] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:09.131 [2024-10-08 18:43:37.664510] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.131 [2024-10-08 18:43:37.665008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.131 [2024-10-08 18:43:37.665046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:09.131 [2024-10-08 18:43:37.665068] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:09.131 [2024-10-08 18:43:37.665589] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:09.131 [2024-10-08 18:43:37.665915] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.131 [2024-10-08 18:43:37.665941] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.131 [2024-10-08 18:43:37.665957] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.391 [2024-10-08 18:43:37.672478] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:09.391 [2024-10-08 18:43:37.681619] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.391 [2024-10-08 18:43:37.682133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.391 [2024-10-08 18:43:37.682204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:09.391 [2024-10-08 18:43:37.682246] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:09.391 [2024-10-08 18:43:37.682764] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:09.391 [2024-10-08 18:43:37.683079] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.391 [2024-10-08 18:43:37.683132] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.391 [2024-10-08 18:43:37.683167] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.391 [2024-10-08 18:43:37.689639] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:09.391 [2024-10-08 18:43:37.698840] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.391 [2024-10-08 18:43:37.699613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.391 [2024-10-08 18:43:37.699709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:09.391 [2024-10-08 18:43:37.699731] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:09.391 [2024-10-08 18:43:37.700086] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:09.391 [2024-10-08 18:43:37.700630] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.391 [2024-10-08 18:43:37.700706] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.391 [2024-10-08 18:43:37.700724] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.391 [2024-10-08 18:43:37.707125] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:09.391 [2024-10-08 18:43:37.715899] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.391 [2024-10-08 18:43:37.716702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.391 [2024-10-08 18:43:37.716739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:09.391 [2024-10-08 18:43:37.716759] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:09.391 [2024-10-08 18:43:37.717090] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:09.391 [2024-10-08 18:43:37.717636] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.391 [2024-10-08 18:43:37.717712] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.391 [2024-10-08 18:43:37.717730] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.391 [2024-10-08 18:43:37.724055] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:09.391 [2024-10-08 18:43:37.733258] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.391 [2024-10-08 18:43:37.733907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.391 [2024-10-08 18:43:37.733938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:09.391 [2024-10-08 18:43:37.733956] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:09.391 [2024-10-08 18:43:37.734502] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:09.391 [2024-10-08 18:43:37.734867] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.391 [2024-10-08 18:43:37.734892] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.391 [2024-10-08 18:43:37.734909] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.391 [2024-10-08 18:43:37.741463] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:09.391 [2024-10-08 18:43:37.750144] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.391 [2024-10-08 18:43:37.750901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.391 [2024-10-08 18:43:37.750953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:09.391 [2024-10-08 18:43:37.751013] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:09.391 [2024-10-08 18:43:37.751550] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:09.391 [2024-10-08 18:43:37.751891] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.391 [2024-10-08 18:43:37.751916] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.391 [2024-10-08 18:43:37.751932] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.391 [2024-10-08 18:43:37.758377] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:09.391 [2024-10-08 18:43:37.767131] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.391 [2024-10-08 18:43:37.767897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.391 [2024-10-08 18:43:37.767928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:09.391 [2024-10-08 18:43:37.767946] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:09.391 [2024-10-08 18:43:37.768491] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:09.392 [2024-10-08 18:43:37.768873] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.392 [2024-10-08 18:43:37.768897] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.392 [2024-10-08 18:43:37.768913] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.392 [2024-10-08 18:43:37.775356] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:09.392 [2024-10-08 18:43:37.783545] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.392 [2024-10-08 18:43:37.783981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.392 [2024-10-08 18:43:37.784014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:09.392 [2024-10-08 18:43:37.784032] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:09.392 [2024-10-08 18:43:37.784271] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:09.392 [2024-10-08 18:43:37.784551] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.392 [2024-10-08 18:43:37.784604] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.392 [2024-10-08 18:43:37.784637] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.392 [2024-10-08 18:43:37.784906] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:09.392 [2024-10-08 18:43:37.784980] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:09.392 [2024-10-08 18:43:37.785017] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:09.392 [2024-10-08 18:43:37.785049] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:09.392 [2024-10-08 18:43:37.785075] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
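The app_setup_trace notices above spell out how to pull tracepoint data for this run (trace group mask 0xFFFF was enabled at startup via -e 0xFFFF, shm instance id 0). Following those notices literally on the test host, assuming the spdk_trace tool from the same build is on PATH:

    # snapshot events from the running app (app name nvmf, shm instance id 0)
    spdk_trace -s nvmf -i 0
    # or keep the shm trace file for offline analysis/debug, as the notice suggests
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0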
00:33:09.392 [2024-10-08 18:43:37.786875] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:33:09.392 [2024-10-08 18:43:37.786907] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:33:09.392 [2024-10-08 18:43:37.786912] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:33:09.392 [2024-10-08 18:43:37.789174] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:09.392 [2024-10-08 18:43:37.797380] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.392 [2024-10-08 18:43:37.797928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.392 [2024-10-08 18:43:37.797968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:09.392 [2024-10-08 18:43:37.797988] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:09.392 [2024-10-08 18:43:37.798240] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:09.392 [2024-10-08 18:43:37.798485] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.392 [2024-10-08 18:43:37.798509] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.392 [2024-10-08 18:43:37.798527] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.392 [2024-10-08 18:43:37.802090] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:09.392 [2024-10-08 18:43:37.811356] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.392 [2024-10-08 18:43:37.811903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.392 [2024-10-08 18:43:37.811943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:09.392 [2024-10-08 18:43:37.811974] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:09.392 [2024-10-08 18:43:37.812222] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:09.392 [2024-10-08 18:43:37.812467] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.392 [2024-10-08 18:43:37.812490] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.392 [2024-10-08 18:43:37.812508] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.392 [2024-10-08 18:43:37.816068] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
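The three reactor notices above line up with the -m 0xE core mask passed to nvmf_tgt and the earlier "Total cores available: 3" notice: 0xE is binary 1110, i.e. bits 1-3 set, so reactors run on cores 1, 2 and 3 while core 0 is excluded. A purely illustrative one-liner for decoding such a mask:

    # decode a core mask: 0xE -> cores 1, 2, 3
    mask=0xE
    for i in $(seq 0 7); do (( (mask >> i) & 1 )) && echo "core $i"; done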
00:33:09.392 [2024-10-08 18:43:37.825306] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.392 [2024-10-08 18:43:37.825814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.392 [2024-10-08 18:43:37.825855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:09.392 [2024-10-08 18:43:37.825876] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:09.392 [2024-10-08 18:43:37.826123] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:09.392 [2024-10-08 18:43:37.826367] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.392 [2024-10-08 18:43:37.826390] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.392 [2024-10-08 18:43:37.826408] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.392 [2024-10-08 18:43:37.829972] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:09.392 [2024-10-08 18:43:37.839215] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.392 [2024-10-08 18:43:37.839788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.392 [2024-10-08 18:43:37.839842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:09.392 [2024-10-08 18:43:37.839874] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:09.392 [2024-10-08 18:43:37.840126] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:09.392 [2024-10-08 18:43:37.840370] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.392 [2024-10-08 18:43:37.840393] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.392 [2024-10-08 18:43:37.840411] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.392 [2024-10-08 18:43:37.843974] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:09.392 [2024-10-08 18:43:37.853197] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.392 [2024-10-08 18:43:37.853696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.392 [2024-10-08 18:43:37.853744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:09.392 [2024-10-08 18:43:37.853765] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:09.392 [2024-10-08 18:43:37.854013] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:09.392 [2024-10-08 18:43:37.854256] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.392 [2024-10-08 18:43:37.854279] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.392 [2024-10-08 18:43:37.854296] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.392 [2024-10-08 18:43:37.857858] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:09.392 [2024-10-08 18:43:37.867091] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.392 [2024-10-08 18:43:37.867686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.392 [2024-10-08 18:43:37.867730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:09.392 [2024-10-08 18:43:37.867751] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:09.392 [2024-10-08 18:43:37.868001] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:09.392 [2024-10-08 18:43:37.868245] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.392 [2024-10-08 18:43:37.868268] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.392 [2024-10-08 18:43:37.868286] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.392 [2024-10-08 18:43:37.871847] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:09.392 [2024-10-08 18:43:37.881073] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.392 [2024-10-08 18:43:37.881589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.392 [2024-10-08 18:43:37.881626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:09.392 [2024-10-08 18:43:37.881646] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:09.392 [2024-10-08 18:43:37.881900] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:09.392 [2024-10-08 18:43:37.882142] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.392 [2024-10-08 18:43:37.882166] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.392 [2024-10-08 18:43:37.882193] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.392 [2024-10-08 18:43:37.885748] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:09.392 [2024-10-08 18:43:37.894967] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.392 [2024-10-08 18:43:37.895372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.392 [2024-10-08 18:43:37.895403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:09.392 [2024-10-08 18:43:37.895421] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:09.392 [2024-10-08 18:43:37.895668] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:09.392 [2024-10-08 18:43:37.895910] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.392 [2024-10-08 18:43:37.895933] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.392 [2024-10-08 18:43:37.895949] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.392 [2024-10-08 18:43:37.899499] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:09.392 3714.50 IOPS, 14.51 MiB/s [2024-10-08T16:43:37.929Z] [2024-10-08 18:43:37.909237] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.392 [2024-10-08 18:43:37.909675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.393 [2024-10-08 18:43:37.909708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:09.393 [2024-10-08 18:43:37.909726] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:09.393 [2024-10-08 18:43:37.909970] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:09.393 [2024-10-08 18:43:37.910210] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.393 [2024-10-08 18:43:37.910233] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.393 [2024-10-08 18:43:37.910249] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.393 [2024-10-08 18:43:37.913804] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:09.393 [2024-10-08 18:43:37.923246] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.393 [2024-10-08 18:43:37.923684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.393 [2024-10-08 18:43:37.923717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:09.393 [2024-10-08 18:43:37.923735] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:09.393 [2024-10-08 18:43:37.923973] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:09.393 [2024-10-08 18:43:37.924223] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.393 [2024-10-08 18:43:37.924247] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.393 [2024-10-08 18:43:37.924262] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.653 [2024-10-08 18:43:37.927876] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
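The periodic bdevperf sample above ("3714.50 IOPS, 14.51 MiB/s") is internally consistent with 4 KiB I/O: 14.51 MiB/s divided by 3714.50 ops/s works out to roughly 4096 bytes per operation. The block size is inferred from these two numbers, not stated in the log; a quick check:

    awk 'BEGIN { printf "%.0f bytes per I/O\n", 14.51 * 1048576 / 3714.50 }'
    # prints: 4096 bytes per I/O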
00:33:09.653 [2024-10-08 18:43:37.937164] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.653 [2024-10-08 18:43:37.937597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.653 [2024-10-08 18:43:37.937629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:09.653 [2024-10-08 18:43:37.937646] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:09.653 [2024-10-08 18:43:37.937896] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:09.653 [2024-10-08 18:43:37.938137] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.653 [2024-10-08 18:43:37.938160] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.653 [2024-10-08 18:43:37.938175] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.653 [2024-10-08 18:43:37.941726] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:09.653 [2024-10-08 18:43:37.951145] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.653 [2024-10-08 18:43:37.951579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.653 [2024-10-08 18:43:37.951610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:09.653 [2024-10-08 18:43:37.951628] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:09.653 [2024-10-08 18:43:37.951877] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:09.653 [2024-10-08 18:43:37.952117] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.653 [2024-10-08 18:43:37.952140] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.653 [2024-10-08 18:43:37.952156] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.653 [2024-10-08 18:43:37.955705] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:09.653 [2024-10-08 18:43:37.965116] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.653 [2024-10-08 18:43:37.965582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.653 [2024-10-08 18:43:37.965612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:09.653 [2024-10-08 18:43:37.965630] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:09.653 [2024-10-08 18:43:37.965877] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:09.653 [2024-10-08 18:43:37.966118] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.653 [2024-10-08 18:43:37.966141] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.653 [2024-10-08 18:43:37.966157] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.653 [2024-10-08 18:43:37.969709] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:09.653 [2024-10-08 18:43:37.979123] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.653 [2024-10-08 18:43:37.979560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.653 [2024-10-08 18:43:37.979602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:09.653 [2024-10-08 18:43:37.979619] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:09.653 [2024-10-08 18:43:37.979871] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:09.653 [2024-10-08 18:43:37.980113] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.653 [2024-10-08 18:43:37.980136] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.653 [2024-10-08 18:43:37.980151] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.653 [2024-10-08 18:43:37.983697] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:09.653 [2024-10-08 18:43:37.993115] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.653 [2024-10-08 18:43:37.993562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.653 [2024-10-08 18:43:37.993604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:09.653 [2024-10-08 18:43:37.993621] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:09.653 [2024-10-08 18:43:37.993869] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:09.653 [2024-10-08 18:43:37.994110] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.653 [2024-10-08 18:43:37.994133] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.653 [2024-10-08 18:43:37.994148] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.653 [2024-10-08 18:43:37.997695] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:09.653 [2024-10-08 18:43:38.007106] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.653 [2024-10-08 18:43:38.007539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.653 [2024-10-08 18:43:38.007571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:09.653 [2024-10-08 18:43:38.007589] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:09.653 [2024-10-08 18:43:38.007835] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:09.653 [2024-10-08 18:43:38.008077] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.653 [2024-10-08 18:43:38.008100] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.653 [2024-10-08 18:43:38.008115] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.653 [2024-10-08 18:43:38.011681] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:09.653 [2024-10-08 18:43:38.021096] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.653 [2024-10-08 18:43:38.021560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.653 [2024-10-08 18:43:38.021590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:09.653 [2024-10-08 18:43:38.021608] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:09.653 [2024-10-08 18:43:38.021855] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:09.653 [2024-10-08 18:43:38.022096] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.653 [2024-10-08 18:43:38.022119] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.653 [2024-10-08 18:43:38.022141] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.653 [2024-10-08 18:43:38.025689] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:09.653 [2024-10-08 18:43:38.035098] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.653 [2024-10-08 18:43:38.035550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.654 [2024-10-08 18:43:38.035582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:09.654 [2024-10-08 18:43:38.035600] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:09.654 [2024-10-08 18:43:38.035848] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:09.654 [2024-10-08 18:43:38.036089] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.654 [2024-10-08 18:43:38.036112] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.654 [2024-10-08 18:43:38.036127] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.654 [2024-10-08 18:43:38.039676] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:09.654 [2024-10-08 18:43:38.049090] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.654 [2024-10-08 18:43:38.049532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.654 [2024-10-08 18:43:38.049563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:09.654 [2024-10-08 18:43:38.049581] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:09.654 [2024-10-08 18:43:38.049828] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:09.654 [2024-10-08 18:43:38.050069] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.654 [2024-10-08 18:43:38.050092] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.654 [2024-10-08 18:43:38.050107] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.654 [2024-10-08 18:43:38.053647] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:09.654 [2024-10-08 18:43:38.063066] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.654 [2024-10-08 18:43:38.063521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.654 [2024-10-08 18:43:38.063552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:09.654 [2024-10-08 18:43:38.063570] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:09.654 [2024-10-08 18:43:38.063816] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:09.654 [2024-10-08 18:43:38.064057] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.654 [2024-10-08 18:43:38.064080] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.654 [2024-10-08 18:43:38.064095] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.654 [2024-10-08 18:43:38.067638] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:09.654 [2024-10-08 18:43:38.076859] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.654 [2024-10-08 18:43:38.077296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.654 [2024-10-08 18:43:38.077343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:09.654 [2024-10-08 18:43:38.077360] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:09.654 [2024-10-08 18:43:38.077572] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:09.654 [2024-10-08 18:43:38.077797] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.654 [2024-10-08 18:43:38.077818] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.654 [2024-10-08 18:43:38.077832] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.654 [2024-10-08 18:43:38.081067] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:09.654 [2024-10-08 18:43:38.090402] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.654 [2024-10-08 18:43:38.090843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.654 [2024-10-08 18:43:38.090871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:09.654 [2024-10-08 18:43:38.090887] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:09.654 [2024-10-08 18:43:38.091100] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:09.654 [2024-10-08 18:43:38.091316] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.654 [2024-10-08 18:43:38.091337] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.654 [2024-10-08 18:43:38.091351] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.654 [2024-10-08 18:43:38.094563] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:09.654 [2024-10-08 18:43:38.103896] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.654 [2024-10-08 18:43:38.104334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.654 [2024-10-08 18:43:38.104375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:09.654 [2024-10-08 18:43:38.104392] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:09.654 [2024-10-08 18:43:38.104605] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:09.654 [2024-10-08 18:43:38.104829] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.654 [2024-10-08 18:43:38.104850] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.654 [2024-10-08 18:43:38.104864] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.654 [2024-10-08 18:43:38.108103] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:09.654 [2024-10-08 18:43:38.117458] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.654 [2024-10-08 18:43:38.117905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.654 [2024-10-08 18:43:38.117933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:09.654 [2024-10-08 18:43:38.117949] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:09.654 [2024-10-08 18:43:38.118162] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:09.654 [2024-10-08 18:43:38.118384] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.654 [2024-10-08 18:43:38.118405] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.654 [2024-10-08 18:43:38.118419] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.654 [2024-10-08 18:43:38.121623] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:09.654 [2024-10-08 18:43:38.130963] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.654 [2024-10-08 18:43:38.131416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.654 [2024-10-08 18:43:38.131458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:09.654 [2024-10-08 18:43:38.131474] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:09.654 [2024-10-08 18:43:38.131695] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:09.654 [2024-10-08 18:43:38.131912] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.654 [2024-10-08 18:43:38.131933] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.654 [2024-10-08 18:43:38.131946] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.654 [2024-10-08 18:43:38.135182] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:09.654 [2024-10-08 18:43:38.144523] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.654 [2024-10-08 18:43:38.144926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.654 [2024-10-08 18:43:38.144954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:09.654 [2024-10-08 18:43:38.144970] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:09.654 [2024-10-08 18:43:38.145183] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:09.654 [2024-10-08 18:43:38.145399] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.654 [2024-10-08 18:43:38.145419] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.654 [2024-10-08 18:43:38.145433] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.654 [2024-10-08 18:43:38.148655] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:09.654 [2024-10-08 18:43:38.158159] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.654 [2024-10-08 18:43:38.158602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.654 [2024-10-08 18:43:38.158629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:09.654 [2024-10-08 18:43:38.158667] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:09.654 [2024-10-08 18:43:38.158881] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:09.654 [2024-10-08 18:43:38.159097] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.654 [2024-10-08 18:43:38.159117] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.654 [2024-10-08 18:43:38.159131] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.654 [2024-10-08 18:43:38.162374] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:09.654 [2024-10-08 18:43:38.171699] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.654 [2024-10-08 18:43:38.172130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.654 [2024-10-08 18:43:38.172172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:09.654 [2024-10-08 18:43:38.172188] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:09.654 [2024-10-08 18:43:38.172401] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:09.654 [2024-10-08 18:43:38.172617] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.654 [2024-10-08 18:43:38.172637] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.655 [2024-10-08 18:43:38.172659] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.655 [2024-10-08 18:43:38.175886] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:09.655 [2024-10-08 18:43:38.185252] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.655 [2024-10-08 18:43:38.185664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.655 [2024-10-08 18:43:38.185694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:09.655 [2024-10-08 18:43:38.185710] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:09.655 [2024-10-08 18:43:38.185932] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:09.655 [2024-10-08 18:43:38.186149] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.655 [2024-10-08 18:43:38.186170] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.655 [2024-10-08 18:43:38.186183] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.915 [2024-10-08 18:43:38.189472] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:09.915 [2024-10-08 18:43:38.198904] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.915 [2024-10-08 18:43:38.199380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.915 [2024-10-08 18:43:38.199408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:09.915 [2024-10-08 18:43:38.199439] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:09.915 [2024-10-08 18:43:38.199661] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:09.915 [2024-10-08 18:43:38.199878] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.915 [2024-10-08 18:43:38.199899] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.915 [2024-10-08 18:43:38.199913] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.915 [2024-10-08 18:43:38.203149] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:09.915 [2024-10-08 18:43:38.212501] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.915 [2024-10-08 18:43:38.212989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.915 [2024-10-08 18:43:38.213032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:09.915 [2024-10-08 18:43:38.213055] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:09.915 [2024-10-08 18:43:38.213270] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:09.915 [2024-10-08 18:43:38.213486] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.915 [2024-10-08 18:43:38.213507] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.915 [2024-10-08 18:43:38.213521] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.915 [2024-10-08 18:43:38.216746] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:09.915 [2024-10-08 18:43:38.226075] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.915 [2024-10-08 18:43:38.226527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.915 [2024-10-08 18:43:38.226569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:09.915 [2024-10-08 18:43:38.226586] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:09.915 [2024-10-08 18:43:38.226810] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:09.915 [2024-10-08 18:43:38.227026] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.915 [2024-10-08 18:43:38.227047] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.915 [2024-10-08 18:43:38.227061] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.915 [2024-10-08 18:43:38.230301] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:09.915 [2024-10-08 18:43:38.239617] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.915 [2024-10-08 18:43:38.240046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.915 [2024-10-08 18:43:38.240075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:09.915 [2024-10-08 18:43:38.240091] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:09.915 [2024-10-08 18:43:38.240305] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:09.915 [2024-10-08 18:43:38.240521] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.915 [2024-10-08 18:43:38.240542] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.915 [2024-10-08 18:43:38.240555] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.915 [2024-10-08 18:43:38.243754] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:09.915 [2024-10-08 18:43:38.253171] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.915 [2024-10-08 18:43:38.253546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.915 [2024-10-08 18:43:38.253575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:09.915 [2024-10-08 18:43:38.253591] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:09.915 [2024-10-08 18:43:38.253817] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:09.915 [2024-10-08 18:43:38.254035] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.915 [2024-10-08 18:43:38.254062] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.915 [2024-10-08 18:43:38.254078] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.915 [2024-10-08 18:43:38.257320] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:09.915 [2024-10-08 18:43:38.266672] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.915 [2024-10-08 18:43:38.267063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.915 [2024-10-08 18:43:38.267104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:09.915 [2024-10-08 18:43:38.267120] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:09.915 [2024-10-08 18:43:38.267346] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:09.915 [2024-10-08 18:43:38.267563] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.915 [2024-10-08 18:43:38.267583] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.915 [2024-10-08 18:43:38.267597] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.915 [2024-10-08 18:43:38.270843] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:09.915 [2024-10-08 18:43:38.280161] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.915 [2024-10-08 18:43:38.280553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.915 [2024-10-08 18:43:38.280595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:09.915 [2024-10-08 18:43:38.280610] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:09.915 [2024-10-08 18:43:38.280847] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:09.915 [2024-10-08 18:43:38.281064] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.915 [2024-10-08 18:43:38.281085] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.915 [2024-10-08 18:43:38.281099] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.915 [2024-10-08 18:43:38.284341] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:09.915 [2024-10-08 18:43:38.293700] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.915 [2024-10-08 18:43:38.294112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.915 [2024-10-08 18:43:38.294139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:09.915 [2024-10-08 18:43:38.294155] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:09.915 [2024-10-08 18:43:38.294384] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:09.915 [2024-10-08 18:43:38.294600] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.915 [2024-10-08 18:43:38.294620] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.915 [2024-10-08 18:43:38.294634] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.915 [2024-10-08 18:43:38.297869] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:09.915 [2024-10-08 18:43:38.307245] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.915 [2024-10-08 18:43:38.307644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.915 [2024-10-08 18:43:38.307680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:09.915 [2024-10-08 18:43:38.307696] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:09.915 [2024-10-08 18:43:38.307910] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:09.915 [2024-10-08 18:43:38.308126] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.915 [2024-10-08 18:43:38.308147] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.915 [2024-10-08 18:43:38.308161] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.915 [2024-10-08 18:43:38.311420] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:09.915 [2024-10-08 18:43:38.320766] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.915 [2024-10-08 18:43:38.321104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.915 [2024-10-08 18:43:38.321132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:09.915 [2024-10-08 18:43:38.321148] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:09.915 [2024-10-08 18:43:38.321362] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:09.915 [2024-10-08 18:43:38.321579] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.915 [2024-10-08 18:43:38.321599] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.915 [2024-10-08 18:43:38.321613] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.916 [2024-10-08 18:43:38.324856] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:09.916 [2024-10-08 18:43:38.334381] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.916 [2024-10-08 18:43:38.334722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.916 [2024-10-08 18:43:38.334751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:09.916 [2024-10-08 18:43:38.334767] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:09.916 [2024-10-08 18:43:38.334980] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:09.916 [2024-10-08 18:43:38.335196] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.916 [2024-10-08 18:43:38.335217] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.916 [2024-10-08 18:43:38.335231] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.916 [2024-10-08 18:43:38.338485] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:09.916 [2024-10-08 18:43:38.348037] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.916 [2024-10-08 18:43:38.348409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.916 [2024-10-08 18:43:38.348437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:09.916 [2024-10-08 18:43:38.348458] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:09.916 [2024-10-08 18:43:38.348682] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:09.916 [2024-10-08 18:43:38.348905] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.916 [2024-10-08 18:43:38.348926] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.916 [2024-10-08 18:43:38.348939] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.916 [2024-10-08 18:43:38.352181] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:09.916 [2024-10-08 18:43:38.361539] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.916 [2024-10-08 18:43:38.361882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.916 [2024-10-08 18:43:38.361911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:09.916 [2024-10-08 18:43:38.361927] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:09.916 [2024-10-08 18:43:38.362141] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:09.916 [2024-10-08 18:43:38.362357] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.916 [2024-10-08 18:43:38.362377] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.916 [2024-10-08 18:43:38.362390] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.916 [2024-10-08 18:43:38.365602] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:09.916 [2024-10-08 18:43:38.375156] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.916 [2024-10-08 18:43:38.375565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.916 [2024-10-08 18:43:38.375592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:09.916 [2024-10-08 18:43:38.375607] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:09.916 [2024-10-08 18:43:38.375845] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:09.916 [2024-10-08 18:43:38.376063] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.916 [2024-10-08 18:43:38.376083] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.916 [2024-10-08 18:43:38.376096] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.916 [2024-10-08 18:43:38.379337] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:09.916 [2024-10-08 18:43:38.388683] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.916 [2024-10-08 18:43:38.389064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.916 [2024-10-08 18:43:38.389106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:09.916 [2024-10-08 18:43:38.389121] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:09.916 [2024-10-08 18:43:38.389348] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:09.916 [2024-10-08 18:43:38.389565] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.916 [2024-10-08 18:43:38.389591] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.916 [2024-10-08 18:43:38.389605] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.916 [2024-10-08 18:43:38.392844] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:09.916 [2024-10-08 18:43:38.402175] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.916 [2024-10-08 18:43:38.402570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.916 [2024-10-08 18:43:38.402613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:09.916 [2024-10-08 18:43:38.402628] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:09.916 [2024-10-08 18:43:38.402863] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:09.916 [2024-10-08 18:43:38.403080] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.916 [2024-10-08 18:43:38.403101] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.916 [2024-10-08 18:43:38.403115] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.916 [2024-10-08 18:43:38.406389] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:09.916 [2024-10-08 18:43:38.415747] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.916 [2024-10-08 18:43:38.416151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.916 [2024-10-08 18:43:38.416179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:09.916 [2024-10-08 18:43:38.416194] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:09.916 [2024-10-08 18:43:38.416423] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:09.916 [2024-10-08 18:43:38.416638] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.916 [2024-10-08 18:43:38.416668] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.916 [2024-10-08 18:43:38.416682] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.916 [2024-10-08 18:43:38.419906] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:09.916 [2024-10-08 18:43:38.429281] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.916 [2024-10-08 18:43:38.429658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.916 [2024-10-08 18:43:38.429686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:09.916 [2024-10-08 18:43:38.429703] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:09.916 [2024-10-08 18:43:38.429917] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:09.916 [2024-10-08 18:43:38.430133] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.916 [2024-10-08 18:43:38.430153] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.916 [2024-10-08 18:43:38.430167] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.916 [2024-10-08 18:43:38.433408] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:09.916 [2024-10-08 18:43:38.442936] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:09.916 [2024-10-08 18:43:38.443361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.916 [2024-10-08 18:43:38.443404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:09.916 [2024-10-08 18:43:38.443420] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:09.916 [2024-10-08 18:43:38.443647] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:09.916 [2024-10-08 18:43:38.443874] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:09.916 [2024-10-08 18:43:38.443895] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:09.916 [2024-10-08 18:43:38.443908] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:09.916 [2024-10-08 18:43:38.447227] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:10.176 [2024-10-08 18:43:38.456509] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:10.176 [2024-10-08 18:43:38.456937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.176 [2024-10-08 18:43:38.456966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:10.176 [2024-10-08 18:43:38.456983] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:10.176 [2024-10-08 18:43:38.457196] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:10.176 [2024-10-08 18:43:38.457412] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:10.176 [2024-10-08 18:43:38.457434] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:10.176 [2024-10-08 18:43:38.457448] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:10.176 [2024-10-08 18:43:38.460673] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:10.176 [2024-10-08 18:43:38.470144] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:10.176 [2024-10-08 18:43:38.470558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.176 [2024-10-08 18:43:38.470585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:10.176 [2024-10-08 18:43:38.470601] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:10.176 [2024-10-08 18:43:38.470838] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:10.176 [2024-10-08 18:43:38.471055] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:10.176 [2024-10-08 18:43:38.471076] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:10.176 [2024-10-08 18:43:38.471090] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:10.176 [2024-10-08 18:43:38.474367] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:10.176 [2024-10-08 18:43:38.483712] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:10.176 [2024-10-08 18:43:38.484122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.176 [2024-10-08 18:43:38.484150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:10.176 [2024-10-08 18:43:38.484181] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:10.176 [2024-10-08 18:43:38.484401] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:10.176 [2024-10-08 18:43:38.484617] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:10.176 [2024-10-08 18:43:38.484639] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:10.176 [2024-10-08 18:43:38.484662] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:10.176 [2024-10-08 18:43:38.487889] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:10.176 [2024-10-08 18:43:38.497230] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:10.176 [2024-10-08 18:43:38.497610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.176 [2024-10-08 18:43:38.497639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:10.176 [2024-10-08 18:43:38.497663] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:10.176 [2024-10-08 18:43:38.497885] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:10.176 [2024-10-08 18:43:38.498101] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:10.176 [2024-10-08 18:43:38.498122] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:10.176 [2024-10-08 18:43:38.498136] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:10.176 [2024-10-08 18:43:38.501377] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:10.176 [2024-10-08 18:43:38.510671] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:10.176 [2024-10-08 18:43:38.511019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.176 [2024-10-08 18:43:38.511062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:10.176 [2024-10-08 18:43:38.511077] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:10.176 [2024-10-08 18:43:38.511284] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:10.177 [2024-10-08 18:43:38.511493] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:10.177 [2024-10-08 18:43:38.511513] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:10.177 [2024-10-08 18:43:38.511526] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:10.177 [2024-10-08 18:43:38.514690] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:10.177 [2024-10-08 18:43:38.524399] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:10.177 [2024-10-08 18:43:38.524838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.177 [2024-10-08 18:43:38.524881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:10.177 [2024-10-08 18:43:38.524898] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:10.177 [2024-10-08 18:43:38.525132] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:10.177 [2024-10-08 18:43:38.525342] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:10.177 [2024-10-08 18:43:38.525363] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:10.177 [2024-10-08 18:43:38.525381] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:10.177 [2024-10-08 18:43:38.528531] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:10.177 [2024-10-08 18:43:38.537943] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:10.177 [2024-10-08 18:43:38.538318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.177 [2024-10-08 18:43:38.538346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:10.177 [2024-10-08 18:43:38.538361] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:10.177 [2024-10-08 18:43:38.538568] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:10.177 [2024-10-08 18:43:38.538808] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:10.177 [2024-10-08 18:43:38.538829] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:10.177 [2024-10-08 18:43:38.538843] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:10.177 [2024-10-08 18:43:38.542042] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:10.177 [2024-10-08 18:43:38.551439] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:10.177 [2024-10-08 18:43:38.551844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.177 [2024-10-08 18:43:38.551873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:10.177 [2024-10-08 18:43:38.551890] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:10.177 [2024-10-08 18:43:38.552104] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:10.177 [2024-10-08 18:43:38.552320] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:10.177 [2024-10-08 18:43:38.552341] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:10.177 [2024-10-08 18:43:38.552355] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:10.177 [2024-10-08 18:43:38.555565] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:10.177 [2024-10-08 18:43:38.564851] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:10.177 [2024-10-08 18:43:38.565240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.177 [2024-10-08 18:43:38.565280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:10.177 [2024-10-08 18:43:38.565295] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:10.177 [2024-10-08 18:43:38.565515] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:10.177 [2024-10-08 18:43:38.565754] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:10.177 [2024-10-08 18:43:38.565777] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:10.177 [2024-10-08 18:43:38.565790] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:10.177 [2024-10-08 18:43:38.568978] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:10.177 [2024-10-08 18:43:38.578272] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:10.177 [2024-10-08 18:43:38.578665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.177 [2024-10-08 18:43:38.578712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:10.177 [2024-10-08 18:43:38.578729] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:10.177 [2024-10-08 18:43:38.578957] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:10.177 [2024-10-08 18:43:38.579183] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:10.177 [2024-10-08 18:43:38.579203] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:10.177 [2024-10-08 18:43:38.579216] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:10.177 [2024-10-08 18:43:38.582364] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:10.177 [2024-10-08 18:43:38.591671] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:10.177 [2024-10-08 18:43:38.592037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.177 [2024-10-08 18:43:38.592079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:10.177 [2024-10-08 18:43:38.592094] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:10.177 [2024-10-08 18:43:38.592315] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:10.177 [2024-10-08 18:43:38.592524] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:10.177 [2024-10-08 18:43:38.592544] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:10.177 [2024-10-08 18:43:38.592558] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:10.177 [2024-10-08 18:43:38.595719] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:10.177 [2024-10-08 18:43:38.605192] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:10.177 [2024-10-08 18:43:38.605557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.177 [2024-10-08 18:43:38.605598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:10.177 [2024-10-08 18:43:38.605614] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:10.177 [2024-10-08 18:43:38.605851] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:10.177 [2024-10-08 18:43:38.606079] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:10.177 [2024-10-08 18:43:38.606101] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:10.177 [2024-10-08 18:43:38.606114] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:10.177 [2024-10-08 18:43:38.609278] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:10.177 [2024-10-08 18:43:38.618697] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:10.177 [2024-10-08 18:43:38.619112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.177 [2024-10-08 18:43:38.619139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:10.177 [2024-10-08 18:43:38.619154] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:10.177 [2024-10-08 18:43:38.619379] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:10.177 [2024-10-08 18:43:38.619598] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:10.177 [2024-10-08 18:43:38.619619] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:10.177 [2024-10-08 18:43:38.619647] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:10.177 [2024-10-08 18:43:38.622805] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:10.177 [2024-10-08 18:43:38.632181] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:10.177 [2024-10-08 18:43:38.632536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.177 [2024-10-08 18:43:38.632564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:10.177 [2024-10-08 18:43:38.632579] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:10.177 [2024-10-08 18:43:38.632818] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:10.177 [2024-10-08 18:43:38.633049] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:10.177 [2024-10-08 18:43:38.633070] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:10.177 [2024-10-08 18:43:38.633083] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:10.177 [2024-10-08 18:43:38.636246] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:10.177 [2024-10-08 18:43:38.645739] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:10.177 [2024-10-08 18:43:38.646118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.177 [2024-10-08 18:43:38.646160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:10.177 [2024-10-08 18:43:38.646176] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:10.177 [2024-10-08 18:43:38.646403] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:10.177 [2024-10-08 18:43:38.646619] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:10.177 [2024-10-08 18:43:38.646640] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:10.177 [2024-10-08 18:43:38.646662] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:10.177 [2024-10-08 18:43:38.649885] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:10.177 [2024-10-08 18:43:38.659214] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:10.177 [2024-10-08 18:43:38.659622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.177 [2024-10-08 18:43:38.659657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:10.178 [2024-10-08 18:43:38.659689] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:10.178 [2024-10-08 18:43:38.659903] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:10.178 [2024-10-08 18:43:38.660119] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:10.178 [2024-10-08 18:43:38.660140] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:10.178 [2024-10-08 18:43:38.660153] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:10.178 [2024-10-08 18:43:38.663411] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
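Every cycle in the long block above has the same anatomy: nvme_ctrlr_disconnect tears the qpair down, the reconnect's connect() is refused because nothing is listening on 10.0.0.2:4420 yet, the subsequent flush sees a dead fd (EBADF), controller reinitialization fails, and bdev_nvme abandons that reset attempt before scheduling the next one roughly every 13 ms in this run. A small, self-contained sketch of how ECONNREFUSED surfaces from userspace; the address and port mirror the log purely for illustration, and any closed local port would behave the same way:

    import errno
    import socket

    TARGET = ("10.0.0.2", 4420)  # taken from the log; substitute any closed port to experiment

    def probe(addr, timeout=1.0):
        """Attempt one TCP connect and return the resulting errno (0 on success)."""
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            try:
                s.connect(addr)
                return 0
            except OSError as exc:
                # A reachable host with a closed port typically yields 111 (ECONNREFUSED),
                # matching the posix_sock_create errors above; an unreachable host may
                # time out instead, in which case exc.errno is None.
                return exc.errno

    if __name__ == "__main__":
        rc = probe(TARGET)
        print("connect() errno =", rc, errno.errorcode.get(rc, ""))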
00:33:10.178 18:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:33:10.178 18:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0
00:33:10.178 18:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:33:10.178 18:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable
00:33:10.178 18:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:33:10.178 [2024-10-08 18:43:38.672716] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:10.178 [2024-10-08 18:43:38.673124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.178 [2024-10-08 18:43:38.673151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420
00:33:10.178 [2024-10-08 18:43:38.673180] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set
00:33:10.178 [2024-10-08 18:43:38.673388] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor
00:33:10.178 [2024-10-08 18:43:38.673597] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:10.178 [2024-10-08 18:43:38.673617] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:10.178 [2024-10-08 18:43:38.673645] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:10.178 [2024-10-08 18:43:38.676820] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:10.178 [2024-10-08 18:43:38.686135] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:10.178 [2024-10-08 18:43:38.686518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:10.178 [2024-10-08 18:43:38.686560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420
00:33:10.178 [2024-10-08 18:43:38.686576] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set
00:33:10.178 [2024-10-08 18:43:38.686826] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor
00:33:10.178 [2024-10-08 18:43:38.687056] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:10.178 [2024-10-08 18:43:38.687077] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:10.178 [2024-10-08 18:43:38.687091] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:10.178 [2024-10-08 18:43:38.690272] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
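The first xtrace entries above look like the tail of the helper in common/autotest_common.sh that waits for the freshly started nvmf_tgt to answer: a countdown loop where "(( i == 0 ))" is the "retries exhausted" check and "return 0" is the success path, after which nvmf/common.sh closes the start_nvmf_tgt timing block. The following is only a rough Python rendering of that countdown pattern; the retry count, poll interval, and probe function are assumptions, not the actual shell helper:

    import time

    def wait_for_target(probe, retries=40, interval=0.5):
        """Poll probe() until it succeeds or the countdown reaches zero.

        Success means the counter never hit zero before the probe passed,
        which is the shape the xtrace above suggests.
        """
        i = retries
        while i != 0:
            if probe():          # e.g. an RPC answering on the target's socket
                return 0         # corresponds to the 'return 0' in the trace
            time.sleep(interval)
            i -= 1
        return 1                 # (( i == 0 )): retries exhausted, target never came up

    if __name__ == "__main__":
        # Trivial stand-in probe so the sketch is runnable on its own.
        attempts = iter([False, False, True])
        print(wait_for_target(lambda: next(attempts), retries=5, interval=0.01))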
00:33:10.178 18:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:10.178 18:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:10.178 18:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:10.178 18:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:10.178 [2024-10-08 18:43:38.697648] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:10.178 [2024-10-08 18:43:38.699609] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:10.178 [2024-10-08 18:43:38.700080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.178 [2024-10-08 18:43:38.700109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:10.178 [2024-10-08 18:43:38.700139] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:10.178 [2024-10-08 18:43:38.700345] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:10.178 [2024-10-08 18:43:38.700547] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:10.178 [2024-10-08 18:43:38.700566] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:10.178 [2024-10-08 18:43:38.700579] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:10.178 [2024-10-08 18:43:38.703713] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:10.437 [2024-10-08 18:43:38.713090] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:10.437 [2024-10-08 18:43:38.713498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.437 [2024-10-08 18:43:38.713524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:10.437 [2024-10-08 18:43:38.713560] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:10.437 [2024-10-08 18:43:38.713833] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:10.437 [2024-10-08 18:43:38.714079] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:10.437 [2024-10-08 18:43:38.714100] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:10.437 [2024-10-08 18:43:38.714112] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:10.437 [2024-10-08 18:43:38.717236] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:10.437 18:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:10.437 18:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:10.437 18:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:10.437 18:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:10.437 [2024-10-08 18:43:38.726765] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:10.437 [2024-10-08 18:43:38.727199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.437 [2024-10-08 18:43:38.727240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:10.437 [2024-10-08 18:43:38.727256] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:10.437 [2024-10-08 18:43:38.727462] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:10.437 [2024-10-08 18:43:38.727699] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:10.437 [2024-10-08 18:43:38.727721] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:10.437 [2024-10-08 18:43:38.727734] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:10.437 [2024-10-08 18:43:38.730928] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:10.437 [2024-10-08 18:43:38.740218] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:10.437 [2024-10-08 18:43:38.740739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.437 [2024-10-08 18:43:38.740779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:10.437 [2024-10-08 18:43:38.740798] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:10.437 [2024-10-08 18:43:38.741045] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:10.437 [2024-10-08 18:43:38.741270] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:10.437 [2024-10-08 18:43:38.741291] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:10.437 [2024-10-08 18:43:38.741308] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:10.437 [2024-10-08 18:43:38.744500] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:10.438 Malloc0 00:33:10.438 18:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:10.438 18:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:10.438 18:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:10.438 18:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:10.438 [2024-10-08 18:43:38.753765] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:10.438 [2024-10-08 18:43:38.754187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.438 [2024-10-08 18:43:38.754224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:10.438 [2024-10-08 18:43:38.754241] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:10.438 [2024-10-08 18:43:38.754456] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:10.438 [2024-10-08 18:43:38.754684] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:10.438 [2024-10-08 18:43:38.754705] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:10.438 [2024-10-08 18:43:38.754720] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:10.438 18:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:10.438 18:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:10.438 18:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:10.438 18:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:10.438 [2024-10-08 18:43:38.757975] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:10.438 18:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:10.438 18:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:10.438 18:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:10.438 18:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:10.438 [2024-10-08 18:43:38.767453] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:10.438 [2024-10-08 18:43:38.767911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:10.438 [2024-10-08 18:43:38.767940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e9100 with addr=10.0.0.2, port=4420 00:33:10.438 [2024-10-08 18:43:38.767972] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e9100 is same with the state(6) to be set 00:33:10.438 [2024-10-08 18:43:38.768178] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e9100 (9): Bad file descriptor 00:33:10.438 [2024-10-08 18:43:38.768388] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:10.438 [2024-10-08 18:43:38.768408] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:10.438 [2024-10-08 18:43:38.768428] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:10.438 [2024-10-08 18:43:38.768546] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:10.438 [2024-10-08 18:43:38.771629] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:10.438 18:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:10.438 18:43:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1337727 00:33:10.438 [2024-10-08 18:43:38.780922] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:10.438 [2024-10-08 18:43:38.817345] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
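Collected in one place, the target bring-up performed by the interleaved rpc_cmd lines above is the following sequence (a sketch only: the test drives these through rpc_cmd from host/bdevperf.sh, and the arguments are reproduced verbatim from the log; the standalone scripts/rpc.py form shown here is a hypothetical equivalent):

# Hypothetical standalone form of the RPC calls visible above
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the listener is up, the pending resets succeed ("Resetting controller successful") and the bdevperf run proceeds.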
00:33:11.372 3297.14 IOPS, 12.88 MiB/s [2024-10-08T16:43:41.284Z] 3980.50 IOPS, 15.55 MiB/s [2024-10-08T16:43:42.217Z] 4499.00 IOPS, 17.57 MiB/s [2024-10-08T16:43:43.152Z] 4925.60 IOPS, 19.24 MiB/s [2024-10-08T16:43:44.087Z] 5278.91 IOPS, 20.62 MiB/s [2024-10-08T16:43:45.020Z] 5568.58 IOPS, 21.75 MiB/s [2024-10-08T16:43:45.954Z] 5819.46 IOPS, 22.73 MiB/s [2024-10-08T16:43:47.328Z] 6030.36 IOPS, 23.56 MiB/s [2024-10-08T16:43:47.328Z] 6217.33 IOPS, 24.29 MiB/s 00:33:18.791 Latency(us) 00:33:18.791 [2024-10-08T16:43:47.328Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:18.791 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:18.791 Verification LBA range: start 0x0 length 0x4000 00:33:18.791 Nvme1n1 : 15.02 6219.19 24.29 6438.08 0.00 10080.07 794.93 32816.55 00:33:18.791 [2024-10-08T16:43:47.328Z] =================================================================================================================== 00:33:18.791 [2024-10-08T16:43:47.328Z] Total : 6219.19 24.29 6438.08 0.00 10080.07 794.93 32816.55 00:33:18.791 18:43:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:33:18.791 18:43:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:18.791 18:43:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:18.791 18:43:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:18.791 18:43:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:18.791 18:43:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:33:18.791 18:43:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:33:18.791 18:43:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@514 -- # nvmfcleanup 00:33:18.791 18:43:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:33:18.791 18:43:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:18.791 18:43:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:33:18.791 18:43:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:18.791 18:43:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:18.791 rmmod nvme_tcp 00:33:18.791 rmmod nvme_fabrics 00:33:18.791 rmmod nvme_keyring 00:33:18.791 18:43:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:18.791 18:43:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:33:18.791 18:43:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:33:18.791 18:43:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@515 -- # '[' -n 1338388 ']' 00:33:18.791 18:43:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # killprocess 1338388 00:33:18.791 18:43:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 1338388 ']' 00:33:18.791 18:43:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 1338388 00:33:18.791 18:43:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:33:18.791 18:43:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:18.791 18:43:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1338388 
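In the Latency table above, the MiB/s column is consistent with IOPS multiplied by the 4096-byte I/O size stated in the job line; for the final average this works out as follows (a quick arithmetic check, not part of the captured output):

# 6219.19 IOPS * 4096 B per I/O / 1 MiB -> ~24.29 MiB/s, matching the table
awk 'BEGIN { printf "%.2f MiB/s\n", 6219.19 * 4096 / (1024*1024) }'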
00:33:19.049 18:43:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:19.049 18:43:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:19.049 18:43:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1338388' 00:33:19.049 killing process with pid 1338388 00:33:19.049 18:43:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 1338388 00:33:19.049 18:43:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 1338388 00:33:19.308 18:43:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:19.308 18:43:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:19.308 18:43:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:19.308 18:43:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:33:19.308 18:43:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # iptables-save 00:33:19.308 18:43:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:19.308 18:43:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # iptables-restore 00:33:19.308 18:43:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:19.308 18:43:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:19.308 18:43:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:19.308 18:43:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:19.308 18:43:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:21.849 18:43:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:21.849 00:33:21.849 real 0m24.469s 00:33:21.849 user 1m2.356s 00:33:21.849 sys 0m5.618s 00:33:21.849 18:43:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:21.849 18:43:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:21.849 ************************************ 00:33:21.849 END TEST nvmf_bdevperf 00:33:21.849 ************************************ 00:33:21.849 18:43:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:33:21.849 18:43:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:21.849 18:43:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:21.849 18:43:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.849 ************************************ 00:33:21.849 START TEST nvmf_target_disconnect 00:33:21.849 ************************************ 00:33:21.849 18:43:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:33:21.849 * Looking for test storage... 
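Condensed into plain commands, the nvmftestfini teardown traced above unloads the host-side NVMe-oF modules, strips the SPDK-tagged iptables rules, and removes the target namespace (a sketch under assumptions: the module names, namespace name, and interface come straight from the log; that the iptables commands are piped together and that _remove_spdk_ns issues "ip netns delete" are assumptions about the helper functions, which the log does not show expanded):

# Host-side cleanup, mirroring the traced nvmftestfini steps (assumed standalone form)
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
iptables-save | grep -v SPDK_NVMF | iptables-restore   # assumed pipeline inside iptr()
ip netns delete cvl_0_0_ns_spdk                        # assumed body of _remove_spdk_ns
ip -4 addr flush cvl_0_1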
00:33:21.849 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:21.849 18:43:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:33:21.849 18:43:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:33:21.849 18:43:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # lcov --version 00:33:21.849 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:33:21.849 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:21.849 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:21.849 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:21.849 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:33:21.849 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:33:21.849 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:33:21.849 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:33:21.849 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:33:21.849 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:33:21.849 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:33:21.849 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:21.849 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:33:21.849 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:33:21.849 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:21.849 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:21.849 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:33:21.849 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:33:21.849 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:21.849 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:33:21.849 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:33:21.849 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:33:21.849 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:33:21.849 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:21.849 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:33:21.849 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:33:21.849 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:21.849 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:21.849 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:33:21.849 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:21.849 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:33:21.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:21.849 --rc genhtml_branch_coverage=1 00:33:21.849 --rc genhtml_function_coverage=1 00:33:21.849 --rc genhtml_legend=1 00:33:21.849 --rc geninfo_all_blocks=1 00:33:21.849 --rc geninfo_unexecuted_blocks=1 00:33:21.849 00:33:21.849 ' 00:33:21.849 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:33:21.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:21.849 --rc genhtml_branch_coverage=1 00:33:21.849 --rc genhtml_function_coverage=1 00:33:21.849 --rc genhtml_legend=1 00:33:21.849 --rc geninfo_all_blocks=1 00:33:21.849 --rc geninfo_unexecuted_blocks=1 00:33:21.849 00:33:21.849 ' 00:33:21.849 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:33:21.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:21.849 --rc genhtml_branch_coverage=1 00:33:21.849 --rc genhtml_function_coverage=1 00:33:21.849 --rc genhtml_legend=1 00:33:21.849 --rc geninfo_all_blocks=1 00:33:21.849 --rc geninfo_unexecuted_blocks=1 00:33:21.849 00:33:21.849 ' 00:33:21.849 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:33:21.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:21.849 --rc genhtml_branch_coverage=1 00:33:21.849 --rc genhtml_function_coverage=1 00:33:21.849 --rc genhtml_legend=1 00:33:21.849 --rc geninfo_all_blocks=1 00:33:21.849 --rc geninfo_unexecuted_blocks=1 00:33:21.849 00:33:21.849 ' 00:33:21.849 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:21.849 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:33:21.849 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:21.849 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:21.849 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:21.849 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:21.849 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:21.849 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:21.849 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:21.849 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:21.849 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:21.849 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:21.849 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:33:21.849 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:33:21.849 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:21.849 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:21.849 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:21.849 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:21.849 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:21.849 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:33:21.849 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:21.849 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:21.849 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:21.850 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:21.850 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:21.850 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:21.850 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:33:21.850 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:21.850 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:33:21.850 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:21.850 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:21.850 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:21.850 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:21.850 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:21.850 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:21.850 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:21.850 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:21.850 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:21.850 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:21.850 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:33:21.850 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:33:21.850 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:33:21.850 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:33:21.850 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:33:21.850 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:21.850 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:21.850 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:21.850 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:33:21.850 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:21.850 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:21.850 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:21.850 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:21.850 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:33:21.850 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:33:21.850 18:43:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:25.141 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:25.141 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:33:25.141 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:25.141 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:25.141 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:25.141 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:25.141 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:25.141 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:33:25.141 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:25.141 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:33:25.141 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:33:25.141 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:33:25.142 Found 0000:84:00.0 (0x8086 - 0x159b) 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:33:25.142 Found 0000:84:00.1 (0x8086 - 0x159b) 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:33:25.142 Found net devices under 0000:84:00.0: cvl_0_0 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:33:25.142 Found net devices under 0000:84:00.1: cvl_0_1 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
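The PCI scan above found the two E810 ports (0000:84:00.0 and 0000:84:00.1, device id 0x159b, ice driver) exposed as cvl_0_0 and cvl_0_1; the lines that follow split them into a back-to-back target/initiator pair, moving cvl_0_0 into a private namespace with 10.0.0.2 while cvl_0_1 keeps 10.0.0.1 on the host side. The mapping can be reproduced by hand (sketch; the bus addresses are the ones from the discovery output, and the expected strings are what that output implies):

# Confirm the driver and the netdev name behind each E810 port
lspci -s 84:00.0 -k                          # expect "Kernel driver in use: ice"
ls /sys/bus/pci/devices/0000:84:00.0/net/    # -> cvl_0_0, per the discovery above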
00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:25.142 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:25.142 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.303 ms 00:33:25.142 00:33:25.142 --- 10.0.0.2 ping statistics --- 00:33:25.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:25.142 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:25.142 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:25.142 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:33:25.142 00:33:25.142 --- 10.0.0.1 ping statistics --- 00:33:25.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:25.142 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # return 0 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:25.142 ************************************ 00:33:25.142 START TEST nvmf_target_disconnect_tc1 00:33:25.142 ************************************ 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:25.142 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:33:25.143 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:25.143 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:25.143 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:25.143 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:25.143 18:43:53 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:25.143 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:25.143 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:25.143 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:25.143 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:33:25.143 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:25.143 [2024-10-08 18:43:53.639580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:25.143 [2024-10-08 18:43:53.639735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13d2620 with addr=10.0.0.2, port=4420 00:33:25.143 [2024-10-08 18:43:53.639786] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:25.143 [2024-10-08 18:43:53.639813] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:25.143 [2024-10-08 18:43:53.639832] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:33:25.143 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:33:25.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:33:25.143 Initializing NVMe Controllers 00:33:25.143 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:33:25.143 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:25.143 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:25.143 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:25.143 00:33:25.143 real 0m0.203s 00:33:25.143 user 0m0.081s 00:33:25.143 sys 0m0.120s 00:33:25.143 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:25.143 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:33:25.143 ************************************ 00:33:25.143 END TEST nvmf_target_disconnect_tc1 00:33:25.143 ************************************ 00:33:25.402 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:33:25.403 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:33:25.403 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:33:25.403 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:25.403 ************************************ 00:33:25.403 START TEST nvmf_target_disconnect_tc2 00:33:25.403 ************************************ 00:33:25.403 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:33:25.403 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:33:25.403 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:33:25.403 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:33:25.403 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:25.403 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:25.403 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=1341695 00:33:25.403 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:33:25.403 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 1341695 00:33:25.403 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1341695 ']' 00:33:25.403 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:25.403 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:25.403 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:25.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:25.403 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:25.403 18:43:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:25.403 [2024-10-08 18:43:53.858139] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:33:25.403 [2024-10-08 18:43:53.858310] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:25.660 [2024-10-08 18:43:54.018404] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:25.919 [2024-10-08 18:43:54.239502] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:25.919 [2024-10-08 18:43:54.239603] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
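For tc2 the target is restarted inside the namespace with core mask 0xF0, which selects cores 4 through 7; the "Reactor started on core 4..7" lines just below confirm the decoding. A one-liner to expand such a mask (illustrative helper, not from the test scripts):

# 0xF0 = 0b11110000 -> cores 4,5,6,7
for i in $(seq 0 7); do (( (0xF0 >> i) & 1 )) && echo "reactor core $i"; done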
00:33:25.919 [2024-10-08 18:43:54.239641] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:25.919 [2024-10-08 18:43:54.239695] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:25.919 [2024-10-08 18:43:54.239725] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:25.919 [2024-10-08 18:43:54.243403] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:33:25.919 [2024-10-08 18:43:54.243482] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:33:25.919 [2024-10-08 18:43:54.243570] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 7 00:33:25.919 [2024-10-08 18:43:54.243579] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:33:25.919 18:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:25.919 18:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:33:25.919 18:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:33:25.919 18:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:25.919 18:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:25.919 18:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:25.919 18:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:25.919 18:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:25.919 18:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:26.180 Malloc0 00:33:26.180 18:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.180 18:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:33:26.180 18:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.180 18:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:26.180 [2024-10-08 18:43:54.468373] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:26.180 18:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.180 18:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:26.180 18:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.180 18:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:26.180 18:43:54 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.180 18:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:26.180 18:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.180 18:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:26.180 18:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.180 18:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:26.180 18:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.180 18:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:26.180 [2024-10-08 18:43:54.497344] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:26.180 18:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.180 18:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:26.180 18:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.180 18:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:26.180 18:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.180 18:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1341735 00:33:26.180 18:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:26.180 18:43:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:33:28.133 18:43:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1341695 00:33:28.133 18:43:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:33:28.133 Read completed with error (sct=0, sc=8) 00:33:28.133 starting I/O failed 00:33:28.133 Read completed with error (sct=0, sc=8) 00:33:28.133 starting I/O failed 00:33:28.133 Read completed with error (sct=0, sc=8) 00:33:28.133 starting I/O failed 00:33:28.133 Read completed with error (sct=0, sc=8) 00:33:28.133 starting I/O failed 00:33:28.133 Read completed with error (sct=0, sc=8) 00:33:28.133 starting I/O failed 00:33:28.133 Read completed with error (sct=0, sc=8) 00:33:28.133 starting I/O failed 00:33:28.133 Read completed with error 
(sct=0, sc=8) 00:33:28.133 starting I/O failed 00:33:28.133 Read completed with error (sct=0, sc=8) 00:33:28.133 starting I/O failed 00:33:28.133 Read completed with error (sct=0, sc=8) 00:33:28.133 starting I/O failed 00:33:28.133 Read completed with error (sct=0, sc=8) 00:33:28.133 starting I/O failed 00:33:28.133 Write completed with error (sct=0, sc=8) 00:33:28.133 starting I/O failed 00:33:28.133 Write completed with error (sct=0, sc=8) 00:33:28.133 starting I/O failed 00:33:28.133 Write completed with error (sct=0, sc=8) 00:33:28.133 starting I/O failed 00:33:28.133 Read completed with error (sct=0, sc=8) 00:33:28.133 starting I/O failed 00:33:28.133 Write completed with error (sct=0, sc=8) 00:33:28.133 starting I/O failed 00:33:28.133 Write completed with error (sct=0, sc=8) 00:33:28.133 starting I/O failed 00:33:28.133 Read completed with error (sct=0, sc=8) 00:33:28.133 starting I/O failed 00:33:28.133 Read completed with error (sct=0, sc=8) 00:33:28.133 starting I/O failed 00:33:28.133 Write completed with error (sct=0, sc=8) 00:33:28.133 starting I/O failed 00:33:28.133 Write completed with error (sct=0, sc=8) 00:33:28.133 starting I/O failed 00:33:28.133 Read completed with error (sct=0, sc=8) 00:33:28.133 starting I/O failed 00:33:28.133 Read completed with error (sct=0, sc=8) 00:33:28.133 starting I/O failed 00:33:28.133 Read completed with error (sct=0, sc=8) 00:33:28.133 starting I/O failed 00:33:28.133 Write completed with error (sct=0, sc=8) 00:33:28.133 starting I/O failed 00:33:28.133 Read completed with error (sct=0, sc=8) 00:33:28.133 starting I/O failed 00:33:28.133 Write completed with error (sct=0, sc=8) 00:33:28.133 starting I/O failed 00:33:28.133 Read completed with error (sct=0, sc=8) 00:33:28.133 starting I/O failed 00:33:28.133 Read completed with error (sct=0, sc=8) 00:33:28.133 starting I/O failed 00:33:28.133 Write completed with error (sct=0, sc=8) 00:33:28.133 starting I/O failed 00:33:28.133 Write completed with error (sct=0, sc=8) 00:33:28.133 starting I/O failed 00:33:28.133 Write completed with error (sct=0, sc=8) 00:33:28.133 starting I/O failed 00:33:28.133 Write completed with error (sct=0, sc=8) 00:33:28.133 starting I/O failed 00:33:28.133 Write completed with error (sct=0, sc=8) 00:33:28.133 starting I/O failed 00:33:28.133 [2024-10-08 18:43:56.523093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.133 Read completed with error (sct=0, sc=8) 00:33:28.133 starting I/O failed 00:33:28.133 Read completed with error (sct=0, sc=8) 00:33:28.133 starting I/O failed 00:33:28.133 Read completed with error (sct=0, sc=8) 00:33:28.133 starting I/O failed 00:33:28.133 Read completed with error (sct=0, sc=8) 00:33:28.133 starting I/O failed 00:33:28.133 Read completed with error (sct=0, sc=8) 00:33:28.133 starting I/O failed 00:33:28.133 Read completed with error (sct=0, sc=8) 00:33:28.133 starting I/O failed 00:33:28.133 Write completed with error (sct=0, sc=8) 00:33:28.133 starting I/O failed 00:33:28.133 Read completed with error (sct=0, sc=8) 00:33:28.133 starting I/O failed 00:33:28.133 Write completed with error (sct=0, sc=8) 00:33:28.133 starting I/O failed 00:33:28.133 Read completed with error (sct=0, sc=8) 00:33:28.133 starting I/O failed 00:33:28.133 Read completed with error (sct=0, sc=8) 00:33:28.133 starting I/O failed 00:33:28.133 Read completed with error (sct=0, sc=8) 00:33:28.133 starting I/O failed 00:33:28.133 Read completed with error (sct=0, sc=8) 
00:33:28.133 starting I/O failed 00:33:28.133 Write completed with error (sct=0, sc=8) 00:33:28.133 starting I/O failed 00:33:28.133 Read completed with error (sct=0, sc=8) 00:33:28.133 starting I/O failed 00:33:28.133 Read completed with error (sct=0, sc=8) 00:33:28.133 starting I/O failed 00:33:28.133 Write completed with error (sct=0, sc=8) 00:33:28.133 starting I/O failed 00:33:28.133 Read completed with error (sct=0, sc=8) 00:33:28.133 starting I/O failed 00:33:28.133 Read completed with error (sct=0, sc=8) 00:33:28.133 starting I/O failed 00:33:28.133 Write completed with error (sct=0, sc=8) 00:33:28.133 starting I/O failed 00:33:28.133 Write completed with error (sct=0, sc=8) 00:33:28.133 starting I/O failed 00:33:28.133 Read completed with error (sct=0, sc=8) 00:33:28.133 starting I/O failed 00:33:28.133 Write completed with error (sct=0, sc=8) 00:33:28.133 starting I/O failed 00:33:28.133 Read completed with error (sct=0, sc=8) 00:33:28.133 starting I/O failed 00:33:28.133 Write completed with error (sct=0, sc=8) 00:33:28.133 starting I/O failed 00:33:28.133 Write completed with error (sct=0, sc=8) 00:33:28.133 starting I/O failed 00:33:28.133 Read completed with error (sct=0, sc=8) 00:33:28.133 starting I/O failed 00:33:28.133 Write completed with error (sct=0, sc=8) 00:33:28.133 starting I/O failed 00:33:28.133 Write completed with error (sct=0, sc=8) 00:33:28.133 starting I/O failed 00:33:28.133 Write completed with error (sct=0, sc=8) 00:33:28.133 starting I/O failed 00:33:28.133 Read completed with error (sct=0, sc=8) 00:33:28.133 starting I/O failed 00:33:28.133 [2024-10-08 18:43:56.523505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.133 [2024-10-08 18:43:56.523732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.133 [2024-10-08 18:43:56.523770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.133 qpair failed and we were unable to recover it. 00:33:28.133 [2024-10-08 18:43:56.523888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.133 [2024-10-08 18:43:56.523915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.133 qpair failed and we were unable to recover it. 00:33:28.133 [2024-10-08 18:43:56.524061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.133 [2024-10-08 18:43:56.524101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.133 qpair failed and we were unable to recover it. 00:33:28.133 [2024-10-08 18:43:56.524241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.133 [2024-10-08 18:43:56.524266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.133 qpair failed and we were unable to recover it. 00:33:28.133 [2024-10-08 18:43:56.524393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.133 [2024-10-08 18:43:56.524418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.133 qpair failed and we were unable to recover it. 
00:33:28.133 [2024-10-08 18:43:56.524535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.133 [2024-10-08 18:43:56.524560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.133 qpair failed and we were unable to recover it. 00:33:28.133 [2024-10-08 18:43:56.524697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.133 [2024-10-08 18:43:56.524724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.133 qpair failed and we were unable to recover it. 00:33:28.133 [2024-10-08 18:43:56.524815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.133 [2024-10-08 18:43:56.524841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.133 qpair failed and we were unable to recover it. 00:33:28.134 [2024-10-08 18:43:56.524957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.134 [2024-10-08 18:43:56.524997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.134 qpair failed and we were unable to recover it. 00:33:28.134 [2024-10-08 18:43:56.525175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.134 [2024-10-08 18:43:56.525199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.134 qpair failed and we were unable to recover it. 00:33:28.134 [2024-10-08 18:43:56.525378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.134 [2024-10-08 18:43:56.525402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.134 qpair failed and we were unable to recover it. 00:33:28.134 [2024-10-08 18:43:56.525553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.134 [2024-10-08 18:43:56.525580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.134 qpair failed and we were unable to recover it. 00:33:28.134 [2024-10-08 18:43:56.525779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.134 [2024-10-08 18:43:56.525806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.134 qpair failed and we were unable to recover it. 00:33:28.134 [2024-10-08 18:43:56.525915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.134 [2024-10-08 18:43:56.525951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.134 qpair failed and we were unable to recover it. 00:33:28.134 [2024-10-08 18:43:56.526124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.134 [2024-10-08 18:43:56.526184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.134 qpair failed and we were unable to recover it. 
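The six rpc_cmd calls traced near the top of this test case are the whole target-side configuration. The same sequence written against scripts/rpc.py directly (rpc_cmd is effectively the harness's wrapper around that script, so this is an equivalent invocation rather than the literal harness code); sizes, NQN, serial number and addresses are copied from the trace above:

    rpc=./scripts/rpc.py
    $rpc bdev_malloc_create 64 512 -b Malloc0    # 64 MB RAM-backed bdev, 512-byte blocks
    $rpc nvmf_create_transport -t tcp -o         # -o copied verbatim from the traced command
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

Because rpc.py talks to the target over the Unix domain socket rather than over TCP, it does not need to run inside the cvl_0_0_ns_spdk namespace.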
00:33:28.134 [2024-10-08 18:43:56.526419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.134 [2024-10-08 18:43:56.526469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.134 qpair failed and we were unable to recover it. 00:33:28.134 [2024-10-08 18:43:56.526707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.134 [2024-10-08 18:43:56.526735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.134 qpair failed and we were unable to recover it. 00:33:28.134 [2024-10-08 18:43:56.526875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.134 [2024-10-08 18:43:56.526923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.134 qpair failed and we were unable to recover it. 00:33:28.134 [2024-10-08 18:43:56.527054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.134 [2024-10-08 18:43:56.527078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.134 qpair failed and we were unable to recover it. 00:33:28.134 [2024-10-08 18:43:56.527228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.134 [2024-10-08 18:43:56.527278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.134 qpair failed and we were unable to recover it. 00:33:28.134 [2024-10-08 18:43:56.527438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.134 [2024-10-08 18:43:56.527477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.134 qpair failed and we were unable to recover it. 00:33:28.134 [2024-10-08 18:43:56.527579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.134 [2024-10-08 18:43:56.527604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.134 qpair failed and we were unable to recover it. 00:33:28.134 [2024-10-08 18:43:56.527753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.134 [2024-10-08 18:43:56.527802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.134 qpair failed and we were unable to recover it. 00:33:28.134 [2024-10-08 18:43:56.527967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.134 [2024-10-08 18:43:56.527993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.134 qpair failed and we were unable to recover it. 00:33:28.134 [2024-10-08 18:43:56.528161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.134 [2024-10-08 18:43:56.528185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.134 qpair failed and we were unable to recover it. 
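What turns that working configuration into the error storm filling the rest of this section is the host-side sequence from host/target_disconnect.sh: start the bundled reconnect example against the new listener, give it two seconds of I/O, then SIGKILL the target underneath it. Reduced to the commands visible above, with $nvmfpid holding the nvmf_tgt PID (1341695 in this run):

    ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    reconnectpid=$!

    sleep 2              # let the 32-deep 4 KiB random read/write workload ramp up
    kill -9 "$nvmfpid"   # tc2's fault injection: the target dies with no clean shutdown
    sleep 2              # give the host time to notice and start reconnecting

Everything that follows in the log is the expected fallout of that kill: outstanding I/O drained with an abort status, then a reconnect loop that keeps being refused.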
00:33:28.134 [2024-10-08 18:43:56.528358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.134 [2024-10-08 18:43:56.528383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.134 qpair failed and we were unable to recover it. 00:33:28.134 [2024-10-08 18:43:56.528659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.134 [2024-10-08 18:43:56.528685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.134 qpair failed and we were unable to recover it. 00:33:28.134 [2024-10-08 18:43:56.528816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.134 [2024-10-08 18:43:56.528863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.134 qpair failed and we were unable to recover it. 00:33:28.134 [2024-10-08 18:43:56.529000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.134 [2024-10-08 18:43:56.529060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.134 qpair failed and we were unable to recover it. 00:33:28.134 [2024-10-08 18:43:56.529250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.134 [2024-10-08 18:43:56.529301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.134 qpair failed and we were unable to recover it. 00:33:28.134 [2024-10-08 18:43:56.529547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.134 [2024-10-08 18:43:56.529571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.134 qpair failed and we were unable to recover it. 00:33:28.134 [2024-10-08 18:43:56.529694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.134 [2024-10-08 18:43:56.529721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.134 qpair failed and we were unable to recover it. 00:33:28.134 [2024-10-08 18:43:56.529846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.134 [2024-10-08 18:43:56.529894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.134 qpair failed and we were unable to recover it. 00:33:28.134 [2024-10-08 18:43:56.530100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.134 [2024-10-08 18:43:56.530154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.134 qpair failed and we were unable to recover it. 00:33:28.134 [2024-10-08 18:43:56.530385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.134 [2024-10-08 18:43:56.530438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.134 qpair failed and we were unable to recover it. 
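Two failure signatures repeat above and below this point. The 'completed with error (sct=0, sc=8)' lines are the in-flight commands being completed back to the reconnect app with a generic NVMe status (status code type 0, status code 0x08, command aborted due to SQ deletion), consistent with the host aborting everything queued on the dead connection; the 'CQ transport error -6 (No such device or address)' lines are the TCP transport reporting -ENXIO for the same teardown. The 'connect() failed, errno = 111' lines are each subsequent reconnection attempt being refused because nothing is listening on 10.0.0.2:4420 after the kill -9; on Linux errno 111 is ECONNREFUSED, which can be checked against the uapi headers on any box with kernel headers installed:

    # 111 is ECONNREFUSED in the generic errno table
    grep -w ECONNREFUSED /usr/include/asm-generic/errno.h
    # expected output, give or take whitespace:
    #   #define ECONNREFUSED    111     /* Connection refused */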
00:33:28.134 [2024-10-08 18:43:56.530658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.134 [2024-10-08 18:43:56.530689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.134 qpair failed and we were unable to recover it. 00:33:28.134 [2024-10-08 18:43:56.530830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.134 [2024-10-08 18:43:56.530878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.134 qpair failed and we were unable to recover it. 00:33:28.134 [2024-10-08 18:43:56.531052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.134 [2024-10-08 18:43:56.531110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.134 qpair failed and we were unable to recover it. 00:33:28.134 [2024-10-08 18:43:56.531335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.134 [2024-10-08 18:43:56.531384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.134 qpair failed and we were unable to recover it. 00:33:28.134 [2024-10-08 18:43:56.531522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.134 [2024-10-08 18:43:56.531548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.134 qpair failed and we were unable to recover it. 00:33:28.134 [2024-10-08 18:43:56.531671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.134 [2024-10-08 18:43:56.531698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.134 qpair failed and we were unable to recover it. 00:33:28.134 [2024-10-08 18:43:56.531835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.134 [2024-10-08 18:43:56.531882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.134 qpair failed and we were unable to recover it. 00:33:28.134 [2024-10-08 18:43:56.532041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.134 [2024-10-08 18:43:56.532091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.134 qpair failed and we were unable to recover it. 00:33:28.134 [2024-10-08 18:43:56.532288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.134 [2024-10-08 18:43:56.532337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.134 qpair failed and we were unable to recover it. 00:33:28.134 [2024-10-08 18:43:56.532548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.134 [2024-10-08 18:43:56.532572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.134 qpair failed and we were unable to recover it. 
00:33:28.134 [2024-10-08 18:43:56.532754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.134 [2024-10-08 18:43:56.532791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.134 qpair failed and we were unable to recover it. 00:33:28.134 [2024-10-08 18:43:56.533019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.135 [2024-10-08 18:43:56.533069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.135 qpair failed and we were unable to recover it. 00:33:28.135 [2024-10-08 18:43:56.533287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.135 [2024-10-08 18:43:56.533335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.135 qpair failed and we were unable to recover it. 00:33:28.135 [2024-10-08 18:43:56.533522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.135 [2024-10-08 18:43:56.533546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.135 qpair failed and we were unable to recover it. 00:33:28.135 [2024-10-08 18:43:56.533742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.135 [2024-10-08 18:43:56.533791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.135 qpair failed and we were unable to recover it. 00:33:28.135 [2024-10-08 18:43:56.533955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.135 [2024-10-08 18:43:56.534025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.135 qpair failed and we were unable to recover it. 00:33:28.135 [2024-10-08 18:43:56.534192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.135 [2024-10-08 18:43:56.534244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.135 qpair failed and we were unable to recover it. 00:33:28.135 [2024-10-08 18:43:56.534353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.135 [2024-10-08 18:43:56.534377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.135 qpair failed and we were unable to recover it. 00:33:28.135 [2024-10-08 18:43:56.534512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.135 [2024-10-08 18:43:56.534537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.135 qpair failed and we were unable to recover it. 00:33:28.135 [2024-10-08 18:43:56.534683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.135 [2024-10-08 18:43:56.534710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.135 qpair failed and we were unable to recover it. 
00:33:28.135 [2024-10-08 18:43:56.534911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.135 [2024-10-08 18:43:56.534952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.135 qpair failed and we were unable to recover it. 00:33:28.135 [2024-10-08 18:43:56.535137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.135 [2024-10-08 18:43:56.535187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.135 qpair failed and we were unable to recover it. 00:33:28.135 [2024-10-08 18:43:56.535356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.135 [2024-10-08 18:43:56.535380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.135 qpair failed and we were unable to recover it. 00:33:28.135 [2024-10-08 18:43:56.535518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.135 [2024-10-08 18:43:56.535558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.135 qpair failed and we were unable to recover it. 00:33:28.135 [2024-10-08 18:43:56.535701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.135 [2024-10-08 18:43:56.535743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.135 qpair failed and we were unable to recover it. 00:33:28.135 [2024-10-08 18:43:56.535971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.135 [2024-10-08 18:43:56.536019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.135 qpair failed and we were unable to recover it. 00:33:28.135 [2024-10-08 18:43:56.536142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.135 [2024-10-08 18:43:56.536193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.135 qpair failed and we were unable to recover it. 00:33:28.135 [2024-10-08 18:43:56.536410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.135 [2024-10-08 18:43:56.536435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.135 qpair failed and we were unable to recover it. 00:33:28.135 [2024-10-08 18:43:56.536587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.135 [2024-10-08 18:43:56.536611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.135 qpair failed and we were unable to recover it. 00:33:28.135 [2024-10-08 18:43:56.536738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.135 [2024-10-08 18:43:56.536764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.135 qpair failed and we were unable to recover it. 
00:33:28.135 [2024-10-08 18:43:56.537007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.135 [2024-10-08 18:43:56.537056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.135 qpair failed and we were unable to recover it. 00:33:28.135 [2024-10-08 18:43:56.537262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.135 [2024-10-08 18:43:56.537312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.135 qpair failed and we were unable to recover it. 00:33:28.135 [2024-10-08 18:43:56.537452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.135 [2024-10-08 18:43:56.537476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.135 qpair failed and we were unable to recover it. 00:33:28.135 [2024-10-08 18:43:56.537726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.135 [2024-10-08 18:43:56.537752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.135 qpair failed and we were unable to recover it. 00:33:28.135 [2024-10-08 18:43:56.537915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.135 [2024-10-08 18:43:56.537987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.135 qpair failed and we were unable to recover it. 00:33:28.135 [2024-10-08 18:43:56.538194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.135 [2024-10-08 18:43:56.538244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.135 qpair failed and we were unable to recover it. 00:33:28.135 [2024-10-08 18:43:56.538506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.135 [2024-10-08 18:43:56.538558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.135 qpair failed and we were unable to recover it. 00:33:28.135 [2024-10-08 18:43:56.538683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.135 [2024-10-08 18:43:56.538724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.135 qpair failed and we were unable to recover it. 00:33:28.135 [2024-10-08 18:43:56.538889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.135 [2024-10-08 18:43:56.538934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.135 qpair failed and we were unable to recover it. 00:33:28.135 [2024-10-08 18:43:56.539147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.135 [2024-10-08 18:43:56.539195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.135 qpair failed and we were unable to recover it. 
00:33:28.135 [2024-10-08 18:43:56.539336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.135 [2024-10-08 18:43:56.539364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.135 qpair failed and we were unable to recover it. 00:33:28.135 [2024-10-08 18:43:56.539524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.135 [2024-10-08 18:43:56.539548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.135 qpair failed and we were unable to recover it. 00:33:28.135 [2024-10-08 18:43:56.539798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.135 [2024-10-08 18:43:56.539850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.135 qpair failed and we were unable to recover it. 00:33:28.135 [2024-10-08 18:43:56.539966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.135 [2024-10-08 18:43:56.540034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.135 qpair failed and we were unable to recover it. 00:33:28.135 [2024-10-08 18:43:56.540242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.135 [2024-10-08 18:43:56.540292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.135 qpair failed and we were unable to recover it. 00:33:28.135 [2024-10-08 18:43:56.540459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.135 [2024-10-08 18:43:56.540482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.135 qpair failed and we were unable to recover it. 00:33:28.135 [2024-10-08 18:43:56.540584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.135 [2024-10-08 18:43:56.540609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.135 qpair failed and we were unable to recover it. 00:33:28.135 [2024-10-08 18:43:56.540832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.135 [2024-10-08 18:43:56.540884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.135 qpair failed and we were unable to recover it. 00:33:28.135 [2024-10-08 18:43:56.541037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.135 [2024-10-08 18:43:56.541092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.135 qpair failed and we were unable to recover it. 00:33:28.135 [2024-10-08 18:43:56.541300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.135 [2024-10-08 18:43:56.541351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.135 qpair failed and we were unable to recover it. 
00:33:28.136 [2024-10-08 18:43:56.541557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.136 [2024-10-08 18:43:56.541581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.136 qpair failed and we were unable to recover it. 00:33:28.136 [2024-10-08 18:43:56.541778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.136 [2024-10-08 18:43:56.541832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.136 qpair failed and we were unable to recover it. 00:33:28.136 [2024-10-08 18:43:56.542016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.136 [2024-10-08 18:43:56.542062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.136 qpair failed and we were unable to recover it. 00:33:28.136 [2024-10-08 18:43:56.542209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.136 [2024-10-08 18:43:56.542263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.136 qpair failed and we were unable to recover it. 00:33:28.136 [2024-10-08 18:43:56.542444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.136 [2024-10-08 18:43:56.542468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.136 qpair failed and we were unable to recover it. 00:33:28.136 [2024-10-08 18:43:56.542643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.136 [2024-10-08 18:43:56.542697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.136 qpair failed and we were unable to recover it. 00:33:28.136 [2024-10-08 18:43:56.542838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.136 [2024-10-08 18:43:56.542888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.136 qpair failed and we were unable to recover it. 00:33:28.136 [2024-10-08 18:43:56.543039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.136 [2024-10-08 18:43:56.543084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.136 qpair failed and we were unable to recover it. 00:33:28.136 [2024-10-08 18:43:56.543278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.136 [2024-10-08 18:43:56.543318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.136 qpair failed and we were unable to recover it. 00:33:28.136 [2024-10-08 18:43:56.543514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.136 [2024-10-08 18:43:56.543538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.136 qpair failed and we were unable to recover it. 
00:33:28.136 [2024-10-08 18:43:56.543662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.136 [2024-10-08 18:43:56.543688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.136 qpair failed and we were unable to recover it. 00:33:28.136 [2024-10-08 18:43:56.543837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.136 [2024-10-08 18:43:56.543864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.136 qpair failed and we were unable to recover it. 00:33:28.136 [2024-10-08 18:43:56.543961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.136 [2024-10-08 18:43:56.543986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.136 qpair failed and we were unable to recover it. 00:33:28.136 [2024-10-08 18:43:56.544159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.136 [2024-10-08 18:43:56.544184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.136 qpair failed and we were unable to recover it. 00:33:28.136 [2024-10-08 18:43:56.544293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.136 [2024-10-08 18:43:56.544319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.136 qpair failed and we were unable to recover it. 00:33:28.136 [2024-10-08 18:43:56.544570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.136 [2024-10-08 18:43:56.544610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.136 qpair failed and we were unable to recover it. 00:33:28.136 [2024-10-08 18:43:56.544839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.136 [2024-10-08 18:43:56.544889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.136 qpair failed and we were unable to recover it. 00:33:28.136 [2024-10-08 18:43:56.545071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.136 [2024-10-08 18:43:56.545124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.136 qpair failed and we were unable to recover it. 00:33:28.136 [2024-10-08 18:43:56.545378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.136 [2024-10-08 18:43:56.545425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.136 qpair failed and we were unable to recover it. 00:33:28.136 [2024-10-08 18:43:56.545558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.136 [2024-10-08 18:43:56.545583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.136 qpair failed and we were unable to recover it. 
00:33:28.136 [2024-10-08 18:43:56.545800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.136 [2024-10-08 18:43:56.545850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.136 qpair failed and we were unable to recover it. 00:33:28.136 [2024-10-08 18:43:56.545991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.136 [2024-10-08 18:43:56.546045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.136 qpair failed and we were unable to recover it. 00:33:28.136 [2024-10-08 18:43:56.546202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.136 [2024-10-08 18:43:56.546247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.136 qpair failed and we were unable to recover it. 00:33:28.136 [2024-10-08 18:43:56.546496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.136 [2024-10-08 18:43:56.546520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.136 qpair failed and we were unable to recover it. 00:33:28.136 [2024-10-08 18:43:56.546673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.136 [2024-10-08 18:43:56.546722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.136 qpair failed and we were unable to recover it. 00:33:28.136 [2024-10-08 18:43:56.546903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.136 [2024-10-08 18:43:56.546955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.136 qpair failed and we were unable to recover it. 00:33:28.136 [2024-10-08 18:43:56.547200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.136 [2024-10-08 18:43:56.547251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.136 qpair failed and we were unable to recover it. 00:33:28.136 [2024-10-08 18:43:56.547462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.136 [2024-10-08 18:43:56.547486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.136 qpair failed and we were unable to recover it. 00:33:28.136 [2024-10-08 18:43:56.547687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.136 [2024-10-08 18:43:56.547737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.136 qpair failed and we were unable to recover it. 00:33:28.136 [2024-10-08 18:43:56.547936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.136 [2024-10-08 18:43:56.547995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.136 qpair failed and we were unable to recover it. 
00:33:28.136 [2024-10-08 18:43:56.548165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.136 [2024-10-08 18:43:56.548219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.136 qpair failed and we were unable to recover it. 00:33:28.136 [2024-10-08 18:43:56.548377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.136 [2024-10-08 18:43:56.548401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.136 qpair failed and we were unable to recover it. 00:33:28.136 [2024-10-08 18:43:56.548581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.136 [2024-10-08 18:43:56.548606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.136 qpair failed and we were unable to recover it. 00:33:28.136 [2024-10-08 18:43:56.548813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.136 [2024-10-08 18:43:56.548864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.136 qpair failed and we were unable to recover it. 00:33:28.136 [2024-10-08 18:43:56.548973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.136 [2024-10-08 18:43:56.549026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.136 qpair failed and we were unable to recover it. 00:33:28.136 [2024-10-08 18:43:56.549203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.136 [2024-10-08 18:43:56.549247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.136 qpair failed and we were unable to recover it. 00:33:28.136 [2024-10-08 18:43:56.549426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.136 [2024-10-08 18:43:56.549450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.136 qpair failed and we were unable to recover it. 00:33:28.136 [2024-10-08 18:43:56.549623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.136 [2024-10-08 18:43:56.549647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.136 qpair failed and we were unable to recover it. 00:33:28.136 [2024-10-08 18:43:56.549906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.136 [2024-10-08 18:43:56.549957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.136 qpair failed and we were unable to recover it. 00:33:28.137 [2024-10-08 18:43:56.550213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.137 [2024-10-08 18:43:56.550263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.137 qpair failed and we were unable to recover it. 
00:33:28.137 [2024-10-08 18:43:56.550452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.137 [2024-10-08 18:43:56.550501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.137 qpair failed and we were unable to recover it. 00:33:28.137 [2024-10-08 18:43:56.550698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.137 [2024-10-08 18:43:56.550746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.137 qpair failed and we were unable to recover it. 00:33:28.137 [2024-10-08 18:43:56.550920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.137 [2024-10-08 18:43:56.550978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.137 qpair failed and we were unable to recover it. 00:33:28.137 [2024-10-08 18:43:56.551210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.137 [2024-10-08 18:43:56.551261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.137 qpair failed and we were unable to recover it. 00:33:28.137 [2024-10-08 18:43:56.551424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.137 [2024-10-08 18:43:56.551448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.137 qpair failed and we were unable to recover it. 00:33:28.137 [2024-10-08 18:43:56.551587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.137 [2024-10-08 18:43:56.551627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.137 qpair failed and we were unable to recover it. 00:33:28.137 [2024-10-08 18:43:56.551818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.137 [2024-10-08 18:43:56.551867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.137 qpair failed and we were unable to recover it. 00:33:28.137 [2024-10-08 18:43:56.552089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.137 [2024-10-08 18:43:56.552135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.137 qpair failed and we were unable to recover it. 00:33:28.137 [2024-10-08 18:43:56.552339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.137 [2024-10-08 18:43:56.552380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.137 qpair failed and we were unable to recover it. 00:33:28.137 [2024-10-08 18:43:56.552608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.137 [2024-10-08 18:43:56.552632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.137 qpair failed and we were unable to recover it. 
00:33:28.142 [2024-10-08 18:43:56.595154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.142 [2024-10-08 18:43:56.595205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.142 qpair failed and we were unable to recover it. 00:33:28.142 [2024-10-08 18:43:56.595341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.142 [2024-10-08 18:43:56.595366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.142 qpair failed and we were unable to recover it. 00:33:28.142 [2024-10-08 18:43:56.595501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.142 [2024-10-08 18:43:56.595527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.142 qpair failed and we were unable to recover it. 00:33:28.142 [2024-10-08 18:43:56.595706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.142 [2024-10-08 18:43:56.595768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.142 qpair failed and we were unable to recover it. 00:33:28.142 [2024-10-08 18:43:56.595997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.142 [2024-10-08 18:43:56.596047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.142 qpair failed and we were unable to recover it. 00:33:28.142 [2024-10-08 18:43:56.596269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.142 [2024-10-08 18:43:56.596322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.142 qpair failed and we were unable to recover it. 00:33:28.142 [2024-10-08 18:43:56.596524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.142 [2024-10-08 18:43:56.596549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.142 qpair failed and we were unable to recover it. 00:33:28.142 [2024-10-08 18:43:56.596741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.142 [2024-10-08 18:43:56.596797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.142 qpair failed and we were unable to recover it. 00:33:28.142 [2024-10-08 18:43:56.596966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.142 [2024-10-08 18:43:56.597031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.142 qpair failed and we were unable to recover it. 00:33:28.142 [2024-10-08 18:43:56.597233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.142 [2024-10-08 18:43:56.597283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.142 qpair failed and we were unable to recover it. 
00:33:28.142 [2024-10-08 18:43:56.597492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.142 [2024-10-08 18:43:56.597517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.142 qpair failed and we were unable to recover it. 00:33:28.142 [2024-10-08 18:43:56.597762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.142 [2024-10-08 18:43:56.597816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.142 qpair failed and we were unable to recover it. 00:33:28.143 [2024-10-08 18:43:56.597940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.143 [2024-10-08 18:43:56.598007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.143 qpair failed and we were unable to recover it. 00:33:28.143 [2024-10-08 18:43:56.598253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.143 [2024-10-08 18:43:56.598279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.143 qpair failed and we were unable to recover it. 00:33:28.143 [2024-10-08 18:43:56.598400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.143 [2024-10-08 18:43:56.598425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.143 qpair failed and we were unable to recover it. 00:33:28.143 [2024-10-08 18:43:56.598616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.143 [2024-10-08 18:43:56.598662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.143 qpair failed and we were unable to recover it. 00:33:28.143 [2024-10-08 18:43:56.598863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.143 [2024-10-08 18:43:56.598907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.143 qpair failed and we were unable to recover it. 00:33:28.143 [2024-10-08 18:43:56.599102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.143 [2024-10-08 18:43:56.599154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.143 qpair failed and we were unable to recover it. 00:33:28.143 [2024-10-08 18:43:56.599322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.143 [2024-10-08 18:43:56.599380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.143 qpair failed and we were unable to recover it. 00:33:28.143 [2024-10-08 18:43:56.599529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.143 [2024-10-08 18:43:56.599554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.143 qpair failed and we were unable to recover it. 
00:33:28.143 [2024-10-08 18:43:56.599658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.143 [2024-10-08 18:43:56.599684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.143 qpair failed and we were unable to recover it. 00:33:28.143 [2024-10-08 18:43:56.599839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.143 [2024-10-08 18:43:56.599889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.143 qpair failed and we were unable to recover it. 00:33:28.143 [2024-10-08 18:43:56.600059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.143 [2024-10-08 18:43:56.600103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.143 qpair failed and we were unable to recover it. 00:33:28.143 [2024-10-08 18:43:56.600284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.143 [2024-10-08 18:43:56.600336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.143 qpair failed and we were unable to recover it. 00:33:28.143 [2024-10-08 18:43:56.600496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.143 [2024-10-08 18:43:56.600521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.143 qpair failed and we were unable to recover it. 00:33:28.143 [2024-10-08 18:43:56.600675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.143 [2024-10-08 18:43:56.600701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.143 qpair failed and we were unable to recover it. 00:33:28.143 [2024-10-08 18:43:56.600855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.143 [2024-10-08 18:43:56.600881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.143 qpair failed and we were unable to recover it. 00:33:28.143 [2024-10-08 18:43:56.601072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.143 [2024-10-08 18:43:56.601122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.143 qpair failed and we were unable to recover it. 00:33:28.143 [2024-10-08 18:43:56.601303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.143 [2024-10-08 18:43:56.601354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.143 qpair failed and we were unable to recover it. 00:33:28.143 [2024-10-08 18:43:56.601559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.143 [2024-10-08 18:43:56.601585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.143 qpair failed and we were unable to recover it. 
00:33:28.143 [2024-10-08 18:43:56.601797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.143 [2024-10-08 18:43:56.601862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.143 qpair failed and we were unable to recover it. 00:33:28.143 [2024-10-08 18:43:56.602017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.143 [2024-10-08 18:43:56.602069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.143 qpair failed and we were unable to recover it. 00:33:28.143 [2024-10-08 18:43:56.602213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.143 [2024-10-08 18:43:56.602265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.143 qpair failed and we were unable to recover it. 00:33:28.143 [2024-10-08 18:43:56.602438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.143 [2024-10-08 18:43:56.602464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.143 qpair failed and we were unable to recover it. 00:33:28.143 [2024-10-08 18:43:56.602576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.143 [2024-10-08 18:43:56.602603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.143 qpair failed and we were unable to recover it. 00:33:28.143 [2024-10-08 18:43:56.602843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.143 [2024-10-08 18:43:56.602896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.143 qpair failed and we were unable to recover it. 00:33:28.143 [2024-10-08 18:43:56.603121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.143 [2024-10-08 18:43:56.603171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.143 qpair failed and we were unable to recover it. 00:33:28.143 [2024-10-08 18:43:56.603300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.143 [2024-10-08 18:43:56.603351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.143 qpair failed and we were unable to recover it. 00:33:28.143 [2024-10-08 18:43:56.603531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.143 [2024-10-08 18:43:56.603556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.143 qpair failed and we were unable to recover it. 00:33:28.143 [2024-10-08 18:43:56.603725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.143 [2024-10-08 18:43:56.603779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.143 qpair failed and we were unable to recover it. 
00:33:28.143 [2024-10-08 18:43:56.603954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.143 [2024-10-08 18:43:56.604007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.143 qpair failed and we were unable to recover it. 00:33:28.143 [2024-10-08 18:43:56.604215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.143 [2024-10-08 18:43:56.604267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.143 qpair failed and we were unable to recover it. 00:33:28.143 [2024-10-08 18:43:56.604423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.143 [2024-10-08 18:43:56.604448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.143 qpair failed and we were unable to recover it. 00:33:28.143 [2024-10-08 18:43:56.604598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.143 [2024-10-08 18:43:56.604638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.143 qpair failed and we were unable to recover it. 00:33:28.143 [2024-10-08 18:43:56.604843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.143 [2024-10-08 18:43:56.604893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.143 qpair failed and we were unable to recover it. 00:33:28.143 [2024-10-08 18:43:56.605138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.143 [2024-10-08 18:43:56.605188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.143 qpair failed and we were unable to recover it. 00:33:28.143 [2024-10-08 18:43:56.605369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.143 [2024-10-08 18:43:56.605413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.143 qpair failed and we were unable to recover it. 00:33:28.143 [2024-10-08 18:43:56.605609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.143 [2024-10-08 18:43:56.605658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.143 qpair failed and we were unable to recover it. 00:33:28.143 [2024-10-08 18:43:56.605778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.143 [2024-10-08 18:43:56.605834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.143 qpair failed and we were unable to recover it. 00:33:28.143 [2024-10-08 18:43:56.605969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.143 [2024-10-08 18:43:56.606016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.143 qpair failed and we were unable to recover it. 
00:33:28.143 [2024-10-08 18:43:56.606198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.144 [2024-10-08 18:43:56.606250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.144 qpair failed and we were unable to recover it. 00:33:28.144 [2024-10-08 18:43:56.606480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.144 [2024-10-08 18:43:56.606506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.144 qpair failed and we were unable to recover it. 00:33:28.144 [2024-10-08 18:43:56.606676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.144 [2024-10-08 18:43:56.606703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.144 qpair failed and we were unable to recover it. 00:33:28.144 [2024-10-08 18:43:56.606834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.144 [2024-10-08 18:43:56.606882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.144 qpair failed and we were unable to recover it. 00:33:28.144 [2024-10-08 18:43:56.607057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.144 [2024-10-08 18:43:56.607111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.144 qpair failed and we were unable to recover it. 00:33:28.144 [2024-10-08 18:43:56.607233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.144 [2024-10-08 18:43:56.607258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.144 qpair failed and we were unable to recover it. 00:33:28.144 [2024-10-08 18:43:56.607357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.144 [2024-10-08 18:43:56.607382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.144 qpair failed and we were unable to recover it. 00:33:28.144 [2024-10-08 18:43:56.607635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.144 [2024-10-08 18:43:56.607668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.144 qpair failed and we were unable to recover it. 00:33:28.144 [2024-10-08 18:43:56.607800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.144 [2024-10-08 18:43:56.607826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.144 qpair failed and we were unable to recover it. 00:33:28.144 [2024-10-08 18:43:56.607976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.144 [2024-10-08 18:43:56.608000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.144 qpair failed and we were unable to recover it. 
00:33:28.144 [2024-10-08 18:43:56.608136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.144 [2024-10-08 18:43:56.608162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.144 qpair failed and we were unable to recover it. 00:33:28.144 [2024-10-08 18:43:56.608314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.144 [2024-10-08 18:43:56.608340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.144 qpair failed and we were unable to recover it. 00:33:28.144 [2024-10-08 18:43:56.608431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.144 [2024-10-08 18:43:56.608455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.144 qpair failed and we were unable to recover it. 00:33:28.144 [2024-10-08 18:43:56.608573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.144 [2024-10-08 18:43:56.608598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.144 qpair failed and we were unable to recover it. 00:33:28.144 [2024-10-08 18:43:56.608779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.144 [2024-10-08 18:43:56.608805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.144 qpair failed and we were unable to recover it. 00:33:28.144 [2024-10-08 18:43:56.608965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.144 [2024-10-08 18:43:56.608989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.144 qpair failed and we were unable to recover it. 00:33:28.144 [2024-10-08 18:43:56.609150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.144 [2024-10-08 18:43:56.609175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.144 qpair failed and we were unable to recover it. 00:33:28.144 [2024-10-08 18:43:56.609289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.144 [2024-10-08 18:43:56.609315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.144 qpair failed and we were unable to recover it. 00:33:28.144 [2024-10-08 18:43:56.609407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.144 [2024-10-08 18:43:56.609432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.144 qpair failed and we were unable to recover it. 00:33:28.144 [2024-10-08 18:43:56.609547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.144 [2024-10-08 18:43:56.609572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.144 qpair failed and we were unable to recover it. 
00:33:28.144 [2024-10-08 18:43:56.609724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.144 [2024-10-08 18:43:56.609776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.144 qpair failed and we were unable to recover it. 00:33:28.144 [2024-10-08 18:43:56.609966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.144 [2024-10-08 18:43:56.610014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.144 qpair failed and we were unable to recover it. 00:33:28.144 [2024-10-08 18:43:56.610270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.144 [2024-10-08 18:43:56.610323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.144 qpair failed and we were unable to recover it. 00:33:28.144 [2024-10-08 18:43:56.610526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.144 [2024-10-08 18:43:56.610551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.144 qpair failed and we were unable to recover it. 00:33:28.144 [2024-10-08 18:43:56.610653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.144 [2024-10-08 18:43:56.610695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.144 qpair failed and we were unable to recover it. 00:33:28.144 [2024-10-08 18:43:56.610873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.144 [2024-10-08 18:43:56.610930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.144 qpair failed and we were unable to recover it. 00:33:28.144 [2024-10-08 18:43:56.611127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.144 [2024-10-08 18:43:56.611176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.144 qpair failed and we were unable to recover it. 00:33:28.144 [2024-10-08 18:43:56.611424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.144 [2024-10-08 18:43:56.611476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.144 qpair failed and we were unable to recover it. 00:33:28.144 [2024-10-08 18:43:56.611606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.144 [2024-10-08 18:43:56.611631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.144 qpair failed and we were unable to recover it. 00:33:28.144 [2024-10-08 18:43:56.611786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.144 [2024-10-08 18:43:56.611837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.144 qpair failed and we were unable to recover it. 
00:33:28.144 [2024-10-08 18:43:56.612003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.144 [2024-10-08 18:43:56.612056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.144 qpair failed and we were unable to recover it. 00:33:28.144 [2024-10-08 18:43:56.612196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.144 [2024-10-08 18:43:56.612243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.144 qpair failed and we were unable to recover it. 00:33:28.144 [2024-10-08 18:43:56.612393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.144 [2024-10-08 18:43:56.612419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.144 qpair failed and we were unable to recover it. 00:33:28.144 [2024-10-08 18:43:56.612512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.144 [2024-10-08 18:43:56.612537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.144 qpair failed and we were unable to recover it. 00:33:28.144 [2024-10-08 18:43:56.612624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.144 [2024-10-08 18:43:56.612654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.144 qpair failed and we were unable to recover it. 00:33:28.144 [2024-10-08 18:43:56.612775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.144 [2024-10-08 18:43:56.612800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.144 qpair failed and we were unable to recover it. 00:33:28.144 [2024-10-08 18:43:56.612967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.144 [2024-10-08 18:43:56.612993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.144 qpair failed and we were unable to recover it. 00:33:28.144 [2024-10-08 18:43:56.613115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.144 [2024-10-08 18:43:56.613144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.144 qpair failed and we were unable to recover it. 00:33:28.144 [2024-10-08 18:43:56.613290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.144 [2024-10-08 18:43:56.613315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.144 qpair failed and we were unable to recover it. 00:33:28.145 [2024-10-08 18:43:56.613489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.145 [2024-10-08 18:43:56.613513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.145 qpair failed and we were unable to recover it. 
00:33:28.145 [2024-10-08 18:43:56.613646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.145 [2024-10-08 18:43:56.613678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.145 qpair failed and we were unable to recover it. 00:33:28.145 [2024-10-08 18:43:56.613819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.145 [2024-10-08 18:43:56.613844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.145 qpair failed and we were unable to recover it. 00:33:28.145 [2024-10-08 18:43:56.614012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.145 [2024-10-08 18:43:56.614052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.145 qpair failed and we were unable to recover it. 00:33:28.145 [2024-10-08 18:43:56.614196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.145 [2024-10-08 18:43:56.614220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.145 qpair failed and we were unable to recover it. 00:33:28.145 [2024-10-08 18:43:56.614385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.145 [2024-10-08 18:43:56.614425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.145 qpair failed and we were unable to recover it. 00:33:28.145 [2024-10-08 18:43:56.614599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.145 [2024-10-08 18:43:56.614623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.145 qpair failed and we were unable to recover it. 00:33:28.145 [2024-10-08 18:43:56.614803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.145 [2024-10-08 18:43:56.614850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.145 qpair failed and we were unable to recover it. 00:33:28.145 [2024-10-08 18:43:56.615067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.145 [2024-10-08 18:43:56.615117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.145 qpair failed and we were unable to recover it. 00:33:28.145 [2024-10-08 18:43:56.615337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.145 [2024-10-08 18:43:56.615387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.145 qpair failed and we were unable to recover it. 00:33:28.145 [2024-10-08 18:43:56.615534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.145 [2024-10-08 18:43:56.615557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.145 qpair failed and we were unable to recover it. 
00:33:28.145 [2024-10-08 18:43:56.615762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.145 [2024-10-08 18:43:56.615815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.145 qpair failed and we were unable to recover it. 00:33:28.145 [2024-10-08 18:43:56.616056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.145 [2024-10-08 18:43:56.616108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.145 qpair failed and we were unable to recover it. 00:33:28.145 [2024-10-08 18:43:56.616241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.145 [2024-10-08 18:43:56.616265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.145 qpair failed and we were unable to recover it. 00:33:28.145 [2024-10-08 18:43:56.616408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.145 [2024-10-08 18:43:56.616434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.145 qpair failed and we were unable to recover it. 00:33:28.145 [2024-10-08 18:43:56.616577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.145 [2024-10-08 18:43:56.616616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.145 qpair failed and we were unable to recover it. 00:33:28.145 [2024-10-08 18:43:56.616751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.145 [2024-10-08 18:43:56.616816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.145 qpair failed and we were unable to recover it. 00:33:28.145 [2024-10-08 18:43:56.617017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.145 [2024-10-08 18:43:56.617065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.145 qpair failed and we were unable to recover it. 00:33:28.145 [2024-10-08 18:43:56.617324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.145 [2024-10-08 18:43:56.617372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.145 qpair failed and we were unable to recover it. 00:33:28.145 [2024-10-08 18:43:56.617532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.145 [2024-10-08 18:43:56.617555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.145 qpair failed and we were unable to recover it. 00:33:28.145 [2024-10-08 18:43:56.617764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.145 [2024-10-08 18:43:56.617814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.145 qpair failed and we were unable to recover it. 
00:33:28.145 [2024-10-08 18:43:56.617945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.145 [2024-10-08 18:43:56.618005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.145 qpair failed and we were unable to recover it. 00:33:28.145 [2024-10-08 18:43:56.618239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.145 [2024-10-08 18:43:56.618289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.145 qpair failed and we were unable to recover it. 00:33:28.145 [2024-10-08 18:43:56.618490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.145 [2024-10-08 18:43:56.618514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.145 qpair failed and we were unable to recover it. 00:33:28.145 [2024-10-08 18:43:56.618691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.145 [2024-10-08 18:43:56.618780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.145 qpair failed and we were unable to recover it. 00:33:28.145 [2024-10-08 18:43:56.618901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.145 [2024-10-08 18:43:56.618954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.145 qpair failed and we were unable to recover it. 00:33:28.145 [2024-10-08 18:43:56.619134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.145 [2024-10-08 18:43:56.619157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.145 qpair failed and we were unable to recover it. 00:33:28.145 [2024-10-08 18:43:56.619387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.145 [2024-10-08 18:43:56.619438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.145 qpair failed and we were unable to recover it. 00:33:28.145 [2024-10-08 18:43:56.619579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.145 [2024-10-08 18:43:56.619603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.145 qpair failed and we were unable to recover it. 00:33:28.145 [2024-10-08 18:43:56.619821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.145 [2024-10-08 18:43:56.619872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.145 qpair failed and we were unable to recover it. 00:33:28.145 [2024-10-08 18:43:56.620025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.145 [2024-10-08 18:43:56.620091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.145 qpair failed and we were unable to recover it. 
00:33:28.145 [2024-10-08 18:43:56.620263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.145 [2024-10-08 18:43:56.620323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.145 qpair failed and we were unable to recover it. 00:33:28.145 [2024-10-08 18:43:56.620464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.145 [2024-10-08 18:43:56.620503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.145 qpair failed and we were unable to recover it. 00:33:28.145 [2024-10-08 18:43:56.620643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.145 [2024-10-08 18:43:56.620693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.145 qpair failed and we were unable to recover it. 00:33:28.145 [2024-10-08 18:43:56.620832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.145 [2024-10-08 18:43:56.620872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.145 qpair failed and we were unable to recover it. 00:33:28.145 [2024-10-08 18:43:56.621007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.145 [2024-10-08 18:43:56.621031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.145 qpair failed and we were unable to recover it. 00:33:28.145 [2024-10-08 18:43:56.621183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.145 [2024-10-08 18:43:56.621208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.145 qpair failed and we were unable to recover it. 00:33:28.145 [2024-10-08 18:43:56.621350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.145 [2024-10-08 18:43:56.621374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.145 qpair failed and we were unable to recover it. 00:33:28.145 [2024-10-08 18:43:56.621492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.145 [2024-10-08 18:43:56.621521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.146 qpair failed and we were unable to recover it. 00:33:28.146 [2024-10-08 18:43:56.621669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.146 [2024-10-08 18:43:56.621710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.146 qpair failed and we were unable to recover it. 00:33:28.146 [2024-10-08 18:43:56.621855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.146 [2024-10-08 18:43:56.621916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.146 qpair failed and we were unable to recover it. 
00:33:28.146 [2024-10-08 18:43:56.622128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.146 [2024-10-08 18:43:56.622178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.146 qpair failed and we were unable to recover it. 00:33:28.146 [2024-10-08 18:43:56.622328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.146 [2024-10-08 18:43:56.622352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.146 qpair failed and we were unable to recover it. 00:33:28.146 [2024-10-08 18:43:56.622538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.146 [2024-10-08 18:43:56.622579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.146 qpair failed and we were unable to recover it. 00:33:28.146 [2024-10-08 18:43:56.622744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.146 [2024-10-08 18:43:56.622793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.146 qpair failed and we were unable to recover it. 00:33:28.146 [2024-10-08 18:43:56.622951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.146 [2024-10-08 18:43:56.623000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.146 qpair failed and we were unable to recover it. 00:33:28.146 [2024-10-08 18:43:56.623157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.146 [2024-10-08 18:43:56.623209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.146 qpair failed and we were unable to recover it. 00:33:28.146 [2024-10-08 18:43:56.623396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.146 [2024-10-08 18:43:56.623421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.146 qpair failed and we were unable to recover it. 00:33:28.146 [2024-10-08 18:43:56.623575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.146 [2024-10-08 18:43:56.623600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.146 qpair failed and we were unable to recover it. 00:33:28.146 [2024-10-08 18:43:56.623773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.146 [2024-10-08 18:43:56.623823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.146 qpair failed and we were unable to recover it. 00:33:28.146 [2024-10-08 18:43:56.624028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.146 [2024-10-08 18:43:56.624080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.146 qpair failed and we were unable to recover it. 
00:33:28.427 [2024-10-08 18:43:56.668486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.427 [2024-10-08 18:43:56.668510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.427 qpair failed and we were unable to recover it. 00:33:28.427 [2024-10-08 18:43:56.668722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.427 [2024-10-08 18:43:56.668776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.427 qpair failed and we were unable to recover it. 00:33:28.427 [2024-10-08 18:43:56.669027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.427 [2024-10-08 18:43:56.669075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.427 qpair failed and we were unable to recover it. 00:33:28.427 [2024-10-08 18:43:56.669273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.427 [2024-10-08 18:43:56.669324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.427 qpair failed and we were unable to recover it. 00:33:28.427 [2024-10-08 18:43:56.669469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.427 [2024-10-08 18:43:56.669493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.427 qpair failed and we were unable to recover it. 00:33:28.427 [2024-10-08 18:43:56.669730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.427 [2024-10-08 18:43:56.669782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.427 qpair failed and we were unable to recover it. 00:33:28.427 [2024-10-08 18:43:56.669971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.427 [2024-10-08 18:43:56.670022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.427 qpair failed and we were unable to recover it. 00:33:28.427 [2024-10-08 18:43:56.670258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.427 [2024-10-08 18:43:56.670309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.427 qpair failed and we were unable to recover it. 00:33:28.427 [2024-10-08 18:43:56.670528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.427 [2024-10-08 18:43:56.670555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.427 qpair failed and we were unable to recover it. 00:33:28.427 [2024-10-08 18:43:56.670759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.427 [2024-10-08 18:43:56.670812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.427 qpair failed and we were unable to recover it. 
00:33:28.427 [2024-10-08 18:43:56.670988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.427 [2024-10-08 18:43:56.671039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.427 qpair failed and we were unable to recover it. 00:33:28.427 [2024-10-08 18:43:56.671189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.427 [2024-10-08 18:43:56.671238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.427 qpair failed and we were unable to recover it. 00:33:28.427 [2024-10-08 18:43:56.671486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.427 [2024-10-08 18:43:56.671512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.427 qpair failed and we were unable to recover it. 00:33:28.427 [2024-10-08 18:43:56.671640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.427 [2024-10-08 18:43:56.671673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.427 qpair failed and we were unable to recover it. 00:33:28.427 [2024-10-08 18:43:56.671842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.427 [2024-10-08 18:43:56.671903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.427 qpair failed and we were unable to recover it. 00:33:28.427 [2024-10-08 18:43:56.672060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.427 [2024-10-08 18:43:56.672085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.427 qpair failed and we were unable to recover it. 00:33:28.427 [2024-10-08 18:43:56.672249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.427 [2024-10-08 18:43:56.672275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.427 qpair failed and we were unable to recover it. 00:33:28.427 [2024-10-08 18:43:56.672454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.427 [2024-10-08 18:43:56.672480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.427 qpair failed and we were unable to recover it. 00:33:28.427 [2024-10-08 18:43:56.672616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.427 [2024-10-08 18:43:56.672642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.427 qpair failed and we were unable to recover it. 00:33:28.427 [2024-10-08 18:43:56.672850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.427 [2024-10-08 18:43:56.672899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.427 qpair failed and we were unable to recover it. 
00:33:28.427 [2024-10-08 18:43:56.673034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.427 [2024-10-08 18:43:56.673084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.427 qpair failed and we were unable to recover it. 00:33:28.427 [2024-10-08 18:43:56.673304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.427 [2024-10-08 18:43:56.673361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.427 qpair failed and we were unable to recover it. 00:33:28.427 [2024-10-08 18:43:56.673514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.427 [2024-10-08 18:43:56.673539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.427 qpair failed and we were unable to recover it. 00:33:28.427 [2024-10-08 18:43:56.673714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.427 [2024-10-08 18:43:56.673776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.427 qpair failed and we were unable to recover it. 00:33:28.427 [2024-10-08 18:43:56.673956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.427 [2024-10-08 18:43:56.674004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.427 qpair failed and we were unable to recover it. 00:33:28.427 [2024-10-08 18:43:56.674194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.427 [2024-10-08 18:43:56.674248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.428 qpair failed and we were unable to recover it. 00:33:28.428 [2024-10-08 18:43:56.674421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.428 [2024-10-08 18:43:56.674447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.428 qpair failed and we were unable to recover it. 00:33:28.428 [2024-10-08 18:43:56.674631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.428 [2024-10-08 18:43:56.674662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.428 qpair failed and we were unable to recover it. 00:33:28.428 [2024-10-08 18:43:56.674852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.428 [2024-10-08 18:43:56.674907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.428 qpair failed and we were unable to recover it. 00:33:28.428 [2024-10-08 18:43:56.675141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.428 [2024-10-08 18:43:56.675177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.428 qpair failed and we were unable to recover it. 
00:33:28.428 [2024-10-08 18:43:56.675399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.428 [2024-10-08 18:43:56.675477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.428 qpair failed and we were unable to recover it. 00:33:28.428 [2024-10-08 18:43:56.675748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.428 [2024-10-08 18:43:56.675800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.428 qpair failed and we were unable to recover it. 00:33:28.428 [2024-10-08 18:43:56.675963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.428 [2024-10-08 18:43:56.676015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.428 qpair failed and we were unable to recover it. 00:33:28.428 [2024-10-08 18:43:56.676236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.428 [2024-10-08 18:43:56.676297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.428 qpair failed and we were unable to recover it. 00:33:28.428 [2024-10-08 18:43:56.676544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.428 [2024-10-08 18:43:56.676605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.428 qpair failed and we were unable to recover it. 00:33:28.428 [2024-10-08 18:43:56.676747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.428 [2024-10-08 18:43:56.676774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.428 qpair failed and we were unable to recover it. 00:33:28.428 [2024-10-08 18:43:56.676921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.428 [2024-10-08 18:43:56.676985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.428 qpair failed and we were unable to recover it. 00:33:28.428 [2024-10-08 18:43:56.677223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.428 [2024-10-08 18:43:56.677275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.428 qpair failed and we were unable to recover it. 00:33:28.428 [2024-10-08 18:43:56.677455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.428 [2024-10-08 18:43:56.677500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.428 qpair failed and we were unable to recover it. 00:33:28.428 [2024-10-08 18:43:56.677723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.428 [2024-10-08 18:43:56.677750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.428 qpair failed and we were unable to recover it. 
00:33:28.428 [2024-10-08 18:43:56.677908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.428 [2024-10-08 18:43:56.677963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.428 qpair failed and we were unable to recover it. 00:33:28.428 [2024-10-08 18:43:56.678192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.428 [2024-10-08 18:43:56.678218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.428 qpair failed and we were unable to recover it. 00:33:28.428 [2024-10-08 18:43:56.678439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.428 [2024-10-08 18:43:56.678464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.428 qpair failed and we were unable to recover it. 00:33:28.428 [2024-10-08 18:43:56.678694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.428 [2024-10-08 18:43:56.678720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.428 qpair failed and we were unable to recover it. 00:33:28.428 [2024-10-08 18:43:56.678860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.428 [2024-10-08 18:43:56.678914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.428 qpair failed and we were unable to recover it. 00:33:28.428 [2024-10-08 18:43:56.679023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.428 [2024-10-08 18:43:56.679083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.428 qpair failed and we were unable to recover it. 00:33:28.428 [2024-10-08 18:43:56.679244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.428 [2024-10-08 18:43:56.679294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.428 qpair failed and we were unable to recover it. 00:33:28.428 [2024-10-08 18:43:56.679463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.428 [2024-10-08 18:43:56.679488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.428 qpair failed and we were unable to recover it. 00:33:28.428 [2024-10-08 18:43:56.679659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.428 [2024-10-08 18:43:56.679685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.428 qpair failed and we were unable to recover it. 00:33:28.428 [2024-10-08 18:43:56.679817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.428 [2024-10-08 18:43:56.679843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.428 qpair failed and we were unable to recover it. 
00:33:28.428 [2024-10-08 18:43:56.679995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.428 [2024-10-08 18:43:56.680045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.428 qpair failed and we were unable to recover it. 00:33:28.428 [2024-10-08 18:43:56.680274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.428 [2024-10-08 18:43:56.680324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.428 qpair failed and we were unable to recover it. 00:33:28.428 [2024-10-08 18:43:56.680475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.428 [2024-10-08 18:43:56.680500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.428 qpair failed and we were unable to recover it. 00:33:28.428 [2024-10-08 18:43:56.680642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.428 [2024-10-08 18:43:56.680704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.428 qpair failed and we were unable to recover it. 00:33:28.428 [2024-10-08 18:43:56.680939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.428 [2024-10-08 18:43:56.680990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.428 qpair failed and we were unable to recover it. 00:33:28.428 [2024-10-08 18:43:56.681179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.428 [2024-10-08 18:43:56.681231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.428 qpair failed and we were unable to recover it. 00:33:28.428 [2024-10-08 18:43:56.681462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.428 [2024-10-08 18:43:56.681515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.428 qpair failed and we were unable to recover it. 00:33:28.428 [2024-10-08 18:43:56.681680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.428 [2024-10-08 18:43:56.681707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.428 qpair failed and we were unable to recover it. 00:33:28.428 [2024-10-08 18:43:56.681890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.428 [2024-10-08 18:43:56.681933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.428 qpair failed and we were unable to recover it. 00:33:28.428 [2024-10-08 18:43:56.682050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.428 [2024-10-08 18:43:56.682106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.428 qpair failed and we were unable to recover it. 
00:33:28.428 [2024-10-08 18:43:56.682312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.428 [2024-10-08 18:43:56.682368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.428 qpair failed and we were unable to recover it. 00:33:28.428 [2024-10-08 18:43:56.682605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.428 [2024-10-08 18:43:56.682630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.428 qpair failed and we were unable to recover it. 00:33:28.428 [2024-10-08 18:43:56.682824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.428 [2024-10-08 18:43:56.682883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.428 qpair failed and we were unable to recover it. 00:33:28.428 [2024-10-08 18:43:56.683086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.428 [2024-10-08 18:43:56.683140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.428 qpair failed and we were unable to recover it. 00:33:28.428 [2024-10-08 18:43:56.683422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.429 [2024-10-08 18:43:56.683475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.429 qpair failed and we were unable to recover it. 00:33:28.429 [2024-10-08 18:43:56.683666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.429 [2024-10-08 18:43:56.683691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.429 qpair failed and we were unable to recover it. 00:33:28.429 [2024-10-08 18:43:56.683834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.429 [2024-10-08 18:43:56.683860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.429 qpair failed and we were unable to recover it. 00:33:28.429 [2024-10-08 18:43:56.683989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.429 [2024-10-08 18:43:56.684047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.429 qpair failed and we were unable to recover it. 00:33:28.429 [2024-10-08 18:43:56.684227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.429 [2024-10-08 18:43:56.684279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.429 qpair failed and we were unable to recover it. 00:33:28.429 [2024-10-08 18:43:56.684422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.429 [2024-10-08 18:43:56.684473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.429 qpair failed and we were unable to recover it. 
00:33:28.429 [2024-10-08 18:43:56.684628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.429 [2024-10-08 18:43:56.684662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.429 qpair failed and we were unable to recover it. 00:33:28.429 [2024-10-08 18:43:56.684810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.429 [2024-10-08 18:43:56.684864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.429 qpair failed and we were unable to recover it. 00:33:28.429 [2024-10-08 18:43:56.685058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.429 [2024-10-08 18:43:56.685110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.429 qpair failed and we were unable to recover it. 00:33:28.429 [2024-10-08 18:43:56.685314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.429 [2024-10-08 18:43:56.685367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.429 qpair failed and we were unable to recover it. 00:33:28.429 [2024-10-08 18:43:56.685527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.429 [2024-10-08 18:43:56.685553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.429 qpair failed and we were unable to recover it. 00:33:28.429 [2024-10-08 18:43:56.685678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.429 [2024-10-08 18:43:56.685714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.429 qpair failed and we were unable to recover it. 00:33:28.429 [2024-10-08 18:43:56.685872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.429 [2024-10-08 18:43:56.685928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.429 qpair failed and we were unable to recover it. 00:33:28.429 [2024-10-08 18:43:56.686111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.429 [2024-10-08 18:43:56.686158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.429 qpair failed and we were unable to recover it. 00:33:28.429 [2024-10-08 18:43:56.686298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.429 [2024-10-08 18:43:56.686357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.429 qpair failed and we were unable to recover it. 00:33:28.429 [2024-10-08 18:43:56.686491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.429 [2024-10-08 18:43:56.686518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.429 qpair failed and we were unable to recover it. 
00:33:28.429 [2024-10-08 18:43:56.686674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.429 [2024-10-08 18:43:56.686700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.429 qpair failed and we were unable to recover it. 00:33:28.429 [2024-10-08 18:43:56.686874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.429 [2024-10-08 18:43:56.686927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.429 qpair failed and we were unable to recover it. 00:33:28.429 [2024-10-08 18:43:56.687115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.429 [2024-10-08 18:43:56.687166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.429 qpair failed and we were unable to recover it. 00:33:28.429 [2024-10-08 18:43:56.687308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.429 [2024-10-08 18:43:56.687334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.429 qpair failed and we were unable to recover it. 00:33:28.429 [2024-10-08 18:43:56.687532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.429 [2024-10-08 18:43:56.687558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.429 qpair failed and we were unable to recover it. 00:33:28.429 [2024-10-08 18:43:56.687818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.429 [2024-10-08 18:43:56.687871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.429 qpair failed and we were unable to recover it. 00:33:28.429 [2024-10-08 18:43:56.688052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.429 [2024-10-08 18:43:56.688104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.429 qpair failed and we were unable to recover it. 00:33:28.429 [2024-10-08 18:43:56.688286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.429 [2024-10-08 18:43:56.688341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.429 qpair failed and we were unable to recover it. 00:33:28.429 [2024-10-08 18:43:56.688505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.429 [2024-10-08 18:43:56.688531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.429 qpair failed and we were unable to recover it. 00:33:28.429 [2024-10-08 18:43:56.688748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.429 [2024-10-08 18:43:56.688798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.429 qpair failed and we were unable to recover it. 
00:33:28.429 [2024-10-08 18:43:56.689044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.429 [2024-10-08 18:43:56.689097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.429 qpair failed and we were unable to recover it. 00:33:28.429 [2024-10-08 18:43:56.689309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.429 [2024-10-08 18:43:56.689353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.429 qpair failed and we were unable to recover it. 00:33:28.429 [2024-10-08 18:43:56.689586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.429 [2024-10-08 18:43:56.689629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:28.429 qpair failed and we were unable to recover it. 00:33:28.429 [2024-10-08 18:43:56.689779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.429 [2024-10-08 18:43:56.689807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:28.429 qpair failed and we were unable to recover it. 00:33:28.429 [2024-10-08 18:43:56.689915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.429 [2024-10-08 18:43:56.689948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:28.429 qpair failed and we were unable to recover it. 00:33:28.429 [2024-10-08 18:43:56.690095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.429 [2024-10-08 18:43:56.690161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:28.429 qpair failed and we were unable to recover it. 00:33:28.429 [2024-10-08 18:43:56.690374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.429 [2024-10-08 18:43:56.690440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:28.429 qpair failed and we were unable to recover it. 00:33:28.429 [2024-10-08 18:43:56.690744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.429 [2024-10-08 18:43:56.690770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:28.429 qpair failed and we were unable to recover it. 00:33:28.429 [2024-10-08 18:43:56.690937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.429 [2024-10-08 18:43:56.690998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.429 qpair failed and we were unable to recover it. 00:33:28.429 [2024-10-08 18:43:56.691187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.429 [2024-10-08 18:43:56.691238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.429 qpair failed and we were unable to recover it. 
00:33:28.429 [2024-10-08 18:43:56.691367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.429 [2024-10-08 18:43:56.691424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.429 qpair failed and we were unable to recover it. 00:33:28.429 [2024-10-08 18:43:56.691547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.429 [2024-10-08 18:43:56.691573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.429 qpair failed and we were unable to recover it. 00:33:28.429 [2024-10-08 18:43:56.691750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.429 [2024-10-08 18:43:56.691799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.429 qpair failed and we were unable to recover it. 00:33:28.429 [2024-10-08 18:43:56.692017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.429 [2024-10-08 18:43:56.692085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.429 qpair failed and we were unable to recover it. 00:33:28.430 [2024-10-08 18:43:56.692251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.430 [2024-10-08 18:43:56.692302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.430 qpair failed and we were unable to recover it. 00:33:28.430 [2024-10-08 18:43:56.692420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.430 [2024-10-08 18:43:56.692446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.430 qpair failed and we were unable to recover it. 00:33:28.430 [2024-10-08 18:43:56.692580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.430 [2024-10-08 18:43:56.692615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.430 qpair failed and we were unable to recover it. 00:33:28.430 [2024-10-08 18:43:56.692822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.430 [2024-10-08 18:43:56.692848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.430 qpair failed and we were unable to recover it. 00:33:28.430 [2024-10-08 18:43:56.693053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.430 [2024-10-08 18:43:56.693081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.430 qpair failed and we were unable to recover it. 00:33:28.430 [2024-10-08 18:43:56.693263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.430 [2024-10-08 18:43:56.693315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.430 qpair failed and we were unable to recover it. 
00:33:28.430 [2024-10-08 18:43:56.693493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.430 [2024-10-08 18:43:56.693519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.430 qpair failed and we were unable to recover it. 00:33:28.430 [2024-10-08 18:43:56.693754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.430 [2024-10-08 18:43:56.693806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.430 qpair failed and we were unable to recover it. 00:33:28.430 [2024-10-08 18:43:56.694014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.430 [2024-10-08 18:43:56.694067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.430 qpair failed and we were unable to recover it. 00:33:28.430 [2024-10-08 18:43:56.694219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.430 [2024-10-08 18:43:56.694269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.430 qpair failed and we were unable to recover it. 00:33:28.430 [2024-10-08 18:43:56.694428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.430 [2024-10-08 18:43:56.694454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.430 qpair failed and we were unable to recover it. 00:33:28.430 [2024-10-08 18:43:56.694612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.430 [2024-10-08 18:43:56.694638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.430 qpair failed and we were unable to recover it. 00:33:28.430 [2024-10-08 18:43:56.694837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.430 [2024-10-08 18:43:56.694889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.430 qpair failed and we were unable to recover it. 00:33:28.430 [2024-10-08 18:43:56.695034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.430 [2024-10-08 18:43:56.695088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.430 qpair failed and we were unable to recover it. 00:33:28.430 [2024-10-08 18:43:56.695242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.430 [2024-10-08 18:43:56.695296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.430 qpair failed and we were unable to recover it. 00:33:28.430 [2024-10-08 18:43:56.695398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.430 [2024-10-08 18:43:56.695424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.430 qpair failed and we were unable to recover it. 
00:33:28.430 [2024-10-08 18:43:56.695603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.430 [2024-10-08 18:43:56.695628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.430 qpair failed and we were unable to recover it. 00:33:28.430 [2024-10-08 18:43:56.695790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.430 [2024-10-08 18:43:56.695817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.430 qpair failed and we were unable to recover it. 00:33:28.430 [2024-10-08 18:43:56.695964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.430 [2024-10-08 18:43:56.696019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.430 qpair failed and we were unable to recover it. 00:33:28.430 [2024-10-08 18:43:56.696158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.430 [2024-10-08 18:43:56.696184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.430 qpair failed and we were unable to recover it. 00:33:28.430 [2024-10-08 18:43:56.696303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.430 [2024-10-08 18:43:56.696329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.430 qpair failed and we were unable to recover it. 00:33:28.430 [2024-10-08 18:43:56.696441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.430 [2024-10-08 18:43:56.696467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.430 qpair failed and we were unable to recover it. 00:33:28.430 [2024-10-08 18:43:56.696676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.430 [2024-10-08 18:43:56.696702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.430 qpair failed and we were unable to recover it. 00:33:28.430 [2024-10-08 18:43:56.696875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.430 [2024-10-08 18:43:56.696940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.430 qpair failed and we were unable to recover it. 00:33:28.430 [2024-10-08 18:43:56.697200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.430 [2024-10-08 18:43:56.697261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.430 qpair failed and we were unable to recover it. 00:33:28.430 [2024-10-08 18:43:56.697435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.430 [2024-10-08 18:43:56.697486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.430 qpair failed and we were unable to recover it. 
00:33:28.430 [2024-10-08 18:43:56.697722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.430 [2024-10-08 18:43:56.697749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.430 qpair failed and we were unable to recover it. 00:33:28.430 [2024-10-08 18:43:56.697928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.430 [2024-10-08 18:43:56.697993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.430 qpair failed and we were unable to recover it. 00:33:28.430 [2024-10-08 18:43:56.698232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.430 [2024-10-08 18:43:56.698283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.430 qpair failed and we were unable to recover it. 00:33:28.430 [2024-10-08 18:43:56.698534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.430 [2024-10-08 18:43:56.698584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.430 qpair failed and we were unable to recover it. 00:33:28.430 [2024-10-08 18:43:56.698741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.430 [2024-10-08 18:43:56.698773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.430 qpair failed and we were unable to recover it. 00:33:28.430 [2024-10-08 18:43:56.698947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.430 [2024-10-08 18:43:56.699006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.430 qpair failed and we were unable to recover it. 00:33:28.430 [2024-10-08 18:43:56.699249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.430 [2024-10-08 18:43:56.699300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.430 qpair failed and we were unable to recover it. 00:33:28.430 [2024-10-08 18:43:56.699532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.430 [2024-10-08 18:43:56.699558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.431 qpair failed and we were unable to recover it. 00:33:28.431 [2024-10-08 18:43:56.699767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.431 [2024-10-08 18:43:56.699820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.431 qpair failed and we were unable to recover it. 00:33:28.431 [2024-10-08 18:43:56.700020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.431 [2024-10-08 18:43:56.700077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.431 qpair failed and we were unable to recover it. 
00:33:28.431 [2024-10-08 18:43:56.700225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.431 [2024-10-08 18:43:56.700277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.431 qpair failed and we were unable to recover it. 00:33:28.431 [2024-10-08 18:43:56.700466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.431 [2024-10-08 18:43:56.700493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.431 qpair failed and we were unable to recover it. 00:33:28.431 [2024-10-08 18:43:56.700716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.431 [2024-10-08 18:43:56.700743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.431 qpair failed and we were unable to recover it. 00:33:28.431 [2024-10-08 18:43:56.700994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.431 [2024-10-08 18:43:56.701046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.431 qpair failed and we were unable to recover it. 00:33:28.431 [2024-10-08 18:43:56.701231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.431 [2024-10-08 18:43:56.701282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.431 qpair failed and we were unable to recover it. 00:33:28.431 [2024-10-08 18:43:56.701394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.431 [2024-10-08 18:43:56.701459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.431 qpair failed and we were unable to recover it. 00:33:28.431 [2024-10-08 18:43:56.701619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.431 [2024-10-08 18:43:56.701645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.431 qpair failed and we were unable to recover it. 00:33:28.431 [2024-10-08 18:43:56.701855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.431 [2024-10-08 18:43:56.701909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.431 qpair failed and we were unable to recover it. 00:33:28.431 [2024-10-08 18:43:56.702110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.431 [2024-10-08 18:43:56.702166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.431 qpair failed and we were unable to recover it. 00:33:28.431 [2024-10-08 18:43:56.702339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.431 [2024-10-08 18:43:56.702391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.431 qpair failed and we were unable to recover it. 
00:33:28.431 [2024-10-08 18:43:56.702575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.431 [2024-10-08 18:43:56.702602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.431 qpair failed and we were unable to recover it. 00:33:28.431 [2024-10-08 18:43:56.702751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.431 [2024-10-08 18:43:56.702808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.431 qpair failed and we were unable to recover it. 00:33:28.431 [2024-10-08 18:43:56.702946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.431 [2024-10-08 18:43:56.702997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.431 qpair failed and we were unable to recover it. 00:33:28.431 [2024-10-08 18:43:56.703185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.431 [2024-10-08 18:43:56.703230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.431 qpair failed and we were unable to recover it. 00:33:28.431 [2024-10-08 18:43:56.703453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.431 [2024-10-08 18:43:56.703505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.431 qpair failed and we were unable to recover it. 00:33:28.431 [2024-10-08 18:43:56.703708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.431 [2024-10-08 18:43:56.703774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.431 qpair failed and we were unable to recover it. 00:33:28.431 [2024-10-08 18:43:56.703956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.431 [2024-10-08 18:43:56.704019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.431 qpair failed and we were unable to recover it. 00:33:28.431 [2024-10-08 18:43:56.704150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.431 [2024-10-08 18:43:56.704203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.431 qpair failed and we were unable to recover it. 00:33:28.431 [2024-10-08 18:43:56.704382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.431 [2024-10-08 18:43:56.704408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.431 qpair failed and we were unable to recover it. 00:33:28.431 [2024-10-08 18:43:56.704597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.431 [2024-10-08 18:43:56.704624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.431 qpair failed and we were unable to recover it. 
00:33:28.431 [2024-10-08 18:43:56.704819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.431 [2024-10-08 18:43:56.704875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.431 qpair failed and we were unable to recover it. 00:33:28.431 [2024-10-08 18:43:56.705086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.431 [2024-10-08 18:43:56.705148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.431 qpair failed and we were unable to recover it. 00:33:28.431 [2024-10-08 18:43:56.705359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.431 [2024-10-08 18:43:56.705412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.431 qpair failed and we were unable to recover it. 00:33:28.431 [2024-10-08 18:43:56.705568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.431 [2024-10-08 18:43:56.705594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.431 qpair failed and we were unable to recover it. 00:33:28.431 [2024-10-08 18:43:56.705754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.431 [2024-10-08 18:43:56.705818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.431 qpair failed and we were unable to recover it. 00:33:28.431 [2024-10-08 18:43:56.706036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.431 [2024-10-08 18:43:56.706087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.431 qpair failed and we were unable to recover it. 00:33:28.431 [2024-10-08 18:43:56.706270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.431 [2024-10-08 18:43:56.706337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.431 qpair failed and we were unable to recover it. 00:33:28.431 [2024-10-08 18:43:56.706573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.431 [2024-10-08 18:43:56.706600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.431 qpair failed and we were unable to recover it. 00:33:28.431 [2024-10-08 18:43:56.706779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.431 [2024-10-08 18:43:56.706832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.431 qpair failed and we were unable to recover it. 00:33:28.431 [2024-10-08 18:43:56.707084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.431 [2024-10-08 18:43:56.707136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.431 qpair failed and we were unable to recover it. 
00:33:28.431 [2024-10-08 18:43:56.707336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.431 [2024-10-08 18:43:56.707387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.431 qpair failed and we were unable to recover it. 00:33:28.431 [2024-10-08 18:43:56.707534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.431 [2024-10-08 18:43:56.707559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.431 qpair failed and we were unable to recover it. 00:33:28.431 [2024-10-08 18:43:56.707816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.431 [2024-10-08 18:43:56.707869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.431 qpair failed and we were unable to recover it. 00:33:28.431 [2024-10-08 18:43:56.708012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.431 [2024-10-08 18:43:56.708064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.431 qpair failed and we were unable to recover it. 00:33:28.431 [2024-10-08 18:43:56.708243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.431 [2024-10-08 18:43:56.708299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.431 qpair failed and we were unable to recover it. 00:33:28.431 [2024-10-08 18:43:56.708443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.431 [2024-10-08 18:43:56.708470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.431 qpair failed and we were unable to recover it. 00:33:28.431 [2024-10-08 18:43:56.708594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.431 [2024-10-08 18:43:56.708628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.431 qpair failed and we were unable to recover it. 00:33:28.431 [2024-10-08 18:43:56.708817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.432 [2024-10-08 18:43:56.708870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.432 qpair failed and we were unable to recover it. 00:33:28.432 [2024-10-08 18:43:56.708999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.432 [2024-10-08 18:43:56.709049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.432 qpair failed and we were unable to recover it. 00:33:28.432 [2024-10-08 18:43:56.709213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.432 [2024-10-08 18:43:56.709264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.432 qpair failed and we were unable to recover it. 
00:33:28.432 [2024-10-08 18:43:56.709448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.432 [2024-10-08 18:43:56.709474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.432 qpair failed and we were unable to recover it. 00:33:28.432 [2024-10-08 18:43:56.709655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.432 [2024-10-08 18:43:56.709684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.432 qpair failed and we were unable to recover it. 00:33:28.432 [2024-10-08 18:43:56.709827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.432 [2024-10-08 18:43:56.709891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.432 qpair failed and we were unable to recover it. 00:33:28.432 [2024-10-08 18:43:56.710130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.432 [2024-10-08 18:43:56.710157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.432 qpair failed and we were unable to recover it. 00:33:28.432 [2024-10-08 18:43:56.710358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.432 [2024-10-08 18:43:56.710408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.432 qpair failed and we were unable to recover it. 00:33:28.432 [2024-10-08 18:43:56.710633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.432 [2024-10-08 18:43:56.710672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.432 qpair failed and we were unable to recover it. 00:33:28.432 [2024-10-08 18:43:56.710811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.432 [2024-10-08 18:43:56.710841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.432 qpair failed and we were unable to recover it. 00:33:28.432 [2024-10-08 18:43:56.711038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.432 [2024-10-08 18:43:56.711093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.432 qpair failed and we were unable to recover it. 00:33:28.432 [2024-10-08 18:43:56.711299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.432 [2024-10-08 18:43:56.711351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.432 qpair failed and we were unable to recover it. 00:33:28.432 [2024-10-08 18:43:56.711518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.432 [2024-10-08 18:43:56.711544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.432 qpair failed and we were unable to recover it. 
00:33:28.432 [2024-10-08 18:43:56.711672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.432 [2024-10-08 18:43:56.711698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.432 qpair failed and we were unable to recover it. 00:33:28.432 [2024-10-08 18:43:56.711842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.432 [2024-10-08 18:43:56.711894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.432 qpair failed and we were unable to recover it. 00:33:28.432 [2024-10-08 18:43:56.712109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.432 [2024-10-08 18:43:56.712154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.432 qpair failed and we were unable to recover it. 00:33:28.432 [2024-10-08 18:43:56.712325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.432 [2024-10-08 18:43:56.712378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.432 qpair failed and we were unable to recover it. 00:33:28.432 [2024-10-08 18:43:56.712573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.432 [2024-10-08 18:43:56.712608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.432 qpair failed and we were unable to recover it. 00:33:28.432 [2024-10-08 18:43:56.712791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.432 [2024-10-08 18:43:56.712837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.432 qpair failed and we were unable to recover it. 00:33:28.432 [2024-10-08 18:43:56.713018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.432 [2024-10-08 18:43:56.713066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.432 qpair failed and we were unable to recover it. 00:33:28.432 [2024-10-08 18:43:56.713229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.432 [2024-10-08 18:43:56.713277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.432 qpair failed and we were unable to recover it. 00:33:28.432 [2024-10-08 18:43:56.713509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.432 [2024-10-08 18:43:56.713535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.432 qpair failed and we were unable to recover it. 00:33:28.432 [2024-10-08 18:43:56.713732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.432 [2024-10-08 18:43:56.713784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.432 qpair failed and we were unable to recover it. 
00:33:28.432 [2024-10-08 18:43:56.713928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.432 [2024-10-08 18:43:56.713955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.432 qpair failed and we were unable to recover it. 00:33:28.432 [2024-10-08 18:43:56.714074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.432 [2024-10-08 18:43:56.714101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.432 qpair failed and we were unable to recover it. 00:33:28.432 [2024-10-08 18:43:56.714252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.432 [2024-10-08 18:43:56.714288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.432 qpair failed and we were unable to recover it. 00:33:28.432 [2024-10-08 18:43:56.714531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.432 [2024-10-08 18:43:56.714558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.432 qpair failed and we were unable to recover it. 00:33:28.432 [2024-10-08 18:43:56.714731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.432 [2024-10-08 18:43:56.714758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.432 qpair failed and we were unable to recover it. 00:33:28.432 [2024-10-08 18:43:56.714927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.432 [2024-10-08 18:43:56.714953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.432 qpair failed and we were unable to recover it. 00:33:28.432 [2024-10-08 18:43:56.715096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.432 [2024-10-08 18:43:56.715121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.432 qpair failed and we were unable to recover it. 00:33:28.432 [2024-10-08 18:43:56.715274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.432 [2024-10-08 18:43:56.715299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.432 qpair failed and we were unable to recover it. 00:33:28.432 [2024-10-08 18:43:56.715427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.432 [2024-10-08 18:43:56.715453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.432 qpair failed and we were unable to recover it. 00:33:28.432 [2024-10-08 18:43:56.715717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.432 [2024-10-08 18:43:56.715777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.432 qpair failed and we were unable to recover it. 
00:33:28.432 [2024-10-08 18:43:56.715941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.432 [2024-10-08 18:43:56.715967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.432 qpair failed and we were unable to recover it. 00:33:28.432 [2024-10-08 18:43:56.716081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.432 [2024-10-08 18:43:56.716136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.432 qpair failed and we were unable to recover it. 00:33:28.432 [2024-10-08 18:43:56.716359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.432 [2024-10-08 18:43:56.716416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.432 qpair failed and we were unable to recover it. 00:33:28.432 [2024-10-08 18:43:56.716645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.432 [2024-10-08 18:43:56.716676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.432 qpair failed and we were unable to recover it. 00:33:28.432 [2024-10-08 18:43:56.716855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.432 [2024-10-08 18:43:56.716912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.432 qpair failed and we were unable to recover it. 00:33:28.432 [2024-10-08 18:43:56.717157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.432 [2024-10-08 18:43:56.717215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.432 qpair failed and we were unable to recover it. 00:33:28.432 [2024-10-08 18:43:56.717401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.432 [2024-10-08 18:43:56.717447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.432 qpair failed and we were unable to recover it. 00:33:28.433 [2024-10-08 18:43:56.717660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.433 [2024-10-08 18:43:56.717687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.433 qpair failed and we were unable to recover it. 00:33:28.433 [2024-10-08 18:43:56.717869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.433 [2024-10-08 18:43:56.717895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.433 qpair failed and we were unable to recover it. 00:33:28.433 [2024-10-08 18:43:56.718079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.433 [2024-10-08 18:43:56.718131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.433 qpair failed and we were unable to recover it. 
00:33:28.433 [2024-10-08 18:43:56.718345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.433 [2024-10-08 18:43:56.718396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.433 qpair failed and we were unable to recover it. 00:33:28.433 [2024-10-08 18:43:56.718531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.433 [2024-10-08 18:43:56.718558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.433 qpair failed and we were unable to recover it. 00:33:28.433 [2024-10-08 18:43:56.718774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.433 [2024-10-08 18:43:56.718821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.433 qpair failed and we were unable to recover it. 00:33:28.433 [2024-10-08 18:43:56.719008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.433 [2024-10-08 18:43:56.719058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.433 qpair failed and we were unable to recover it. 00:33:28.433 [2024-10-08 18:43:56.719187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.433 [2024-10-08 18:43:56.719243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.433 qpair failed and we were unable to recover it. 00:33:28.433 [2024-10-08 18:43:56.719494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.433 [2024-10-08 18:43:56.719545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.433 qpair failed and we were unable to recover it. 00:33:28.433 [2024-10-08 18:43:56.719661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.433 [2024-10-08 18:43:56.719688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.433 qpair failed and we were unable to recover it. 00:33:28.433 [2024-10-08 18:43:56.719862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.433 [2024-10-08 18:43:56.719918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.433 qpair failed and we were unable to recover it. 00:33:28.433 [2024-10-08 18:43:56.720141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.433 [2024-10-08 18:43:56.720167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.433 qpair failed and we were unable to recover it. 00:33:28.433 [2024-10-08 18:43:56.720329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.433 [2024-10-08 18:43:56.720386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.433 qpair failed and we were unable to recover it. 
00:33:28.433 [2024-10-08 18:43:56.720508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.433 [2024-10-08 18:43:56.720534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.433 qpair failed and we were unable to recover it. 00:33:28.433 [2024-10-08 18:43:56.720759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.433 [2024-10-08 18:43:56.720786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.433 qpair failed and we were unable to recover it. 00:33:28.433 [2024-10-08 18:43:56.720910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.433 [2024-10-08 18:43:56.720936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.433 qpair failed and we were unable to recover it. 00:33:28.433 [2024-10-08 18:43:56.721125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.433 [2024-10-08 18:43:56.721152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.433 qpair failed and we were unable to recover it. 00:33:28.433 [2024-10-08 18:43:56.721295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.433 [2024-10-08 18:43:56.721321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.433 qpair failed and we were unable to recover it. 00:33:28.433 [2024-10-08 18:43:56.721521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.433 [2024-10-08 18:43:56.721547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.433 qpair failed and we were unable to recover it. 00:33:28.433 [2024-10-08 18:43:56.721678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.433 [2024-10-08 18:43:56.721704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.433 qpair failed and we were unable to recover it. 00:33:28.433 [2024-10-08 18:43:56.721891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.433 [2024-10-08 18:43:56.721947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.433 qpair failed and we were unable to recover it. 00:33:28.433 [2024-10-08 18:43:56.722181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.433 [2024-10-08 18:43:56.722238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.433 qpair failed and we were unable to recover it. 00:33:28.433 [2024-10-08 18:43:56.722391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.433 [2024-10-08 18:43:56.722417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.433 qpair failed and we were unable to recover it. 
00:33:28.433 [2024-10-08 18:43:56.722638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.433 [2024-10-08 18:43:56.722671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.433 qpair failed and we were unable to recover it. 00:33:28.433 [2024-10-08 18:43:56.722849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.433 [2024-10-08 18:43:56.722902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.433 qpair failed and we were unable to recover it. 00:33:28.433 [2024-10-08 18:43:56.723105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.433 [2024-10-08 18:43:56.723173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.433 qpair failed and we were unable to recover it. 00:33:28.433 [2024-10-08 18:43:56.723373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.433 [2024-10-08 18:43:56.723424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.433 qpair failed and we were unable to recover it. 00:33:28.433 [2024-10-08 18:43:56.723598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.433 [2024-10-08 18:43:56.723625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.433 qpair failed and we were unable to recover it. 00:33:28.433 [2024-10-08 18:43:56.723881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.433 [2024-10-08 18:43:56.723944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.433 qpair failed and we were unable to recover it. 00:33:28.433 [2024-10-08 18:43:56.724205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.433 [2024-10-08 18:43:56.724258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.433 qpair failed and we were unable to recover it. 00:33:28.433 [2024-10-08 18:43:56.724414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.433 [2024-10-08 18:43:56.724466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.433 qpair failed and we were unable to recover it. 00:33:28.433 [2024-10-08 18:43:56.724618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.433 [2024-10-08 18:43:56.724644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.433 qpair failed and we were unable to recover it. 00:33:28.433 [2024-10-08 18:43:56.724820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.433 [2024-10-08 18:43:56.724870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.433 qpair failed and we were unable to recover it. 
00:33:28.433 [2024-10-08 18:43:56.725070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.433 [2024-10-08 18:43:56.725122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.433 qpair failed and we were unable to recover it. 00:33:28.433 [2024-10-08 18:43:56.725244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.433 [2024-10-08 18:43:56.725300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.433 qpair failed and we were unable to recover it. 00:33:28.433 [2024-10-08 18:43:56.725458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.433 [2024-10-08 18:43:56.725484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.433 qpair failed and we were unable to recover it. 00:33:28.433 [2024-10-08 18:43:56.725683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.433 [2024-10-08 18:43:56.725710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.433 qpair failed and we were unable to recover it. 00:33:28.433 [2024-10-08 18:43:56.725853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.433 [2024-10-08 18:43:56.725909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.433 qpair failed and we were unable to recover it. 00:33:28.433 [2024-10-08 18:43:56.726080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.433 [2024-10-08 18:43:56.726127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.433 qpair failed and we were unable to recover it. 00:33:28.433 [2024-10-08 18:43:56.726273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.434 [2024-10-08 18:43:56.726324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.434 qpair failed and we were unable to recover it. 00:33:28.434 [2024-10-08 18:43:56.726461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.434 [2024-10-08 18:43:56.726490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.434 qpair failed and we were unable to recover it. 00:33:28.434 [2024-10-08 18:43:56.726645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.434 [2024-10-08 18:43:56.726686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.434 qpair failed and we were unable to recover it. 00:33:28.434 [2024-10-08 18:43:56.726938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.434 [2024-10-08 18:43:56.726990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.434 qpair failed and we were unable to recover it. 
00:33:28.434 [2024-10-08 18:43:56.727134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.434 [2024-10-08 18:43:56.727187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.434 qpair failed and we were unable to recover it. 00:33:28.434 [2024-10-08 18:43:56.727408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.434 [2024-10-08 18:43:56.727458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.434 qpair failed and we were unable to recover it. 00:33:28.434 [2024-10-08 18:43:56.727610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.434 [2024-10-08 18:43:56.727636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.434 qpair failed and we were unable to recover it. 00:33:28.434 [2024-10-08 18:43:56.727816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.434 [2024-10-08 18:43:56.727843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.434 qpair failed and we were unable to recover it. 00:33:28.434 [2024-10-08 18:43:56.728049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.434 [2024-10-08 18:43:56.728108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.434 qpair failed and we were unable to recover it. 00:33:28.434 [2024-10-08 18:43:56.728299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.434 [2024-10-08 18:43:56.728351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.434 qpair failed and we were unable to recover it. 00:33:28.434 [2024-10-08 18:43:56.728543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.434 [2024-10-08 18:43:56.728569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.434 qpair failed and we were unable to recover it. 00:33:28.434 [2024-10-08 18:43:56.728695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.434 [2024-10-08 18:43:56.728722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.434 qpair failed and we were unable to recover it. 00:33:28.434 [2024-10-08 18:43:56.728895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.434 [2024-10-08 18:43:56.728956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.434 qpair failed and we were unable to recover it. 00:33:28.434 [2024-10-08 18:43:56.729136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.434 [2024-10-08 18:43:56.729186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.434 qpair failed and we were unable to recover it. 
00:33:28.434 [2024-10-08 18:43:56.729398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.434 [2024-10-08 18:43:56.729452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.434 qpair failed and we were unable to recover it. 00:33:28.434 [2024-10-08 18:43:56.729635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.434 [2024-10-08 18:43:56.729669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.434 qpair failed and we were unable to recover it. 00:33:28.434 [2024-10-08 18:43:56.729801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.434 [2024-10-08 18:43:56.729827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.434 qpair failed and we were unable to recover it. 00:33:28.434 [2024-10-08 18:43:56.729996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.434 [2024-10-08 18:43:56.730046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.434 qpair failed and we were unable to recover it. 00:33:28.434 [2024-10-08 18:43:56.730232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.434 [2024-10-08 18:43:56.730282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.434 qpair failed and we were unable to recover it. 00:33:28.434 [2024-10-08 18:43:56.730454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.434 [2024-10-08 18:43:56.730480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.434 qpair failed and we were unable to recover it. 00:33:28.434 [2024-10-08 18:43:56.730637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.434 [2024-10-08 18:43:56.730680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.434 qpair failed and we were unable to recover it. 00:33:28.434 [2024-10-08 18:43:56.730842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.434 [2024-10-08 18:43:56.730868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.434 qpair failed and we were unable to recover it. 00:33:28.434 [2024-10-08 18:43:56.731110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.434 [2024-10-08 18:43:56.731162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.434 qpair failed and we were unable to recover it. 00:33:28.434 [2024-10-08 18:43:56.731356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.434 [2024-10-08 18:43:56.731406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.434 qpair failed and we were unable to recover it. 
00:33:28.434 [2024-10-08 18:43:56.731616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.434 [2024-10-08 18:43:56.731642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.434 qpair failed and we were unable to recover it. 00:33:28.434 [2024-10-08 18:43:56.731781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.434 [2024-10-08 18:43:56.731808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.434 qpair failed and we were unable to recover it. 00:33:28.434 [2024-10-08 18:43:56.732017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.434 [2024-10-08 18:43:56.732071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.434 qpair failed and we were unable to recover it. 00:33:28.434 [2024-10-08 18:43:56.732277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.434 [2024-10-08 18:43:56.732327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.434 qpair failed and we were unable to recover it. 00:33:28.434 [2024-10-08 18:43:56.732502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.434 [2024-10-08 18:43:56.732528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.434 qpair failed and we were unable to recover it. 00:33:28.434 [2024-10-08 18:43:56.732714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.434 [2024-10-08 18:43:56.732780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.434 qpair failed and we were unable to recover it. 00:33:28.434 [2024-10-08 18:43:56.733030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.434 [2024-10-08 18:43:56.733083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.434 qpair failed and we were unable to recover it. 00:33:28.434 [2024-10-08 18:43:56.733259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.434 [2024-10-08 18:43:56.733314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.434 qpair failed and we were unable to recover it. 00:33:28.434 [2024-10-08 18:43:56.733509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.434 [2024-10-08 18:43:56.733536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.434 qpair failed and we were unable to recover it. 00:33:28.434 [2024-10-08 18:43:56.733847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.434 [2024-10-08 18:43:56.733896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.434 qpair failed and we were unable to recover it. 
00:33:28.434 [2024-10-08 18:43:56.734028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.434 [2024-10-08 18:43:56.734080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.434 qpair failed and we were unable to recover it. 00:33:28.434 [2024-10-08 18:43:56.734268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.434 [2024-10-08 18:43:56.734320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.434 qpair failed and we were unable to recover it. 00:33:28.434 [2024-10-08 18:43:56.734521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.434 [2024-10-08 18:43:56.734547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.434 qpair failed and we were unable to recover it. 00:33:28.434 [2024-10-08 18:43:56.734632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.434 [2024-10-08 18:43:56.734664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.434 qpair failed and we were unable to recover it. 00:33:28.434 [2024-10-08 18:43:56.734811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.434 [2024-10-08 18:43:56.734869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.434 qpair failed and we were unable to recover it. 00:33:28.434 [2024-10-08 18:43:56.735011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.434 [2024-10-08 18:43:56.735061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.434 qpair failed and we were unable to recover it. 00:33:28.435 [2024-10-08 18:43:56.735256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.435 [2024-10-08 18:43:56.735300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.435 qpair failed and we were unable to recover it. 00:33:28.435 [2024-10-08 18:43:56.735516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.435 [2024-10-08 18:43:56.735542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.435 qpair failed and we were unable to recover it. 00:33:28.435 [2024-10-08 18:43:56.735783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.435 [2024-10-08 18:43:56.735836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.435 qpair failed and we were unable to recover it. 00:33:28.435 [2024-10-08 18:43:56.736076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.435 [2024-10-08 18:43:56.736128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.435 qpair failed and we were unable to recover it. 
00:33:28.435 [2024-10-08 18:43:56.736240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.435 [2024-10-08 18:43:56.736297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.435 qpair failed and we were unable to recover it. 00:33:28.435 [2024-10-08 18:43:56.736477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.435 [2024-10-08 18:43:56.736503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.435 qpair failed and we were unable to recover it. 00:33:28.435 [2024-10-08 18:43:56.736794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.435 [2024-10-08 18:43:56.736847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.435 qpair failed and we were unable to recover it. 00:33:28.435 [2024-10-08 18:43:56.737081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.435 [2024-10-08 18:43:56.737126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.435 qpair failed and we were unable to recover it. 00:33:28.435 [2024-10-08 18:43:56.737342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.435 [2024-10-08 18:43:56.737392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.435 qpair failed and we were unable to recover it. 00:33:28.435 [2024-10-08 18:43:56.737621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.435 [2024-10-08 18:43:56.737647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.435 qpair failed and we were unable to recover it. 00:33:28.435 [2024-10-08 18:43:56.737909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.435 [2024-10-08 18:43:56.737971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.435 qpair failed and we were unable to recover it. 00:33:28.435 [2024-10-08 18:43:56.738088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.435 [2024-10-08 18:43:56.738141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.435 qpair failed and we were unable to recover it. 00:33:28.435 [2024-10-08 18:43:56.738412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.435 [2024-10-08 18:43:56.738465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.435 qpair failed and we were unable to recover it. 00:33:28.435 [2024-10-08 18:43:56.738682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.435 [2024-10-08 18:43:56.738710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.435 qpair failed and we were unable to recover it. 
00:33:28.435 [2024-10-08 18:43:56.738880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.435 [2024-10-08 18:43:56.738907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.435 qpair failed and we were unable to recover it. 00:33:28.435 [2024-10-08 18:43:56.739099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.435 [2024-10-08 18:43:56.739149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.435 qpair failed and we were unable to recover it. 00:33:28.435 [2024-10-08 18:43:56.739408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.435 [2024-10-08 18:43:56.739458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.435 qpair failed and we were unable to recover it. 00:33:28.435 [2024-10-08 18:43:56.739613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.435 [2024-10-08 18:43:56.739639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.435 qpair failed and we were unable to recover it. 00:33:28.435 [2024-10-08 18:43:56.739736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.435 [2024-10-08 18:43:56.739762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.435 qpair failed and we were unable to recover it. 00:33:28.435 [2024-10-08 18:43:56.739893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.435 [2024-10-08 18:43:56.739956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.435 qpair failed and we were unable to recover it. 00:33:28.435 [2024-10-08 18:43:56.740073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.435 [2024-10-08 18:43:56.740129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.435 qpair failed and we were unable to recover it. 00:33:28.435 [2024-10-08 18:43:56.740301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.435 [2024-10-08 18:43:56.740356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.435 qpair failed and we were unable to recover it. 00:33:28.435 [2024-10-08 18:43:56.740503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.435 [2024-10-08 18:43:56.740542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.435 qpair failed and we were unable to recover it. 00:33:28.435 [2024-10-08 18:43:56.740737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.435 [2024-10-08 18:43:56.740791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.435 qpair failed and we were unable to recover it. 
00:33:28.435 [2024-10-08 18:43:56.740907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.435 [2024-10-08 18:43:56.740933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.435 qpair failed and we were unable to recover it. 00:33:28.435 [2024-10-08 18:43:56.741149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.435 [2024-10-08 18:43:56.741175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.435 qpair failed and we were unable to recover it. 00:33:28.435 [2024-10-08 18:43:56.741309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.435 [2024-10-08 18:43:56.741335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.435 qpair failed and we were unable to recover it. 00:33:28.435 [2024-10-08 18:43:56.741483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.435 [2024-10-08 18:43:56.741510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.435 qpair failed and we were unable to recover it. 00:33:28.435 [2024-10-08 18:43:56.741644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.435 [2024-10-08 18:43:56.741676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.435 qpair failed and we were unable to recover it. 00:33:28.435 [2024-10-08 18:43:56.741797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.435 [2024-10-08 18:43:56.741824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.435 qpair failed and we were unable to recover it. 00:33:28.435 [2024-10-08 18:43:56.742036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.435 [2024-10-08 18:43:56.742065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.435 qpair failed and we were unable to recover it. 00:33:28.435 [2024-10-08 18:43:56.742242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.435 [2024-10-08 18:43:56.742268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.435 qpair failed and we were unable to recover it. 00:33:28.435 [2024-10-08 18:43:56.742424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.435 [2024-10-08 18:43:56.742450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.435 qpair failed and we were unable to recover it. 00:33:28.435 [2024-10-08 18:43:56.742636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.435 [2024-10-08 18:43:56.742674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.435 qpair failed and we were unable to recover it. 
00:33:28.435 [2024-10-08 18:43:56.742821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.435 [2024-10-08 18:43:56.742877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.435 qpair failed and we were unable to recover it. 00:33:28.435 [2024-10-08 18:43:56.743010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.436 [2024-10-08 18:43:56.743067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.436 qpair failed and we were unable to recover it. 00:33:28.436 [2024-10-08 18:43:56.743186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.436 [2024-10-08 18:43:56.743211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.436 qpair failed and we were unable to recover it. 00:33:28.436 [2024-10-08 18:43:56.743395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.436 [2024-10-08 18:43:56.743422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.436 qpair failed and we were unable to recover it. 00:33:28.436 [2024-10-08 18:43:56.743610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.436 [2024-10-08 18:43:56.743637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.436 qpair failed and we were unable to recover it. 00:33:28.436 [2024-10-08 18:43:56.743873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.436 [2024-10-08 18:43:56.743921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.436 qpair failed and we were unable to recover it. 00:33:28.436 [2024-10-08 18:43:56.744161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.436 [2024-10-08 18:43:56.744215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.436 qpair failed and we were unable to recover it. 00:33:28.436 [2024-10-08 18:43:56.744355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.436 [2024-10-08 18:43:56.744409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.436 qpair failed and we were unable to recover it. 00:33:28.436 [2024-10-08 18:43:56.744588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.436 [2024-10-08 18:43:56.744614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.436 qpair failed and we were unable to recover it. 00:33:28.436 [2024-10-08 18:43:56.744818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.436 [2024-10-08 18:43:56.744871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.436 qpair failed and we were unable to recover it. 
00:33:28.436 [2024-10-08 18:43:56.745052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.436 [2024-10-08 18:43:56.745106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.436 qpair failed and we were unable to recover it. 00:33:28.436 [2024-10-08 18:43:56.745310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.436 [2024-10-08 18:43:56.745361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.436 qpair failed and we were unable to recover it. 00:33:28.436 [2024-10-08 18:43:56.745601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.436 [2024-10-08 18:43:56.745627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.436 qpair failed and we were unable to recover it. 00:33:28.436 [2024-10-08 18:43:56.745752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.436 [2024-10-08 18:43:56.745778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.436 qpair failed and we were unable to recover it. 00:33:28.436 [2024-10-08 18:43:56.745964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.436 [2024-10-08 18:43:56.746012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.436 qpair failed and we were unable to recover it. 00:33:28.436 [2024-10-08 18:43:56.746173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.436 [2024-10-08 18:43:56.746219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.436 qpair failed and we were unable to recover it. 00:33:28.436 [2024-10-08 18:43:56.746403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.436 [2024-10-08 18:43:56.746453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.436 qpair failed and we were unable to recover it. 00:33:28.436 [2024-10-08 18:43:56.746648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.436 [2024-10-08 18:43:56.746681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.436 qpair failed and we were unable to recover it. 00:33:28.436 [2024-10-08 18:43:56.746875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.436 [2024-10-08 18:43:56.746901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.436 qpair failed and we were unable to recover it. 00:33:28.436 [2024-10-08 18:43:56.747155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.436 [2024-10-08 18:43:56.747206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.436 qpair failed and we were unable to recover it. 
00:33:28.436 [2024-10-08 18:43:56.747410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.436 [2024-10-08 18:43:56.747464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.436 qpair failed and we were unable to recover it. 00:33:28.436 [2024-10-08 18:43:56.747592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.436 [2024-10-08 18:43:56.747618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.436 qpair failed and we were unable to recover it. 00:33:28.436 [2024-10-08 18:43:56.747793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.436 [2024-10-08 18:43:56.747820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.436 qpair failed and we were unable to recover it. 00:33:28.436 [2024-10-08 18:43:56.748024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.436 [2024-10-08 18:43:56.748074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.436 qpair failed and we were unable to recover it. 00:33:28.436 [2024-10-08 18:43:56.748253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.436 [2024-10-08 18:43:56.748307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.436 qpair failed and we were unable to recover it. 00:33:28.436 [2024-10-08 18:43:56.748520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.436 [2024-10-08 18:43:56.748546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.436 qpair failed and we were unable to recover it. 00:33:28.436 [2024-10-08 18:43:56.748734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.436 [2024-10-08 18:43:56.748761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.436 qpair failed and we were unable to recover it. 00:33:28.436 [2024-10-08 18:43:56.748899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.436 [2024-10-08 18:43:56.748949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.436 qpair failed and we were unable to recover it. 00:33:28.436 [2024-10-08 18:43:56.749127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.436 [2024-10-08 18:43:56.749178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.436 qpair failed and we were unable to recover it. 00:33:28.436 [2024-10-08 18:43:56.749352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.436 [2024-10-08 18:43:56.749406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.436 qpair failed and we were unable to recover it. 
00:33:28.436 [2024-10-08 18:43:56.749566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.436 [2024-10-08 18:43:56.749592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.436 qpair failed and we were unable to recover it. 00:33:28.436 [2024-10-08 18:43:56.749753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.436 [2024-10-08 18:43:56.749805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.436 qpair failed and we were unable to recover it. 00:33:28.436 [2024-10-08 18:43:56.749980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.436 [2024-10-08 18:43:56.750032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.436 qpair failed and we were unable to recover it. 00:33:28.436 [2024-10-08 18:43:56.750181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.436 [2024-10-08 18:43:56.750231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.436 qpair failed and we were unable to recover it. 00:33:28.436 [2024-10-08 18:43:56.750399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.436 [2024-10-08 18:43:56.750423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.436 qpair failed and we were unable to recover it. 00:33:28.436 [2024-10-08 18:43:56.750598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.436 [2024-10-08 18:43:56.750623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.436 qpair failed and we were unable to recover it. 00:33:28.436 [2024-10-08 18:43:56.750780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.436 [2024-10-08 18:43:56.750831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.436 qpair failed and we were unable to recover it. 00:33:28.436 [2024-10-08 18:43:56.750991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.436 [2024-10-08 18:43:56.751040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.436 qpair failed and we were unable to recover it. 00:33:28.436 [2024-10-08 18:43:56.751181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.436 [2024-10-08 18:43:56.751228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.436 qpair failed and we were unable to recover it. 00:33:28.436 [2024-10-08 18:43:56.751353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.436 [2024-10-08 18:43:56.751378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.436 qpair failed and we were unable to recover it. 
00:33:28.436 [2024-10-08 18:43:56.751523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.437 [2024-10-08 18:43:56.751550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.437 qpair failed and we were unable to recover it. 00:33:28.437 [2024-10-08 18:43:56.751721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.437 [2024-10-08 18:43:56.751747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.437 qpair failed and we were unable to recover it. 00:33:28.437 [2024-10-08 18:43:56.751870] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9205f0 is same with the state(6) to be set 00:33:28.437 Read completed with error (sct=0, sc=8) 00:33:28.437 starting I/O failed 00:33:28.437 Read completed with error (sct=0, sc=8) 00:33:28.437 starting I/O failed 00:33:28.437 Read completed with error (sct=0, sc=8) 00:33:28.437 starting I/O failed 00:33:28.437 Read completed with error (sct=0, sc=8) 00:33:28.437 starting I/O failed 00:33:28.437 Read completed with error (sct=0, sc=8) 00:33:28.437 starting I/O failed 00:33:28.437 Read completed with error (sct=0, sc=8) 00:33:28.437 starting I/O failed 00:33:28.437 Read completed with error (sct=0, sc=8) 00:33:28.437 starting I/O failed 00:33:28.437 Write completed with error (sct=0, sc=8) 00:33:28.437 starting I/O failed 00:33:28.437 Write completed with error (sct=0, sc=8) 00:33:28.437 starting I/O failed 00:33:28.437 Write completed with error (sct=0, sc=8) 00:33:28.437 starting I/O failed 00:33:28.437 Write completed with error (sct=0, sc=8) 00:33:28.437 starting I/O failed 00:33:28.437 Write completed with error (sct=0, sc=8) 00:33:28.437 starting I/O failed 00:33:28.437 Read completed with error (sct=0, sc=8) 00:33:28.437 starting I/O failed 00:33:28.437 Write completed with error (sct=0, sc=8) 00:33:28.437 starting I/O failed 00:33:28.437 Read completed with error (sct=0, sc=8) 00:33:28.437 starting I/O failed 00:33:28.437 Write completed with error (sct=0, sc=8) 00:33:28.437 starting I/O failed 00:33:28.437 Read completed with error (sct=0, sc=8) 00:33:28.437 starting I/O failed 00:33:28.437 Read completed with error (sct=0, sc=8) 00:33:28.437 starting I/O failed 00:33:28.437 Read completed with error (sct=0, sc=8) 00:33:28.437 starting I/O failed 00:33:28.437 Read completed with error (sct=0, sc=8) 00:33:28.437 starting I/O failed 00:33:28.437 Read completed with error (sct=0, sc=8) 00:33:28.437 starting I/O failed 00:33:28.437 Write completed with error (sct=0, sc=8) 00:33:28.437 starting I/O failed 00:33:28.437 Read completed with error (sct=0, sc=8) 00:33:28.437 starting I/O failed 00:33:28.437 Write completed with error (sct=0, sc=8) 00:33:28.437 starting I/O failed 00:33:28.437 Read completed with error (sct=0, sc=8) 00:33:28.437 starting I/O failed 00:33:28.437 Write completed with error (sct=0, sc=8) 00:33:28.437 starting I/O failed 00:33:28.437 Write completed with error (sct=0, sc=8) 00:33:28.437 starting I/O failed 00:33:28.437 Read completed with error (sct=0, sc=8) 00:33:28.437 starting I/O failed 00:33:28.437 Write completed with error (sct=0, sc=8) 00:33:28.437 starting I/O failed 00:33:28.437 Read completed with error (sct=0, sc=8) 00:33:28.437 starting I/O failed 00:33:28.437 Write completed with error (sct=0, sc=8) 00:33:28.437 starting I/O failed 00:33:28.437 
Read completed with error (sct=0, sc=8) 00:33:28.437 starting I/O failed 00:33:28.437 [2024-10-08 18:43:56.752285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:28.437 Read completed with error (sct=0, sc=8) 00:33:28.437 starting I/O failed 00:33:28.437 Read completed with error (sct=0, sc=8) 00:33:28.437 starting I/O failed 00:33:28.437 Read completed with error (sct=0, sc=8) 00:33:28.437 starting I/O failed 00:33:28.437 Read completed with error (sct=0, sc=8) 00:33:28.437 starting I/O failed 00:33:28.437 Read completed with error (sct=0, sc=8) 00:33:28.437 starting I/O failed 00:33:28.437 Read completed with error (sct=0, sc=8) 00:33:28.437 starting I/O failed 00:33:28.437 Read completed with error (sct=0, sc=8) 00:33:28.437 starting I/O failed 00:33:28.437 Write completed with error (sct=0, sc=8) 00:33:28.437 starting I/O failed 00:33:28.437 Read completed with error (sct=0, sc=8) 00:33:28.437 starting I/O failed 00:33:28.437 Write completed with error (sct=0, sc=8) 00:33:28.437 starting I/O failed 00:33:28.437 Write completed with error (sct=0, sc=8) 00:33:28.437 starting I/O failed 00:33:28.437 Write completed with error (sct=0, sc=8) 00:33:28.437 starting I/O failed 00:33:28.437 Write completed with error (sct=0, sc=8) 00:33:28.437 starting I/O failed 00:33:28.437 Write completed with error (sct=0, sc=8) 00:33:28.437 starting I/O failed 00:33:28.437 Read completed with error (sct=0, sc=8) 00:33:28.437 starting I/O failed 00:33:28.437 Write completed with error (sct=0, sc=8) 00:33:28.437 starting I/O failed 00:33:28.437 Read completed with error (sct=0, sc=8) 00:33:28.437 starting I/O failed 00:33:28.437 Write completed with error (sct=0, sc=8) 00:33:28.437 starting I/O failed 00:33:28.437 Read completed with error (sct=0, sc=8) 00:33:28.437 starting I/O failed 00:33:28.437 Read completed with error (sct=0, sc=8) 00:33:28.437 starting I/O failed 00:33:28.437 Write completed with error (sct=0, sc=8) 00:33:28.437 starting I/O failed 00:33:28.437 Read completed with error (sct=0, sc=8) 00:33:28.437 starting I/O failed 00:33:28.437 Write completed with error (sct=0, sc=8) 00:33:28.437 starting I/O failed 00:33:28.437 Read completed with error (sct=0, sc=8) 00:33:28.437 starting I/O failed 00:33:28.437 Read completed with error (sct=0, sc=8) 00:33:28.437 starting I/O failed 00:33:28.437 Read completed with error (sct=0, sc=8) 00:33:28.437 starting I/O failed 00:33:28.437 Write completed with error (sct=0, sc=8) 00:33:28.437 starting I/O failed 00:33:28.437 Write completed with error (sct=0, sc=8) 00:33:28.437 starting I/O failed 00:33:28.437 Write completed with error (sct=0, sc=8) 00:33:28.437 starting I/O failed 00:33:28.437 Read completed with error (sct=0, sc=8) 00:33:28.437 starting I/O failed 00:33:28.437 Read completed with error (sct=0, sc=8) 00:33:28.437 starting I/O failed 00:33:28.437 Read completed with error (sct=0, sc=8) 00:33:28.437 starting I/O failed 00:33:28.437 [2024-10-08 18:43:56.752611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:28.437 [2024-10-08 18:43:56.752836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.437 [2024-10-08 18:43:56.752876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:28.437 qpair failed and we were unable to recover it. 
00:33:28.437 [2024-10-08 18:43:56.753011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.437 [2024-10-08 18:43:56.753037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:28.437 qpair failed and we were unable to recover it. 00:33:28.437 [2024-10-08 18:43:56.753194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.437 [2024-10-08 18:43:56.753220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:28.437 qpair failed and we were unable to recover it. 00:33:28.437 [2024-10-08 18:43:56.753392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.437 [2024-10-08 18:43:56.753418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:28.437 qpair failed and we were unable to recover it. 00:33:28.437 [2024-10-08 18:43:56.753581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.437 [2024-10-08 18:43:56.753607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.437 qpair failed and we were unable to recover it. 00:33:28.437 [2024-10-08 18:43:56.753743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.437 [2024-10-08 18:43:56.753769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.437 qpair failed and we were unable to recover it. 00:33:28.437 [2024-10-08 18:43:56.753924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.437 [2024-10-08 18:43:56.753994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.437 qpair failed and we were unable to recover it. 00:33:28.437 [2024-10-08 18:43:56.754144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.437 [2024-10-08 18:43:56.754192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.437 qpair failed and we were unable to recover it. 00:33:28.437 [2024-10-08 18:43:56.754335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.437 [2024-10-08 18:43:56.754385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.437 qpair failed and we were unable to recover it. 00:33:28.437 [2024-10-08 18:43:56.754554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.437 [2024-10-08 18:43:56.754580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.437 qpair failed and we were unable to recover it. 00:33:28.437 [2024-10-08 18:43:56.754753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.437 [2024-10-08 18:43:56.754799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.437 qpair failed and we were unable to recover it. 
00:33:28.437 [2024-10-08 18:43:56.754920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.437 [2024-10-08 18:43:56.754989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.437 qpair failed and we were unable to recover it. 00:33:28.437 [2024-10-08 18:43:56.755137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.437 [2024-10-08 18:43:56.755190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.437 qpair failed and we were unable to recover it. 00:33:28.437 [2024-10-08 18:43:56.755330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.437 [2024-10-08 18:43:56.755383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.437 qpair failed and we were unable to recover it. 00:33:28.437 [2024-10-08 18:43:56.755523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.437 [2024-10-08 18:43:56.755548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.437 qpair failed and we were unable to recover it. 00:33:28.437 [2024-10-08 18:43:56.755719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.437 [2024-10-08 18:43:56.755783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.437 qpair failed and we were unable to recover it. 00:33:28.437 [2024-10-08 18:43:56.755954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.438 [2024-10-08 18:43:56.755980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.438 qpair failed and we were unable to recover it. 00:33:28.438 [2024-10-08 18:43:56.756104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.438 [2024-10-08 18:43:56.756128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.438 qpair failed and we were unable to recover it. 00:33:28.438 [2024-10-08 18:43:56.756262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.438 [2024-10-08 18:43:56.756288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.438 qpair failed and we were unable to recover it. 00:33:28.438 [2024-10-08 18:43:56.756423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.438 [2024-10-08 18:43:56.756448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.438 qpair failed and we were unable to recover it. 00:33:28.438 [2024-10-08 18:43:56.756645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.438 [2024-10-08 18:43:56.756678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.438 qpair failed and we were unable to recover it. 
00:33:28.438 [2024-10-08 18:43:56.756806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.438 [2024-10-08 18:43:56.756832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.438 qpair failed and we were unable to recover it. 00:33:28.438 [2024-10-08 18:43:56.756975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.438 [2024-10-08 18:43:56.757001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.438 qpair failed and we were unable to recover it. 00:33:28.438 [2024-10-08 18:43:56.757131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.438 [2024-10-08 18:43:56.757155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.438 qpair failed and we were unable to recover it. 00:33:28.438 [2024-10-08 18:43:56.757324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.438 [2024-10-08 18:43:56.757374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.438 qpair failed and we were unable to recover it. 00:33:28.438 [2024-10-08 18:43:56.757472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.438 [2024-10-08 18:43:56.757497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.438 qpair failed and we were unable to recover it. 00:33:28.438 [2024-10-08 18:43:56.757682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.438 [2024-10-08 18:43:56.757709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.438 qpair failed and we were unable to recover it. 00:33:28.438 [2024-10-08 18:43:56.757859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.438 [2024-10-08 18:43:56.757911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.438 qpair failed and we were unable to recover it. 00:33:28.438 [2024-10-08 18:43:56.758078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.438 [2024-10-08 18:43:56.758130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.438 qpair failed and we were unable to recover it. 00:33:28.438 [2024-10-08 18:43:56.758290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.438 [2024-10-08 18:43:56.758315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.438 qpair failed and we were unable to recover it. 00:33:28.438 [2024-10-08 18:43:56.758447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.438 [2024-10-08 18:43:56.758472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.438 qpair failed and we were unable to recover it. 
00:33:28.438 [2024-10-08 18:43:56.758616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.438 [2024-10-08 18:43:56.758642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.438 qpair failed and we were unable to recover it. 00:33:28.438 [2024-10-08 18:43:56.758771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.438 [2024-10-08 18:43:56.758824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.438 qpair failed and we were unable to recover it. 00:33:28.438 [2024-10-08 18:43:56.758964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.438 [2024-10-08 18:43:56.759014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.438 qpair failed and we were unable to recover it. 00:33:28.438 [2024-10-08 18:43:56.759113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.438 [2024-10-08 18:43:56.759137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.438 qpair failed and we were unable to recover it. 00:33:28.438 [2024-10-08 18:43:56.759268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.438 [2024-10-08 18:43:56.759293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.438 qpair failed and we were unable to recover it. 00:33:28.438 [2024-10-08 18:43:56.759472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.438 [2024-10-08 18:43:56.759496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.438 qpair failed and we were unable to recover it. 00:33:28.438 [2024-10-08 18:43:56.759630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.438 [2024-10-08 18:43:56.759660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.438 qpair failed and we were unable to recover it. 00:33:28.438 [2024-10-08 18:43:56.759875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.438 [2024-10-08 18:43:56.759902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.438 qpair failed and we were unable to recover it. 00:33:28.438 [2024-10-08 18:43:56.760027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.438 [2024-10-08 18:43:56.760052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.438 qpair failed and we were unable to recover it. 00:33:28.438 [2024-10-08 18:43:56.760305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.438 [2024-10-08 18:43:56.760331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.438 qpair failed and we were unable to recover it. 
00:33:28.438 [2024-10-08 18:43:56.760454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.438 [2024-10-08 18:43:56.760479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.438 qpair failed and we were unable to recover it. 00:33:28.438 [2024-10-08 18:43:56.760681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.438 [2024-10-08 18:43:56.760709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.438 qpair failed and we were unable to recover it. 00:33:28.438 [2024-10-08 18:43:56.760839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.438 [2024-10-08 18:43:56.760866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.438 qpair failed and we were unable to recover it. 00:33:28.438 [2024-10-08 18:43:56.761100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.438 [2024-10-08 18:43:56.761125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.438 qpair failed and we were unable to recover it. 00:33:28.438 [2024-10-08 18:43:56.761338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.438 [2024-10-08 18:43:56.761387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.438 qpair failed and we were unable to recover it. 00:33:28.438 [2024-10-08 18:43:56.761525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.438 [2024-10-08 18:43:56.761550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.438 qpair failed and we were unable to recover it. 00:33:28.438 [2024-10-08 18:43:56.761716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.438 [2024-10-08 18:43:56.761772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.438 qpair failed and we were unable to recover it. 00:33:28.438 [2024-10-08 18:43:56.762018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.438 [2024-10-08 18:43:56.762080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.438 qpair failed and we were unable to recover it. 00:33:28.438 [2024-10-08 18:43:56.762233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.438 [2024-10-08 18:43:56.762284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.438 qpair failed and we were unable to recover it. 00:33:28.438 [2024-10-08 18:43:56.762468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.438 [2024-10-08 18:43:56.762494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.438 qpair failed and we were unable to recover it. 
00:33:28.438 [2024-10-08 18:43:56.762723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.438 [2024-10-08 18:43:56.762775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.438 qpair failed and we were unable to recover it. 00:33:28.438 [2024-10-08 18:43:56.762993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.438 [2024-10-08 18:43:56.763043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.438 qpair failed and we were unable to recover it. 00:33:28.438 [2024-10-08 18:43:56.763235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.438 [2024-10-08 18:43:56.763287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.438 qpair failed and we were unable to recover it. 00:33:28.438 [2024-10-08 18:43:56.763422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.438 [2024-10-08 18:43:56.763451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.438 qpair failed and we were unable to recover it. 00:33:28.438 [2024-10-08 18:43:56.763573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.438 [2024-10-08 18:43:56.763602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.438 qpair failed and we were unable to recover it. 00:33:28.438 [2024-10-08 18:43:56.763874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.439 [2024-10-08 18:43:56.763927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.439 qpair failed and we were unable to recover it. 00:33:28.439 [2024-10-08 18:43:56.764082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.439 [2024-10-08 18:43:56.764136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.439 qpair failed and we were unable to recover it. 00:33:28.439 [2024-10-08 18:43:56.764296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.439 [2024-10-08 18:43:56.764345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.439 qpair failed and we were unable to recover it. 00:33:28.439 [2024-10-08 18:43:56.764570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.439 [2024-10-08 18:43:56.764595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.439 qpair failed and we were unable to recover it. 00:33:28.439 [2024-10-08 18:43:56.764814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.439 [2024-10-08 18:43:56.764859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.439 qpair failed and we were unable to recover it. 
00:33:28.439 [2024-10-08 18:43:56.765119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.439 [2024-10-08 18:43:56.765169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.439 qpair failed and we were unable to recover it. 00:33:28.439 [2024-10-08 18:43:56.765341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.439 [2024-10-08 18:43:56.765392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.439 qpair failed and we were unable to recover it. 00:33:28.439 [2024-10-08 18:43:56.765496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.439 [2024-10-08 18:43:56.765520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.439 qpair failed and we were unable to recover it. 00:33:28.439 [2024-10-08 18:43:56.765723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.439 [2024-10-08 18:43:56.765775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.439 qpair failed and we were unable to recover it. 00:33:28.439 [2024-10-08 18:43:56.765906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.439 [2024-10-08 18:43:56.765961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.439 qpair failed and we were unable to recover it. 00:33:28.439 [2024-10-08 18:43:56.766187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.439 [2024-10-08 18:43:56.766238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.439 qpair failed and we were unable to recover it. 00:33:28.439 [2024-10-08 18:43:56.766456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.439 [2024-10-08 18:43:56.766482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.439 qpair failed and we were unable to recover it. 00:33:28.439 [2024-10-08 18:43:56.766738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.439 [2024-10-08 18:43:56.766764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.439 qpair failed and we were unable to recover it. 00:33:28.439 [2024-10-08 18:43:56.766947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.439 [2024-10-08 18:43:56.766994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.439 qpair failed and we were unable to recover it. 00:33:28.439 [2024-10-08 18:43:56.767244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.439 [2024-10-08 18:43:56.767297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.439 qpair failed and we were unable to recover it. 
00:33:28.439 [2024-10-08 18:43:56.767525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.439 [2024-10-08 18:43:56.767566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.439 qpair failed and we were unable to recover it. 00:33:28.439 [2024-10-08 18:43:56.767786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.439 [2024-10-08 18:43:56.767840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.439 qpair failed and we were unable to recover it. 00:33:28.439 [2024-10-08 18:43:56.768064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.439 [2024-10-08 18:43:56.768115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.439 qpair failed and we were unable to recover it. 00:33:28.439 [2024-10-08 18:43:56.768310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.439 [2024-10-08 18:43:56.768358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.439 qpair failed and we were unable to recover it. 00:33:28.439 [2024-10-08 18:43:56.768552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.439 [2024-10-08 18:43:56.768577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.439 qpair failed and we were unable to recover it. 00:33:28.439 [2024-10-08 18:43:56.768797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.439 [2024-10-08 18:43:56.768848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.439 qpair failed and we were unable to recover it. 00:33:28.439 [2024-10-08 18:43:56.769094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.439 [2024-10-08 18:43:56.769143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.439 qpair failed and we were unable to recover it. 00:33:28.439 [2024-10-08 18:43:56.769290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.439 [2024-10-08 18:43:56.769341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.439 qpair failed and we were unable to recover it. 00:33:28.439 [2024-10-08 18:43:56.769535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.439 [2024-10-08 18:43:56.769559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.439 qpair failed and we were unable to recover it. 00:33:28.439 [2024-10-08 18:43:56.769812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.439 [2024-10-08 18:43:56.769863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.439 qpair failed and we were unable to recover it. 
00:33:28.439 [2024-10-08 18:43:56.769989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.439 [2024-10-08 18:43:56.770051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.439 qpair failed and we were unable to recover it. 00:33:28.439 [2024-10-08 18:43:56.770278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.439 [2024-10-08 18:43:56.770327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.439 qpair failed and we were unable to recover it. 00:33:28.439 [2024-10-08 18:43:56.770509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.439 [2024-10-08 18:43:56.770533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.439 qpair failed and we were unable to recover it. 00:33:28.439 [2024-10-08 18:43:56.770747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.439 [2024-10-08 18:43:56.770799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.439 qpair failed and we were unable to recover it. 00:33:28.439 [2024-10-08 18:43:56.770998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.439 [2024-10-08 18:43:56.771048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.439 qpair failed and we were unable to recover it. 00:33:28.439 [2024-10-08 18:43:56.771292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.439 [2024-10-08 18:43:56.771341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.439 qpair failed and we were unable to recover it. 00:33:28.439 [2024-10-08 18:43:56.771573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.439 [2024-10-08 18:43:56.771597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.439 qpair failed and we were unable to recover it. 00:33:28.439 [2024-10-08 18:43:56.771839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.439 [2024-10-08 18:43:56.771889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.439 qpair failed and we were unable to recover it. 00:33:28.439 [2024-10-08 18:43:56.772081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.439 [2024-10-08 18:43:56.772127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.439 qpair failed and we were unable to recover it. 00:33:28.439 [2024-10-08 18:43:56.772285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.439 [2024-10-08 18:43:56.772337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.439 qpair failed and we were unable to recover it. 
00:33:28.445 [2024-10-08 18:43:56.821121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.445 [2024-10-08 18:43:56.821174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.445 qpair failed and we were unable to recover it. 00:33:28.445 [2024-10-08 18:43:56.821315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.445 [2024-10-08 18:43:56.821357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.445 qpair failed and we were unable to recover it. 00:33:28.445 [2024-10-08 18:43:56.821614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.445 [2024-10-08 18:43:56.821639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.445 qpair failed and we were unable to recover it. 00:33:28.445 [2024-10-08 18:43:56.821794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.445 [2024-10-08 18:43:56.821819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.445 qpair failed and we were unable to recover it. 00:33:28.445 [2024-10-08 18:43:56.822019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.445 [2024-10-08 18:43:56.822047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.445 qpair failed and we were unable to recover it. 00:33:28.445 [2024-10-08 18:43:56.822224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.445 [2024-10-08 18:43:56.822247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.445 qpair failed and we were unable to recover it. 00:33:28.445 [2024-10-08 18:43:56.822400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.445 [2024-10-08 18:43:56.822423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.445 qpair failed and we were unable to recover it. 00:33:28.445 [2024-10-08 18:43:56.822606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.445 [2024-10-08 18:43:56.822631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.445 qpair failed and we were unable to recover it. 00:33:28.445 [2024-10-08 18:43:56.822859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.445 [2024-10-08 18:43:56.822909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.445 qpair failed and we were unable to recover it. 00:33:28.445 [2024-10-08 18:43:56.823090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.445 [2024-10-08 18:43:56.823140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.445 qpair failed and we were unable to recover it. 
00:33:28.445 [2024-10-08 18:43:56.823289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.445 [2024-10-08 18:43:56.823341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.445 qpair failed and we were unable to recover it. 00:33:28.445 [2024-10-08 18:43:56.823530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.445 [2024-10-08 18:43:56.823554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.445 qpair failed and we were unable to recover it. 00:33:28.445 [2024-10-08 18:43:56.823807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.445 [2024-10-08 18:43:56.823859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.445 qpair failed and we were unable to recover it. 00:33:28.445 [2024-10-08 18:43:56.824074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.445 [2024-10-08 18:43:56.824122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.445 qpair failed and we were unable to recover it. 00:33:28.445 [2024-10-08 18:43:56.824301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.445 [2024-10-08 18:43:56.824353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.445 qpair failed and we were unable to recover it. 00:33:28.445 [2024-10-08 18:43:56.824560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.445 [2024-10-08 18:43:56.824584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.445 qpair failed and we were unable to recover it. 00:33:28.445 [2024-10-08 18:43:56.824709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.445 [2024-10-08 18:43:56.824735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.445 qpair failed and we were unable to recover it. 00:33:28.445 [2024-10-08 18:43:56.824989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.445 [2024-10-08 18:43:56.825038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.445 qpair failed and we were unable to recover it. 00:33:28.445 [2024-10-08 18:43:56.825219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.445 [2024-10-08 18:43:56.825270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.445 qpair failed and we were unable to recover it. 00:33:28.445 [2024-10-08 18:43:56.825422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.445 [2024-10-08 18:43:56.825471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.445 qpair failed and we were unable to recover it. 
00:33:28.445 [2024-10-08 18:43:56.825714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.445 [2024-10-08 18:43:56.825740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.445 qpair failed and we were unable to recover it. 00:33:28.445 [2024-10-08 18:43:56.825931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.445 [2024-10-08 18:43:56.825982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.445 qpair failed and we were unable to recover it. 00:33:28.445 [2024-10-08 18:43:56.826187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.445 [2024-10-08 18:43:56.826238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.445 qpair failed and we were unable to recover it. 00:33:28.445 [2024-10-08 18:43:56.826466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.445 [2024-10-08 18:43:56.826515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.445 qpair failed and we were unable to recover it. 00:33:28.445 [2024-10-08 18:43:56.826671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.445 [2024-10-08 18:43:56.826695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.445 qpair failed and we were unable to recover it. 00:33:28.445 [2024-10-08 18:43:56.826917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.445 [2024-10-08 18:43:56.826969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.445 qpair failed and we were unable to recover it. 00:33:28.445 [2024-10-08 18:43:56.827181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.445 [2024-10-08 18:43:56.827230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.445 qpair failed and we were unable to recover it. 00:33:28.445 [2024-10-08 18:43:56.827412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.445 [2024-10-08 18:43:56.827463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.445 qpair failed and we were unable to recover it. 00:33:28.445 [2024-10-08 18:43:56.827687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.445 [2024-10-08 18:43:56.827728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.445 qpair failed and we were unable to recover it. 00:33:28.445 [2024-10-08 18:43:56.827970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.445 [2024-10-08 18:43:56.828021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.445 qpair failed and we were unable to recover it. 
00:33:28.445 [2024-10-08 18:43:56.828235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.445 [2024-10-08 18:43:56.828284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.445 qpair failed and we were unable to recover it. 00:33:28.445 [2024-10-08 18:43:56.828517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.445 [2024-10-08 18:43:56.828565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.445 qpair failed and we were unable to recover it. 00:33:28.445 [2024-10-08 18:43:56.828680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.445 [2024-10-08 18:43:56.828726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.445 qpair failed and we were unable to recover it. 00:33:28.445 [2024-10-08 18:43:56.828916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.445 [2024-10-08 18:43:56.828965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.445 qpair failed and we were unable to recover it. 00:33:28.445 [2024-10-08 18:43:56.829158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.445 [2024-10-08 18:43:56.829208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.445 qpair failed and we were unable to recover it. 00:33:28.445 [2024-10-08 18:43:56.829418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.445 [2024-10-08 18:43:56.829469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.445 qpair failed and we were unable to recover it. 00:33:28.445 [2024-10-08 18:43:56.829703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.446 [2024-10-08 18:43:56.829728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.446 qpair failed and we were unable to recover it. 00:33:28.446 [2024-10-08 18:43:56.829981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.446 [2024-10-08 18:43:56.830029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.446 qpair failed and we were unable to recover it. 00:33:28.446 [2024-10-08 18:43:56.830179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.446 [2024-10-08 18:43:56.830227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.446 qpair failed and we were unable to recover it. 00:33:28.446 [2024-10-08 18:43:56.830451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.446 [2024-10-08 18:43:56.830503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.446 qpair failed and we were unable to recover it. 
00:33:28.446 [2024-10-08 18:43:56.830719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.446 [2024-10-08 18:43:56.830787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.446 qpair failed and we were unable to recover it. 00:33:28.446 [2024-10-08 18:43:56.831034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.446 [2024-10-08 18:43:56.831077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.446 qpair failed and we were unable to recover it. 00:33:28.446 [2024-10-08 18:43:56.831313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.446 [2024-10-08 18:43:56.831363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.446 qpair failed and we were unable to recover it. 00:33:28.446 [2024-10-08 18:43:56.831545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.446 [2024-10-08 18:43:56.831569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.446 qpair failed and we were unable to recover it. 00:33:28.446 [2024-10-08 18:43:56.831778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.446 [2024-10-08 18:43:56.831833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.446 qpair failed and we were unable to recover it. 00:33:28.446 [2024-10-08 18:43:56.831923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.446 [2024-10-08 18:43:56.832012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.446 qpair failed and we were unable to recover it. 00:33:28.446 [2024-10-08 18:43:56.832208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.446 [2024-10-08 18:43:56.832257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.446 qpair failed and we were unable to recover it. 00:33:28.446 [2024-10-08 18:43:56.832513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.446 [2024-10-08 18:43:56.832537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.446 qpair failed and we were unable to recover it. 00:33:28.446 [2024-10-08 18:43:56.832744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.446 [2024-10-08 18:43:56.832813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.446 qpair failed and we were unable to recover it. 00:33:28.446 [2024-10-08 18:43:56.832999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.446 [2024-10-08 18:43:56.833051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.446 qpair failed and we were unable to recover it. 
00:33:28.446 [2024-10-08 18:43:56.833308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.446 [2024-10-08 18:43:56.833358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.446 qpair failed and we were unable to recover it. 00:33:28.446 [2024-10-08 18:43:56.833539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.446 [2024-10-08 18:43:56.833564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.446 qpair failed and we were unable to recover it. 00:33:28.446 [2024-10-08 18:43:56.833789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.446 [2024-10-08 18:43:56.833841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.446 qpair failed and we were unable to recover it. 00:33:28.446 [2024-10-08 18:43:56.834085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.446 [2024-10-08 18:43:56.834140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.446 qpair failed and we were unable to recover it. 00:33:28.446 [2024-10-08 18:43:56.834368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.446 [2024-10-08 18:43:56.834419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.446 qpair failed and we were unable to recover it. 00:33:28.446 [2024-10-08 18:43:56.834622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.446 [2024-10-08 18:43:56.834647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.446 qpair failed and we were unable to recover it. 00:33:28.446 [2024-10-08 18:43:56.834904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.446 [2024-10-08 18:43:56.834953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.446 qpair failed and we were unable to recover it. 00:33:28.446 [2024-10-08 18:43:56.835080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.446 [2024-10-08 18:43:56.835104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.446 qpair failed and we were unable to recover it. 00:33:28.446 [2024-10-08 18:43:56.835274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.446 [2024-10-08 18:43:56.835329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.446 qpair failed and we were unable to recover it. 00:33:28.446 [2024-10-08 18:43:56.835496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.446 [2024-10-08 18:43:56.835520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.446 qpair failed and we were unable to recover it. 
00:33:28.446 [2024-10-08 18:43:56.835775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.446 [2024-10-08 18:43:56.835826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.446 qpair failed and we were unable to recover it. 00:33:28.446 [2024-10-08 18:43:56.836054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.446 [2024-10-08 18:43:56.836105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.446 qpair failed and we were unable to recover it. 00:33:28.446 [2024-10-08 18:43:56.836258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.446 [2024-10-08 18:43:56.836311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.446 qpair failed and we were unable to recover it. 00:33:28.446 [2024-10-08 18:43:56.836485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.446 [2024-10-08 18:43:56.836509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.446 qpair failed and we were unable to recover it. 00:33:28.446 [2024-10-08 18:43:56.836646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.446 [2024-10-08 18:43:56.836691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.446 qpair failed and we were unable to recover it. 00:33:28.446 [2024-10-08 18:43:56.836829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.446 [2024-10-08 18:43:56.836868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.446 qpair failed and we were unable to recover it. 00:33:28.446 [2024-10-08 18:43:56.837022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.446 [2024-10-08 18:43:56.837061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.446 qpair failed and we were unable to recover it. 00:33:28.446 [2024-10-08 18:43:56.837242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.446 [2024-10-08 18:43:56.837266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.446 qpair failed and we were unable to recover it. 00:33:28.446 [2024-10-08 18:43:56.837445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.446 [2024-10-08 18:43:56.837469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.446 qpair failed and we were unable to recover it. 00:33:28.446 [2024-10-08 18:43:56.837623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.446 [2024-10-08 18:43:56.837647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.446 qpair failed and we were unable to recover it. 
00:33:28.446 [2024-10-08 18:43:56.837861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.446 [2024-10-08 18:43:56.837914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.446 qpair failed and we were unable to recover it. 00:33:28.446 [2024-10-08 18:43:56.838103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.446 [2024-10-08 18:43:56.838154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.446 qpair failed and we were unable to recover it. 00:33:28.446 [2024-10-08 18:43:56.838334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.446 [2024-10-08 18:43:56.838385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.446 qpair failed and we were unable to recover it. 00:33:28.446 [2024-10-08 18:43:56.838599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.446 [2024-10-08 18:43:56.838624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.446 qpair failed and we were unable to recover it. 00:33:28.446 [2024-10-08 18:43:56.838893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.446 [2024-10-08 18:43:56.838953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.446 qpair failed and we were unable to recover it. 00:33:28.446 [2024-10-08 18:43:56.839214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.446 [2024-10-08 18:43:56.839261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.446 qpair failed and we were unable to recover it. 00:33:28.446 [2024-10-08 18:43:56.839512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.447 [2024-10-08 18:43:56.839561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.447 qpair failed and we were unable to recover it. 00:33:28.447 [2024-10-08 18:43:56.839749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.447 [2024-10-08 18:43:56.839777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.447 qpair failed and we were unable to recover it. 00:33:28.447 [2024-10-08 18:43:56.840019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.447 [2024-10-08 18:43:56.840067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.447 qpair failed and we were unable to recover it. 00:33:28.447 [2024-10-08 18:43:56.840220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.447 [2024-10-08 18:43:56.840268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.447 qpair failed and we were unable to recover it. 
00:33:28.447 [2024-10-08 18:43:56.840456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.447 [2024-10-08 18:43:56.840507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.447 qpair failed and we were unable to recover it. 00:33:28.447 [2024-10-08 18:43:56.840696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.447 [2024-10-08 18:43:56.840721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.447 qpair failed and we were unable to recover it. 00:33:28.447 [2024-10-08 18:43:56.840987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.447 [2024-10-08 18:43:56.841045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.447 qpair failed and we were unable to recover it. 00:33:28.447 [2024-10-08 18:43:56.841184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.447 [2024-10-08 18:43:56.841237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.447 qpair failed and we were unable to recover it. 00:33:28.447 [2024-10-08 18:43:56.841493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.447 [2024-10-08 18:43:56.841548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.447 qpair failed and we were unable to recover it. 00:33:28.447 [2024-10-08 18:43:56.841756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.447 [2024-10-08 18:43:56.841821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.447 qpair failed and we were unable to recover it. 00:33:28.447 [2024-10-08 18:43:56.842039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.447 [2024-10-08 18:43:56.842064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.447 qpair failed and we were unable to recover it. 00:33:28.447 [2024-10-08 18:43:56.842226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.447 [2024-10-08 18:43:56.842282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.447 qpair failed and we were unable to recover it. 00:33:28.447 [2024-10-08 18:43:56.842414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.447 [2024-10-08 18:43:56.842437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.447 qpair failed and we were unable to recover it. 00:33:28.447 [2024-10-08 18:43:56.842624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.447 [2024-10-08 18:43:56.842668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.447 qpair failed and we were unable to recover it. 
00:33:28.447 [2024-10-08 18:43:56.842835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.447 [2024-10-08 18:43:56.842867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.447 qpair failed and we were unable to recover it. 00:33:28.447 [2024-10-08 18:43:56.843111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.447 [2024-10-08 18:43:56.843160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.447 qpair failed and we were unable to recover it. 00:33:28.447 [2024-10-08 18:43:56.843422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.447 [2024-10-08 18:43:56.843471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.447 qpair failed and we were unable to recover it. 00:33:28.447 [2024-10-08 18:43:56.843706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.447 [2024-10-08 18:43:56.843731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.447 qpair failed and we were unable to recover it. 00:33:28.447 [2024-10-08 18:43:56.843898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.447 [2024-10-08 18:43:56.843954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.447 qpair failed and we were unable to recover it. 00:33:28.447 [2024-10-08 18:43:56.844198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.447 [2024-10-08 18:43:56.844249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.447 qpair failed and we were unable to recover it. 00:33:28.447 [2024-10-08 18:43:56.844476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.447 [2024-10-08 18:43:56.844525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.447 qpair failed and we were unable to recover it. 00:33:28.447 [2024-10-08 18:43:56.844746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.447 [2024-10-08 18:43:56.844772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.447 qpair failed and we were unable to recover it. 00:33:28.447 [2024-10-08 18:43:56.844967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.447 [2024-10-08 18:43:56.845017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.447 qpair failed and we were unable to recover it. 00:33:28.447 [2024-10-08 18:43:56.845256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.447 [2024-10-08 18:43:56.845307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.447 qpair failed and we were unable to recover it. 
00:33:28.447 [2024-10-08 18:43:56.845517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.447 [2024-10-08 18:43:56.845541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.447 qpair failed and we were unable to recover it. 00:33:28.447 [2024-10-08 18:43:56.845679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.447 [2024-10-08 18:43:56.845724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.447 qpair failed and we were unable to recover it. 00:33:28.447 [2024-10-08 18:43:56.845899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.447 [2024-10-08 18:43:56.845956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.447 qpair failed and we were unable to recover it. 00:33:28.447 [2024-10-08 18:43:56.846109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.447 [2024-10-08 18:43:56.846160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.447 qpair failed and we were unable to recover it. 00:33:28.447 [2024-10-08 18:43:56.846315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.447 [2024-10-08 18:43:56.846367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.447 qpair failed and we were unable to recover it. 00:33:28.447 [2024-10-08 18:43:56.846559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.447 [2024-10-08 18:43:56.846583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.447 qpair failed and we were unable to recover it. 00:33:28.447 [2024-10-08 18:43:56.846823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.447 [2024-10-08 18:43:56.846875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.447 qpair failed and we were unable to recover it. 00:33:28.447 [2024-10-08 18:43:56.847027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.447 [2024-10-08 18:43:56.847093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.447 qpair failed and we were unable to recover it. 00:33:28.447 [2024-10-08 18:43:56.847270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.447 [2024-10-08 18:43:56.847328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.447 qpair failed and we were unable to recover it. 00:33:28.447 [2024-10-08 18:43:56.847528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.447 [2024-10-08 18:43:56.847553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.447 qpair failed and we were unable to recover it. 
00:33:28.447 [2024-10-08 18:43:56.847806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.447 [2024-10-08 18:43:56.847859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.447 qpair failed and we were unable to recover it. 00:33:28.447 [2024-10-08 18:43:56.848056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.447 [2024-10-08 18:43:56.848108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.447 qpair failed and we were unable to recover it. 00:33:28.447 [2024-10-08 18:43:56.848328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.447 [2024-10-08 18:43:56.848377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.447 qpair failed and we were unable to recover it. 00:33:28.447 [2024-10-08 18:43:56.848576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.447 [2024-10-08 18:43:56.848600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.447 qpair failed and we were unable to recover it. 00:33:28.447 [2024-10-08 18:43:56.848781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.447 [2024-10-08 18:43:56.848846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.447 qpair failed and we were unable to recover it. 00:33:28.447 [2024-10-08 18:43:56.848985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.447 [2024-10-08 18:43:56.849038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.447 qpair failed and we were unable to recover it. 00:33:28.447 [2024-10-08 18:43:56.849227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.448 [2024-10-08 18:43:56.849277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.448 qpair failed and we were unable to recover it. 00:33:28.448 [2024-10-08 18:43:56.849509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.448 [2024-10-08 18:43:56.849532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.448 qpair failed and we were unable to recover it. 00:33:28.448 [2024-10-08 18:43:56.849671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.448 [2024-10-08 18:43:56.849711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.448 qpair failed and we were unable to recover it. 00:33:28.448 [2024-10-08 18:43:56.849977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.448 [2024-10-08 18:43:56.850030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.448 qpair failed and we were unable to recover it. 
00:33:28.448 [2024-10-08 18:43:56.850193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.448 [2024-10-08 18:43:56.850245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.448 qpair failed and we were unable to recover it. 00:33:28.448 [2024-10-08 18:43:56.850391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.448 [2024-10-08 18:43:56.850426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.448 qpair failed and we were unable to recover it. 00:33:28.448 [2024-10-08 18:43:56.850619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.448 [2024-10-08 18:43:56.850644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.448 qpair failed and we were unable to recover it. 00:33:28.448 [2024-10-08 18:43:56.850800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.448 [2024-10-08 18:43:56.850850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.448 qpair failed and we were unable to recover it. 00:33:28.448 [2024-10-08 18:43:56.851040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.448 [2024-10-08 18:43:56.851095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.448 qpair failed and we were unable to recover it. 00:33:28.448 [2024-10-08 18:43:56.851340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.448 [2024-10-08 18:43:56.851390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.448 qpair failed and we were unable to recover it. 00:33:28.448 [2024-10-08 18:43:56.851574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.448 [2024-10-08 18:43:56.851597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.448 qpair failed and we were unable to recover it. 00:33:28.448 [2024-10-08 18:43:56.851837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.448 [2024-10-08 18:43:56.851887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.448 qpair failed and we were unable to recover it. 00:33:28.448 [2024-10-08 18:43:56.852082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.448 [2024-10-08 18:43:56.852123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.448 qpair failed and we were unable to recover it. 00:33:28.448 [2024-10-08 18:43:56.852345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.448 [2024-10-08 18:43:56.852387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.448 qpair failed and we were unable to recover it. 
00:33:28.448 [2024-10-08 18:43:56.852589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.448 [2024-10-08 18:43:56.852613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.448 qpair failed and we were unable to recover it. 00:33:28.448 [2024-10-08 18:43:56.852754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.448 [2024-10-08 18:43:56.852795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.448 qpair failed and we were unable to recover it. 00:33:28.448 [2024-10-08 18:43:56.853037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.448 [2024-10-08 18:43:56.853086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.448 qpair failed and we were unable to recover it. 00:33:28.448 [2024-10-08 18:43:56.853271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.448 [2024-10-08 18:43:56.853312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.448 qpair failed and we were unable to recover it. 00:33:28.448 [2024-10-08 18:43:56.853514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.448 [2024-10-08 18:43:56.853538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.448 qpair failed and we were unable to recover it. 00:33:28.448 [2024-10-08 18:43:56.853747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.448 [2024-10-08 18:43:56.853807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.448 qpair failed and we were unable to recover it. 00:33:28.448 [2024-10-08 18:43:56.853961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.448 [2024-10-08 18:43:56.854010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.448 qpair failed and we were unable to recover it. 00:33:28.448 [2024-10-08 18:43:56.854228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.448 [2024-10-08 18:43:56.854277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.448 qpair failed and we were unable to recover it. 00:33:28.448 [2024-10-08 18:43:56.854530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.448 [2024-10-08 18:43:56.854554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.448 qpair failed and we were unable to recover it. 00:33:28.448 [2024-10-08 18:43:56.854762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.448 [2024-10-08 18:43:56.854830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.448 qpair failed and we were unable to recover it. 
00:33:28.448 [2024-10-08 18:43:56.855025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.448 [2024-10-08 18:43:56.855065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.448 qpair failed and we were unable to recover it. 00:33:28.448 [2024-10-08 18:43:56.855329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.448 [2024-10-08 18:43:56.855381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.448 qpair failed and we were unable to recover it. 00:33:28.448 [2024-10-08 18:43:56.855533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.448 [2024-10-08 18:43:56.855558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.448 qpair failed and we were unable to recover it. 00:33:28.448 [2024-10-08 18:43:56.855761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.448 [2024-10-08 18:43:56.855814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.448 qpair failed and we were unable to recover it. 00:33:28.448 [2024-10-08 18:43:56.856060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.448 [2024-10-08 18:43:56.856108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.448 qpair failed and we were unable to recover it. 00:33:28.448 [2024-10-08 18:43:56.856353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.448 [2024-10-08 18:43:56.856405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.448 qpair failed and we were unable to recover it. 00:33:28.448 [2024-10-08 18:43:56.856593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.448 [2024-10-08 18:43:56.856618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.448 qpair failed and we were unable to recover it. 00:33:28.448 [2024-10-08 18:43:56.856889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.448 [2024-10-08 18:43:56.856941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.448 qpair failed and we were unable to recover it. 00:33:28.448 [2024-10-08 18:43:56.857137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.448 [2024-10-08 18:43:56.857188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.448 qpair failed and we were unable to recover it. 00:33:28.448 [2024-10-08 18:43:56.857448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.448 [2024-10-08 18:43:56.857498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.448 qpair failed and we were unable to recover it. 
00:33:28.448 [2024-10-08 18:43:56.857679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.448 [2024-10-08 18:43:56.857705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.448 qpair failed and we were unable to recover it. 00:33:28.448 [2024-10-08 18:43:56.857897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.448 [2024-10-08 18:43:56.857953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.448 qpair failed and we were unable to recover it. 00:33:28.449 [2024-10-08 18:43:56.858200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.449 [2024-10-08 18:43:56.858249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.449 qpair failed and we were unable to recover it. 00:33:28.449 [2024-10-08 18:43:56.858470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.449 [2024-10-08 18:43:56.858518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.449 qpair failed and we were unable to recover it. 00:33:28.449 [2024-10-08 18:43:56.858767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.449 [2024-10-08 18:43:56.858793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.449 qpair failed and we were unable to recover it. 00:33:28.449 [2024-10-08 18:43:56.859035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.449 [2024-10-08 18:43:56.859080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.449 qpair failed and we were unable to recover it. 00:33:28.449 [2024-10-08 18:43:56.859273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.449 [2024-10-08 18:43:56.859334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.449 qpair failed and we were unable to recover it. 00:33:28.449 [2024-10-08 18:43:56.859562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.449 [2024-10-08 18:43:56.859586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.449 qpair failed and we were unable to recover it. 00:33:28.449 [2024-10-08 18:43:56.859794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.449 [2024-10-08 18:43:56.859844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.449 qpair failed and we were unable to recover it. 00:33:28.449 [2024-10-08 18:43:56.860000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.449 [2024-10-08 18:43:56.860053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.449 qpair failed and we were unable to recover it. 
00:33:28.449 [2024-10-08 18:43:56.860245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.449 [2024-10-08 18:43:56.860298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.449 qpair failed and we were unable to recover it. 00:33:28.449 [2024-10-08 18:43:56.860485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.449 [2024-10-08 18:43:56.860510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.449 qpair failed and we were unable to recover it. 00:33:28.449 [2024-10-08 18:43:56.860685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.449 [2024-10-08 18:43:56.860710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.449 qpair failed and we were unable to recover it. 00:33:28.449 [2024-10-08 18:43:56.860836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.449 [2024-10-08 18:43:56.860898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.449 qpair failed and we were unable to recover it. 00:33:28.449 [2024-10-08 18:43:56.861152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.449 [2024-10-08 18:43:56.861204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.449 qpair failed and we were unable to recover it. 00:33:28.449 [2024-10-08 18:43:56.861417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.449 [2024-10-08 18:43:56.861469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.449 qpair failed and we were unable to recover it. 00:33:28.449 [2024-10-08 18:43:56.861637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.449 [2024-10-08 18:43:56.861682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.449 qpair failed and we were unable to recover it. 00:33:28.449 [2024-10-08 18:43:56.861887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.449 [2024-10-08 18:43:56.861938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.449 qpair failed and we were unable to recover it. 00:33:28.449 [2024-10-08 18:43:56.862111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.449 [2024-10-08 18:43:56.862161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.449 qpair failed and we were unable to recover it. 00:33:28.449 [2024-10-08 18:43:56.862365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.449 [2024-10-08 18:43:56.862413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.449 qpair failed and we were unable to recover it. 
00:33:28.449 [2024-10-08 18:43:56.862554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.449 [2024-10-08 18:43:56.862579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.449 qpair failed and we were unable to recover it. 00:33:28.449 [2024-10-08 18:43:56.862775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.449 [2024-10-08 18:43:56.862828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.449 qpair failed and we were unable to recover it. 00:33:28.449 [2024-10-08 18:43:56.862981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.449 [2024-10-08 18:43:56.863029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.449 qpair failed and we were unable to recover it. 00:33:28.449 [2024-10-08 18:43:56.863215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.449 [2024-10-08 18:43:56.863263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.449 qpair failed and we were unable to recover it. 00:33:28.449 [2024-10-08 18:43:56.863468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.449 [2024-10-08 18:43:56.863492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.449 qpair failed and we were unable to recover it. 00:33:28.449 [2024-10-08 18:43:56.863715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.449 [2024-10-08 18:43:56.863768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.449 qpair failed and we were unable to recover it. 00:33:28.449 [2024-10-08 18:43:56.863922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.449 [2024-10-08 18:43:56.863975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.449 qpair failed and we were unable to recover it. 00:33:28.449 [2024-10-08 18:43:56.864220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.449 [2024-10-08 18:43:56.864269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.449 qpair failed and we were unable to recover it. 00:33:28.449 [2024-10-08 18:43:56.864440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.449 [2024-10-08 18:43:56.864465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.449 qpair failed and we were unable to recover it. 00:33:28.449 [2024-10-08 18:43:56.864696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.449 [2024-10-08 18:43:56.864737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.449 qpair failed and we were unable to recover it. 
00:33:28.449 [2024-10-08 18:43:56.864932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.449 [2024-10-08 18:43:56.864982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.449 qpair failed and we were unable to recover it. 00:33:28.449 [2024-10-08 18:43:56.865150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.449 [2024-10-08 18:43:56.865202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.449 qpair failed and we were unable to recover it. 00:33:28.449 [2024-10-08 18:43:56.865351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.449 [2024-10-08 18:43:56.865401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.449 qpair failed and we were unable to recover it. 00:33:28.449 [2024-10-08 18:43:56.865607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.449 [2024-10-08 18:43:56.865646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.449 qpair failed and we were unable to recover it. 00:33:28.449 [2024-10-08 18:43:56.865862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.449 [2024-10-08 18:43:56.865914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.449 qpair failed and we were unable to recover it. 00:33:28.449 [2024-10-08 18:43:56.866182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.449 [2024-10-08 18:43:56.866231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.449 qpair failed and we were unable to recover it. 00:33:28.449 [2024-10-08 18:43:56.866399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.449 [2024-10-08 18:43:56.866449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.449 qpair failed and we were unable to recover it. 00:33:28.449 [2024-10-08 18:43:56.866637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.449 [2024-10-08 18:43:56.866682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.449 qpair failed and we were unable to recover it. 00:33:28.449 [2024-10-08 18:43:56.866871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.449 [2024-10-08 18:43:56.866897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.449 qpair failed and we were unable to recover it. 00:33:28.449 [2024-10-08 18:43:56.867107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.449 [2024-10-08 18:43:56.867154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.449 qpair failed and we were unable to recover it. 
00:33:28.449 [2024-10-08 18:43:56.867408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.449 [2024-10-08 18:43:56.867459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.449 qpair failed and we were unable to recover it. 00:33:28.449 [2024-10-08 18:43:56.867717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.449 [2024-10-08 18:43:56.867744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.449 qpair failed and we were unable to recover it. 00:33:28.449 [2024-10-08 18:43:56.867841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.450 [2024-10-08 18:43:56.867865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.450 qpair failed and we were unable to recover it. 00:33:28.450 [2024-10-08 18:43:56.868066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.450 [2024-10-08 18:43:56.868108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.450 qpair failed and we were unable to recover it. 00:33:28.450 [2024-10-08 18:43:56.868341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.450 [2024-10-08 18:43:56.868392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.450 qpair failed and we were unable to recover it. 00:33:28.450 [2024-10-08 18:43:56.868611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.450 [2024-10-08 18:43:56.868635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.450 qpair failed and we were unable to recover it. 00:33:28.450 [2024-10-08 18:43:56.868794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.450 [2024-10-08 18:43:56.868821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.450 qpair failed and we were unable to recover it. 00:33:28.450 [2024-10-08 18:43:56.869072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.450 [2024-10-08 18:43:56.869122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.450 qpair failed and we were unable to recover it. 00:33:28.450 [2024-10-08 18:43:56.869330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.450 [2024-10-08 18:43:56.869380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.450 qpair failed and we were unable to recover it. 00:33:28.450 [2024-10-08 18:43:56.869508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.450 [2024-10-08 18:43:56.869533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.450 qpair failed and we were unable to recover it. 
00:33:28.450 [2024-10-08 18:43:56.869775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.450 [2024-10-08 18:43:56.869827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.450 qpair failed and we were unable to recover it. 00:33:28.450 [2024-10-08 18:43:56.870039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.450 [2024-10-08 18:43:56.870094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.450 qpair failed and we were unable to recover it. 00:33:28.450 [2024-10-08 18:43:56.870298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.450 [2024-10-08 18:43:56.870347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.450 qpair failed and we were unable to recover it. 00:33:28.450 [2024-10-08 18:43:56.870568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.450 [2024-10-08 18:43:56.870592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.450 qpair failed and we were unable to recover it. 00:33:28.450 [2024-10-08 18:43:56.870803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.450 [2024-10-08 18:43:56.870859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.450 qpair failed and we were unable to recover it. 00:33:28.450 [2024-10-08 18:43:56.871121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.450 [2024-10-08 18:43:56.871170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.450 qpair failed and we were unable to recover it. 00:33:28.450 [2024-10-08 18:43:56.871307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.450 [2024-10-08 18:43:56.871356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.450 qpair failed and we were unable to recover it. 00:33:28.450 [2024-10-08 18:43:56.871546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.450 [2024-10-08 18:43:56.871578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.450 qpair failed and we were unable to recover it. 00:33:28.450 [2024-10-08 18:43:56.871789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.450 [2024-10-08 18:43:56.871840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.450 qpair failed and we were unable to recover it. 00:33:28.450 [2024-10-08 18:43:56.871991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.450 [2024-10-08 18:43:56.872049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.450 qpair failed and we were unable to recover it. 
00:33:28.450 [2024-10-08 18:43:56.872266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.450 [2024-10-08 18:43:56.872318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.450 qpair failed and we were unable to recover it. 00:33:28.450 [2024-10-08 18:43:56.872515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.450 [2024-10-08 18:43:56.872539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.450 qpair failed and we were unable to recover it. 00:33:28.450 [2024-10-08 18:43:56.872801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.450 [2024-10-08 18:43:56.872850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.450 qpair failed and we were unable to recover it. 00:33:28.450 [2024-10-08 18:43:56.872981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.450 [2024-10-08 18:43:56.873070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.450 qpair failed and we were unable to recover it. 00:33:28.450 [2024-10-08 18:43:56.873189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.450 [2024-10-08 18:43:56.873219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.450 qpair failed and we were unable to recover it. 00:33:28.450 [2024-10-08 18:43:56.873343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.450 [2024-10-08 18:43:56.873368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.450 qpair failed and we were unable to recover it. 00:33:28.450 [2024-10-08 18:43:56.873585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.450 [2024-10-08 18:43:56.873609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.450 qpair failed and we were unable to recover it. 00:33:28.450 [2024-10-08 18:43:56.873775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.450 [2024-10-08 18:43:56.873825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.450 qpair failed and we were unable to recover it. 00:33:28.450 [2024-10-08 18:43:56.874059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.450 [2024-10-08 18:43:56.874115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.450 qpair failed and we were unable to recover it. 00:33:28.450 [2024-10-08 18:43:56.874316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.450 [2024-10-08 18:43:56.874375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.450 qpair failed and we were unable to recover it. 
00:33:28.450 [2024-10-08 18:43:56.874580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.450 [2024-10-08 18:43:56.874604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.450 qpair failed and we were unable to recover it. 00:33:28.450 [2024-10-08 18:43:56.874835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.450 [2024-10-08 18:43:56.874886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.450 qpair failed and we were unable to recover it. 00:33:28.450 [2024-10-08 18:43:56.875050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.450 [2024-10-08 18:43:56.875105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.450 qpair failed and we were unable to recover it. 00:33:28.450 [2024-10-08 18:43:56.875358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.450 [2024-10-08 18:43:56.875410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.450 qpair failed and we were unable to recover it. 00:33:28.450 [2024-10-08 18:43:56.875584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.450 [2024-10-08 18:43:56.875608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.450 qpair failed and we were unable to recover it. 00:33:28.450 [2024-10-08 18:43:56.875886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.450 [2024-10-08 18:43:56.875940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.450 qpair failed and we were unable to recover it. 00:33:28.450 [2024-10-08 18:43:56.876122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.450 [2024-10-08 18:43:56.876163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.450 qpair failed and we were unable to recover it. 00:33:28.450 [2024-10-08 18:43:56.876364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.450 [2024-10-08 18:43:56.876414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.450 qpair failed and we were unable to recover it. 00:33:28.450 [2024-10-08 18:43:56.876601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.450 [2024-10-08 18:43:56.876625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.450 qpair failed and we were unable to recover it. 00:33:28.450 [2024-10-08 18:43:56.876871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.450 [2024-10-08 18:43:56.876921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.450 qpair failed and we were unable to recover it. 
00:33:28.450 [2024-10-08 18:43:56.877094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.450 [2024-10-08 18:43:56.877137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.450 qpair failed and we were unable to recover it. 00:33:28.450 [2024-10-08 18:43:56.877329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.450 [2024-10-08 18:43:56.877380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.450 qpair failed and we were unable to recover it. 00:33:28.450 [2024-10-08 18:43:56.877565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.451 [2024-10-08 18:43:56.877589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.451 qpair failed and we were unable to recover it. 00:33:28.451 [2024-10-08 18:43:56.877823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.451 [2024-10-08 18:43:56.877877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.451 qpair failed and we were unable to recover it. 00:33:28.451 [2024-10-08 18:43:56.878045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.451 [2024-10-08 18:43:56.878095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.451 qpair failed and we were unable to recover it. 00:33:28.451 [2024-10-08 18:43:56.878359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.451 [2024-10-08 18:43:56.878410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.451 qpair failed and we were unable to recover it. 00:33:28.451 [2024-10-08 18:43:56.878635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.451 [2024-10-08 18:43:56.878680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.451 qpair failed and we were unable to recover it. 00:33:28.451 [2024-10-08 18:43:56.878930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.451 [2024-10-08 18:43:56.878954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.451 qpair failed and we were unable to recover it. 00:33:28.451 [2024-10-08 18:43:56.879099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.451 [2024-10-08 18:43:56.879153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.451 qpair failed and we were unable to recover it. 00:33:28.451 [2024-10-08 18:43:56.879345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.451 [2024-10-08 18:43:56.879396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.451 qpair failed and we were unable to recover it. 
00:33:28.451 [2024-10-08 18:43:56.879576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.451 [2024-10-08 18:43:56.879601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.451 qpair failed and we were unable to recover it. 00:33:28.451 [2024-10-08 18:43:56.879759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.451 [2024-10-08 18:43:56.879785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.451 qpair failed and we were unable to recover it. 00:33:28.451 [2024-10-08 18:43:56.879980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.451 [2024-10-08 18:43:56.880028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.451 qpair failed and we were unable to recover it. 00:33:28.451 [2024-10-08 18:43:56.880249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.451 [2024-10-08 18:43:56.880298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.451 qpair failed and we were unable to recover it. 00:33:28.451 [2024-10-08 18:43:56.880433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.451 [2024-10-08 18:43:56.880457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.451 qpair failed and we were unable to recover it. 00:33:28.451 [2024-10-08 18:43:56.880703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.451 [2024-10-08 18:43:56.880730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.451 qpair failed and we were unable to recover it. 00:33:28.451 [2024-10-08 18:43:56.880922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.451 [2024-10-08 18:43:56.880972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.451 qpair failed and we were unable to recover it. 00:33:28.451 [2024-10-08 18:43:56.881162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.451 [2024-10-08 18:43:56.881215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.451 qpair failed and we were unable to recover it. 00:33:28.451 [2024-10-08 18:43:56.881415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.451 [2024-10-08 18:43:56.881464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.451 qpair failed and we were unable to recover it. 00:33:28.451 [2024-10-08 18:43:56.881684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.451 [2024-10-08 18:43:56.881710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.451 qpair failed and we were unable to recover it. 
00:33:28.451 [2024-10-08 18:43:56.881881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.451 [2024-10-08 18:43:56.881932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.451 qpair failed and we were unable to recover it. 00:33:28.451 [2024-10-08 18:43:56.882077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.451 [2024-10-08 18:43:56.882128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.451 qpair failed and we were unable to recover it. 00:33:28.451 [2024-10-08 18:43:56.882361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.451 [2024-10-08 18:43:56.882414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.451 qpair failed and we were unable to recover it. 00:33:28.451 [2024-10-08 18:43:56.882589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.451 [2024-10-08 18:43:56.882612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.451 qpair failed and we were unable to recover it. 00:33:28.451 [2024-10-08 18:43:56.882807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.451 [2024-10-08 18:43:56.882831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.451 qpair failed and we were unable to recover it. 00:33:28.451 [2024-10-08 18:43:56.882979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.451 [2024-10-08 18:43:56.883029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.451 qpair failed and we were unable to recover it. 00:33:28.451 [2024-10-08 18:43:56.883216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.451 [2024-10-08 18:43:56.883267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.451 qpair failed and we were unable to recover it. 00:33:28.451 [2024-10-08 18:43:56.883445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.451 [2024-10-08 18:43:56.883469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.451 qpair failed and we were unable to recover it. 00:33:28.451 [2024-10-08 18:43:56.883646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.451 [2024-10-08 18:43:56.883690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.451 qpair failed and we were unable to recover it. 00:33:28.451 [2024-10-08 18:43:56.883883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.451 [2024-10-08 18:43:56.883932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.451 qpair failed and we were unable to recover it. 
00:33:28.451 [2024-10-08 18:43:56.884136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.451 [2024-10-08 18:43:56.884187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.451 qpair failed and we were unable to recover it. 00:33:28.451 [2024-10-08 18:43:56.884443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.451 [2024-10-08 18:43:56.884492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.451 qpair failed and we were unable to recover it. 00:33:28.451 [2024-10-08 18:43:56.884676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.451 [2024-10-08 18:43:56.884701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.451 qpair failed and we were unable to recover it. 00:33:28.451 [2024-10-08 18:43:56.884861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.451 [2024-10-08 18:43:56.884912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.451 qpair failed and we were unable to recover it. 00:33:28.451 [2024-10-08 18:43:56.885143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.451 [2024-10-08 18:43:56.885194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.451 qpair failed and we were unable to recover it. 00:33:28.451 [2024-10-08 18:43:56.885345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.451 [2024-10-08 18:43:56.885394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.451 qpair failed and we were unable to recover it. 00:33:28.451 [2024-10-08 18:43:56.885629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.451 [2024-10-08 18:43:56.885675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.451 qpair failed and we were unable to recover it. 00:33:28.451 [2024-10-08 18:43:56.885801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.451 [2024-10-08 18:43:56.885825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.451 qpair failed and we were unable to recover it. 00:33:28.451 [2024-10-08 18:43:56.886013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.451 [2024-10-08 18:43:56.886063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.451 qpair failed and we were unable to recover it. 00:33:28.451 [2024-10-08 18:43:56.886225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.451 [2024-10-08 18:43:56.886277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.451 qpair failed and we were unable to recover it. 
00:33:28.451 [2024-10-08 18:43:56.886469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.451 [2024-10-08 18:43:56.886520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.451 qpair failed and we were unable to recover it. 00:33:28.451 [2024-10-08 18:43:56.886784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.451 [2024-10-08 18:43:56.886849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.451 qpair failed and we were unable to recover it. 00:33:28.451 [2024-10-08 18:43:56.887054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.452 [2024-10-08 18:43:56.887102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.452 qpair failed and we were unable to recover it. 00:33:28.452 [2024-10-08 18:43:56.887305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.452 [2024-10-08 18:43:56.887347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.452 qpair failed and we were unable to recover it. 00:33:28.452 [2024-10-08 18:43:56.887521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.452 [2024-10-08 18:43:56.887545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.452 qpair failed and we were unable to recover it. 00:33:28.452 [2024-10-08 18:43:56.887803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.452 [2024-10-08 18:43:56.887853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.452 qpair failed and we were unable to recover it. 00:33:28.452 [2024-10-08 18:43:56.888103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.452 [2024-10-08 18:43:56.888152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.452 qpair failed and we were unable to recover it. 00:33:28.452 [2024-10-08 18:43:56.888330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.452 [2024-10-08 18:43:56.888372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.452 qpair failed and we were unable to recover it. 00:33:28.452 [2024-10-08 18:43:56.888490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.452 [2024-10-08 18:43:56.888513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.452 qpair failed and we were unable to recover it. 00:33:28.452 [2024-10-08 18:43:56.888740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.452 [2024-10-08 18:43:56.888792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.452 qpair failed and we were unable to recover it. 
00:33:28.452 [2024-10-08 18:43:56.888934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.452 [2024-10-08 18:43:56.888980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.452 qpair failed and we were unable to recover it. 00:33:28.452 [2024-10-08 18:43:56.889186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.452 [2024-10-08 18:43:56.889210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.452 qpair failed and we were unable to recover it. 00:33:28.452 [2024-10-08 18:43:56.889374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.452 [2024-10-08 18:43:56.889398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.452 qpair failed and we were unable to recover it. 00:33:28.452 [2024-10-08 18:43:56.889601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.452 [2024-10-08 18:43:56.889625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.452 qpair failed and we were unable to recover it. 00:33:28.452 [2024-10-08 18:43:56.889793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.452 [2024-10-08 18:43:56.889818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.452 qpair failed and we were unable to recover it. 00:33:28.452 [2024-10-08 18:43:56.889988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.452 [2024-10-08 18:43:56.890028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.452 qpair failed and we were unable to recover it. 00:33:28.452 [2024-10-08 18:43:56.890265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.452 [2024-10-08 18:43:56.890288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.452 qpair failed and we were unable to recover it. 00:33:28.452 [2024-10-08 18:43:56.890513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.452 [2024-10-08 18:43:56.890537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.452 qpair failed and we were unable to recover it. 00:33:28.452 [2024-10-08 18:43:56.890678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.452 [2024-10-08 18:43:56.890741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.452 qpair failed and we were unable to recover it. 00:33:28.452 [2024-10-08 18:43:56.890953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.452 [2024-10-08 18:43:56.891006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.452 qpair failed and we were unable to recover it. 
00:33:28.452 [2024-10-08 18:43:56.891250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.452 [2024-10-08 18:43:56.891301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.452 qpair failed and we were unable to recover it. 00:33:28.452 [2024-10-08 18:43:56.891485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.452 [2024-10-08 18:43:56.891517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.452 qpair failed and we were unable to recover it. 00:33:28.452 [2024-10-08 18:43:56.891748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.452 [2024-10-08 18:43:56.891801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.452 qpair failed and we were unable to recover it. 00:33:28.452 [2024-10-08 18:43:56.891983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.452 [2024-10-08 18:43:56.892034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.452 qpair failed and we were unable to recover it. 00:33:28.452 [2024-10-08 18:43:56.892239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.452 [2024-10-08 18:43:56.892282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.452 qpair failed and we were unable to recover it. 00:33:28.452 [2024-10-08 18:43:56.892465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.452 [2024-10-08 18:43:56.892489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.452 qpair failed and we were unable to recover it. 00:33:28.452 [2024-10-08 18:43:56.892621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.452 [2024-10-08 18:43:56.892689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.452 qpair failed and we were unable to recover it. 00:33:28.452 [2024-10-08 18:43:56.892900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.452 [2024-10-08 18:43:56.892951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.452 qpair failed and we were unable to recover it. 00:33:28.452 [2024-10-08 18:43:56.893188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.452 [2024-10-08 18:43:56.893241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.452 qpair failed and we were unable to recover it. 00:33:28.452 [2024-10-08 18:43:56.893479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.452 [2024-10-08 18:43:56.893503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.452 qpair failed and we were unable to recover it. 
00:33:28.452 [2024-10-08 18:43:56.893716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.452 [2024-10-08 18:43:56.893803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.452 qpair failed and we were unable to recover it. 00:33:28.452 [2024-10-08 18:43:56.893955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.452 [2024-10-08 18:43:56.894014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.452 qpair failed and we were unable to recover it. 00:33:28.452 [2024-10-08 18:43:56.894219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.452 [2024-10-08 18:43:56.894268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.452 qpair failed and we were unable to recover it. 00:33:28.452 [2024-10-08 18:43:56.894392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.452 [2024-10-08 18:43:56.894416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.452 qpair failed and we were unable to recover it. 00:33:28.452 [2024-10-08 18:43:56.894606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.452 [2024-10-08 18:43:56.894644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.452 qpair failed and we were unable to recover it. 00:33:28.452 [2024-10-08 18:43:56.894796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.452 [2024-10-08 18:43:56.894866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.452 qpair failed and we were unable to recover it. 00:33:28.452 [2024-10-08 18:43:56.894998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.452 [2024-10-08 18:43:56.895086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.452 qpair failed and we were unable to recover it. 00:33:28.452 [2024-10-08 18:43:56.895190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.452 [2024-10-08 18:43:56.895230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.452 qpair failed and we were unable to recover it. 00:33:28.452 [2024-10-08 18:43:56.895416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.452 [2024-10-08 18:43:56.895455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.452 qpair failed and we were unable to recover it. 00:33:28.453 [2024-10-08 18:43:56.895590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.453 [2024-10-08 18:43:56.895614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.453 qpair failed and we were unable to recover it. 
00:33:28.457 [2024-10-08 18:43:56.941092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.457 [2024-10-08 18:43:56.941144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.457 qpair failed and we were unable to recover it. 00:33:28.457 [2024-10-08 18:43:56.941241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.457 [2024-10-08 18:43:56.941329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.457 qpair failed and we were unable to recover it. 00:33:28.457 [2024-10-08 18:43:56.941522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.457 [2024-10-08 18:43:56.941548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.457 qpair failed and we were unable to recover it. 00:33:28.457 [2024-10-08 18:43:56.941740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.457 [2024-10-08 18:43:56.941790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.457 qpair failed and we were unable to recover it. 00:33:28.457 [2024-10-08 18:43:56.941959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.457 [2024-10-08 18:43:56.942011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.457 qpair failed and we were unable to recover it. 00:33:28.457 [2024-10-08 18:43:56.942210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.457 [2024-10-08 18:43:56.942260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.458 qpair failed and we were unable to recover it. 00:33:28.458 [2024-10-08 18:43:56.942466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.458 [2024-10-08 18:43:56.942492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.458 qpair failed and we were unable to recover it. 00:33:28.458 [2024-10-08 18:43:56.942693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.458 [2024-10-08 18:43:56.942718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.458 qpair failed and we were unable to recover it. 00:33:28.458 [2024-10-08 18:43:56.942878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.458 [2024-10-08 18:43:56.942921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.458 qpair failed and we were unable to recover it. 00:33:28.458 [2024-10-08 18:43:56.943118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.458 [2024-10-08 18:43:56.943168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.458 qpair failed and we were unable to recover it. 
00:33:28.458 [2024-10-08 18:43:56.943333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.458 [2024-10-08 18:43:56.943383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.458 qpair failed and we were unable to recover it. 00:33:28.458 [2024-10-08 18:43:56.943562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.458 [2024-10-08 18:43:56.943592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.458 qpair failed and we were unable to recover it. 00:33:28.458 [2024-10-08 18:43:56.943744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.458 [2024-10-08 18:43:56.943799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.458 qpair failed and we were unable to recover it. 00:33:28.458 [2024-10-08 18:43:56.943982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.458 [2024-10-08 18:43:56.944033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.458 qpair failed and we were unable to recover it. 00:33:28.458 [2024-10-08 18:43:56.944197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.458 [2024-10-08 18:43:56.944247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.458 qpair failed and we were unable to recover it. 00:33:28.458 [2024-10-08 18:43:56.944428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.458 [2024-10-08 18:43:56.944453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.458 qpair failed and we were unable to recover it. 00:33:28.458 [2024-10-08 18:43:56.944565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.458 [2024-10-08 18:43:56.944604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.458 qpair failed and we were unable to recover it. 00:33:28.458 [2024-10-08 18:43:56.944791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.458 [2024-10-08 18:43:56.944843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.458 qpair failed and we were unable to recover it. 00:33:28.458 [2024-10-08 18:43:56.945029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.458 [2024-10-08 18:43:56.945055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.458 qpair failed and we were unable to recover it. 00:33:28.458 [2024-10-08 18:43:56.945315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.458 [2024-10-08 18:43:56.945365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.458 qpair failed and we were unable to recover it. 
00:33:28.458 [2024-10-08 18:43:56.945581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.458 [2024-10-08 18:43:56.945605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.458 qpair failed and we were unable to recover it. 00:33:28.458 [2024-10-08 18:43:56.945821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.458 [2024-10-08 18:43:56.945871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.458 qpair failed and we were unable to recover it. 00:33:28.739 [2024-10-08 18:43:56.945982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.739 [2024-10-08 18:43:56.946070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.739 qpair failed and we were unable to recover it. 00:33:28.739 [2024-10-08 18:43:56.946216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.740 [2024-10-08 18:43:56.946277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.740 qpair failed and we were unable to recover it. 00:33:28.740 [2024-10-08 18:43:56.946406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.740 [2024-10-08 18:43:56.946446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.740 qpair failed and we were unable to recover it. 00:33:28.740 [2024-10-08 18:43:56.946550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.740 [2024-10-08 18:43:56.946575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.740 qpair failed and we were unable to recover it. 00:33:28.740 [2024-10-08 18:43:56.946771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.740 [2024-10-08 18:43:56.946827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.740 qpair failed and we were unable to recover it. 00:33:28.740 [2024-10-08 18:43:56.947030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.740 [2024-10-08 18:43:56.947081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.740 qpair failed and we were unable to recover it. 00:33:28.740 [2024-10-08 18:43:56.947258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.740 [2024-10-08 18:43:56.947301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.740 qpair failed and we were unable to recover it. 00:33:28.740 [2024-10-08 18:43:56.947473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.740 [2024-10-08 18:43:56.947499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.740 qpair failed and we were unable to recover it. 
00:33:28.740 [2024-10-08 18:43:56.947680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.740 [2024-10-08 18:43:56.947707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.740 qpair failed and we were unable to recover it. 00:33:28.740 [2024-10-08 18:43:56.947878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.740 [2024-10-08 18:43:56.947930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.740 qpair failed and we were unable to recover it. 00:33:28.740 [2024-10-08 18:43:56.948118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.740 [2024-10-08 18:43:56.948168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.740 qpair failed and we were unable to recover it. 00:33:28.740 [2024-10-08 18:43:56.948369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.740 [2024-10-08 18:43:56.948395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.740 qpair failed and we were unable to recover it. 00:33:28.740 [2024-10-08 18:43:56.948532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.740 [2024-10-08 18:43:56.948557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.740 qpair failed and we were unable to recover it. 00:33:28.740 [2024-10-08 18:43:56.948736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.740 [2024-10-08 18:43:56.948789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.740 qpair failed and we were unable to recover it. 00:33:28.740 [2024-10-08 18:43:56.948978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.740 [2024-10-08 18:43:56.949029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.740 qpair failed and we were unable to recover it. 00:33:28.740 [2024-10-08 18:43:56.949215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.740 [2024-10-08 18:43:56.949265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.740 qpair failed and we were unable to recover it. 00:33:28.740 [2024-10-08 18:43:56.949447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.740 [2024-10-08 18:43:56.949475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.740 qpair failed and we were unable to recover it. 00:33:28.740 [2024-10-08 18:43:56.949662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.740 [2024-10-08 18:43:56.949689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.740 qpair failed and we were unable to recover it. 
00:33:28.740 [2024-10-08 18:43:56.949866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.740 [2024-10-08 18:43:56.949917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.740 qpair failed and we were unable to recover it. 00:33:28.740 [2024-10-08 18:43:56.950112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.740 [2024-10-08 18:43:56.950163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.740 qpair failed and we were unable to recover it. 00:33:28.740 [2024-10-08 18:43:56.950332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.740 [2024-10-08 18:43:56.950384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.740 qpair failed and we were unable to recover it. 00:33:28.740 [2024-10-08 18:43:56.950551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.740 [2024-10-08 18:43:56.950576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.740 qpair failed and we were unable to recover it. 00:33:28.740 [2024-10-08 18:43:56.950837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.740 [2024-10-08 18:43:56.950888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.740 qpair failed and we were unable to recover it. 00:33:28.740 [2024-10-08 18:43:56.951107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.740 [2024-10-08 18:43:56.951159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.740 qpair failed and we were unable to recover it. 00:33:28.740 [2024-10-08 18:43:56.951402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.740 [2024-10-08 18:43:56.951454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.740 qpair failed and we were unable to recover it. 00:33:28.740 [2024-10-08 18:43:56.951574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.740 [2024-10-08 18:43:56.951598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.740 qpair failed and we were unable to recover it. 00:33:28.740 [2024-10-08 18:43:56.951872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.740 [2024-10-08 18:43:56.951923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.740 qpair failed and we were unable to recover it. 00:33:28.740 [2024-10-08 18:43:56.952119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.740 [2024-10-08 18:43:56.952169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.740 qpair failed and we were unable to recover it. 
00:33:28.740 [2024-10-08 18:43:56.952302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.740 [2024-10-08 18:43:56.952361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.740 qpair failed and we were unable to recover it. 00:33:28.740 [2024-10-08 18:43:56.952587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.740 [2024-10-08 18:43:56.952616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.740 qpair failed and we were unable to recover it. 00:33:28.740 [2024-10-08 18:43:56.952836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.740 [2024-10-08 18:43:56.952887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.740 qpair failed and we were unable to recover it. 00:33:28.740 [2024-10-08 18:43:56.953042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.740 [2024-10-08 18:43:56.953101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.740 qpair failed and we were unable to recover it. 00:33:28.740 [2024-10-08 18:43:56.953300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.740 [2024-10-08 18:43:56.953350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.740 qpair failed and we were unable to recover it. 00:33:28.740 [2024-10-08 18:43:56.953530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.740 [2024-10-08 18:43:56.953555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.740 qpair failed and we were unable to recover it. 00:33:28.740 [2024-10-08 18:43:56.953730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.740 [2024-10-08 18:43:56.953781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.740 qpair failed and we were unable to recover it. 00:33:28.740 [2024-10-08 18:43:56.953963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.740 [2024-10-08 18:43:56.954012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.740 qpair failed and we were unable to recover it. 00:33:28.740 [2024-10-08 18:43:56.954182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.740 [2024-10-08 18:43:56.954224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.740 qpair failed and we were unable to recover it. 00:33:28.740 [2024-10-08 18:43:56.954397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.741 [2024-10-08 18:43:56.954428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.741 qpair failed and we were unable to recover it. 
00:33:28.741 [2024-10-08 18:43:56.954693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.741 [2024-10-08 18:43:56.954719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.741 qpair failed and we were unable to recover it. 00:33:28.741 [2024-10-08 18:43:56.954921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.741 [2024-10-08 18:43:56.954966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.741 qpair failed and we were unable to recover it. 00:33:28.741 [2024-10-08 18:43:56.955167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.741 [2024-10-08 18:43:56.955215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.741 qpair failed and we were unable to recover it. 00:33:28.741 [2024-10-08 18:43:56.955444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.741 [2024-10-08 18:43:56.955495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.741 qpair failed and we were unable to recover it. 00:33:28.741 [2024-10-08 18:43:56.955704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.741 [2024-10-08 18:43:56.955730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.741 qpair failed and we were unable to recover it. 00:33:28.741 [2024-10-08 18:43:56.955996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.741 [2024-10-08 18:43:56.956047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.741 qpair failed and we were unable to recover it. 00:33:28.741 [2024-10-08 18:43:56.956288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.741 [2024-10-08 18:43:56.956339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.741 qpair failed and we were unable to recover it. 00:33:28.741 [2024-10-08 18:43:56.956508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.741 [2024-10-08 18:43:56.956533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.741 qpair failed and we were unable to recover it. 00:33:28.741 [2024-10-08 18:43:56.956638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.741 [2024-10-08 18:43:56.956692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.741 qpair failed and we were unable to recover it. 00:33:28.741 [2024-10-08 18:43:56.956856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.741 [2024-10-08 18:43:56.956911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.741 qpair failed and we were unable to recover it. 
00:33:28.741 [2024-10-08 18:43:56.957047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.741 [2024-10-08 18:43:56.957101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.741 qpair failed and we were unable to recover it. 00:33:28.741 [2024-10-08 18:43:56.957266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.741 [2024-10-08 18:43:56.957317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.741 qpair failed and we were unable to recover it. 00:33:28.741 [2024-10-08 18:43:56.957488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.741 [2024-10-08 18:43:56.957513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.741 qpair failed and we were unable to recover it. 00:33:28.741 [2024-10-08 18:43:56.957657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.741 [2024-10-08 18:43:56.957682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.741 qpair failed and we were unable to recover it. 00:33:28.741 [2024-10-08 18:43:56.957854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.741 [2024-10-08 18:43:56.957880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.741 qpair failed and we were unable to recover it. 00:33:28.741 [2024-10-08 18:43:56.957992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.741 [2024-10-08 18:43:56.958017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.741 qpair failed and we were unable to recover it. 00:33:28.741 [2024-10-08 18:43:56.958159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.741 [2024-10-08 18:43:56.958199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.741 qpair failed and we were unable to recover it. 00:33:28.741 [2024-10-08 18:43:56.958390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.741 [2024-10-08 18:43:56.958429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.741 qpair failed and we were unable to recover it. 00:33:28.741 [2024-10-08 18:43:56.958611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.741 [2024-10-08 18:43:56.958656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.741 qpair failed and we were unable to recover it. 00:33:28.741 [2024-10-08 18:43:56.958811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.741 [2024-10-08 18:43:56.958860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.741 qpair failed and we were unable to recover it. 
00:33:28.741 [2024-10-08 18:43:56.959070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.741 [2024-10-08 18:43:56.959119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.741 qpair failed and we were unable to recover it. 00:33:28.741 [2024-10-08 18:43:56.959374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.741 [2024-10-08 18:43:56.959421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.741 qpair failed and we were unable to recover it. 00:33:28.741 [2024-10-08 18:43:56.959726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.741 [2024-10-08 18:43:56.959752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.741 qpair failed and we were unable to recover it. 00:33:28.741 [2024-10-08 18:43:56.960003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.741 [2024-10-08 18:43:56.960055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.741 qpair failed and we were unable to recover it. 00:33:28.741 [2024-10-08 18:43:56.960264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.741 [2024-10-08 18:43:56.960314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.741 qpair failed and we were unable to recover it. 00:33:28.741 [2024-10-08 18:43:56.960533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.741 [2024-10-08 18:43:56.960558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.741 qpair failed and we were unable to recover it. 00:33:28.741 [2024-10-08 18:43:56.960813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.741 [2024-10-08 18:43:56.960838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.741 qpair failed and we were unable to recover it. 00:33:28.741 [2024-10-08 18:43:56.960990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.741 [2024-10-08 18:43:56.961048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.741 qpair failed and we were unable to recover it. 00:33:28.741 [2024-10-08 18:43:56.961201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.741 [2024-10-08 18:43:56.961251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.741 qpair failed and we were unable to recover it. 00:33:28.741 [2024-10-08 18:43:56.961434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.741 [2024-10-08 18:43:56.961484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.741 qpair failed and we were unable to recover it. 
00:33:28.741 [2024-10-08 18:43:56.961707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.741 [2024-10-08 18:43:56.961732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.741 qpair failed and we were unable to recover it. 00:33:28.741 [2024-10-08 18:43:56.961869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.741 [2024-10-08 18:43:56.961922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.741 qpair failed and we were unable to recover it. 00:33:28.741 [2024-10-08 18:43:56.962134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.741 [2024-10-08 18:43:56.962184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.741 qpair failed and we were unable to recover it. 00:33:28.741 [2024-10-08 18:43:56.962356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.741 [2024-10-08 18:43:56.962406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.741 qpair failed and we were unable to recover it. 00:33:28.741 [2024-10-08 18:43:56.962588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.741 [2024-10-08 18:43:56.962612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.741 qpair failed and we were unable to recover it. 00:33:28.741 [2024-10-08 18:43:56.962870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.741 [2024-10-08 18:43:56.962920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.741 qpair failed and we were unable to recover it. 00:33:28.741 [2024-10-08 18:43:56.963141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.741 [2024-10-08 18:43:56.963193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.741 qpair failed and we were unable to recover it. 00:33:28.741 [2024-10-08 18:43:56.963452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.742 [2024-10-08 18:43:56.963500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.742 qpair failed and we were unable to recover it. 00:33:28.742 [2024-10-08 18:43:56.963726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.742 [2024-10-08 18:43:56.963815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.742 qpair failed and we were unable to recover it. 00:33:28.742 [2024-10-08 18:43:56.964024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.742 [2024-10-08 18:43:56.964073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.742 qpair failed and we were unable to recover it. 
00:33:28.742 [2024-10-08 18:43:56.964282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.742 [2024-10-08 18:43:56.964332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.742 qpair failed and we were unable to recover it. 00:33:28.742 [2024-10-08 18:43:56.964574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.742 [2024-10-08 18:43:56.964599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.742 qpair failed and we were unable to recover it. 00:33:28.742 [2024-10-08 18:43:56.964816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.742 [2024-10-08 18:43:56.964868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.742 qpair failed and we were unable to recover it. 00:33:28.742 [2024-10-08 18:43:56.965098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.742 [2024-10-08 18:43:56.965150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.742 qpair failed and we were unable to recover it. 00:33:28.742 [2024-10-08 18:43:56.965331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.742 [2024-10-08 18:43:56.965381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.742 qpair failed and we were unable to recover it. 00:33:28.742 [2024-10-08 18:43:56.965646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.742 [2024-10-08 18:43:56.965678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.742 qpair failed and we were unable to recover it. 00:33:28.742 [2024-10-08 18:43:56.965884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.742 [2024-10-08 18:43:56.965911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.742 qpair failed and we were unable to recover it. 00:33:28.742 [2024-10-08 18:43:56.966093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.742 [2024-10-08 18:43:56.966144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.742 qpair failed and we were unable to recover it. 00:33:28.742 [2024-10-08 18:43:56.966322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.742 [2024-10-08 18:43:56.966372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.742 qpair failed and we were unable to recover it. 00:33:28.742 [2024-10-08 18:43:56.966614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.742 [2024-10-08 18:43:56.966658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.742 qpair failed and we were unable to recover it. 
00:33:28.742 [2024-10-08 18:43:56.966866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.742 [2024-10-08 18:43:56.966892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.742 qpair failed and we were unable to recover it. 00:33:28.742 [2024-10-08 18:43:56.967013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.742 [2024-10-08 18:43:56.967078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.742 qpair failed and we were unable to recover it. 00:33:28.742 [2024-10-08 18:43:56.967262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.742 [2024-10-08 18:43:56.967313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.742 qpair failed and we were unable to recover it. 00:33:28.742 [2024-10-08 18:43:56.967586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.742 [2024-10-08 18:43:56.967611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.742 qpair failed and we were unable to recover it. 00:33:28.742 [2024-10-08 18:43:56.967763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.742 [2024-10-08 18:43:56.967790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.742 qpair failed and we were unable to recover it. 00:33:28.742 [2024-10-08 18:43:56.967917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.742 [2024-10-08 18:43:56.967968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.742 qpair failed and we were unable to recover it. 00:33:28.742 [2024-10-08 18:43:56.968059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.742 [2024-10-08 18:43:56.968084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.742 qpair failed and we were unable to recover it. 00:33:28.742 [2024-10-08 18:43:56.968254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.742 [2024-10-08 18:43:56.968322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.742 qpair failed and we were unable to recover it. 00:33:28.742 [2024-10-08 18:43:56.968538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.742 [2024-10-08 18:43:56.968564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.742 qpair failed and we were unable to recover it. 00:33:28.742 [2024-10-08 18:43:56.968761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.742 [2024-10-08 18:43:56.968787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.742 qpair failed and we were unable to recover it. 
00:33:28.742 [2024-10-08 18:43:56.968911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.742 [2024-10-08 18:43:56.968970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.742 qpair failed and we were unable to recover it. 00:33:28.742 [2024-10-08 18:43:56.969102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.742 [2024-10-08 18:43:56.969142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.742 qpair failed and we were unable to recover it. 00:33:28.742 [2024-10-08 18:43:56.969318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.742 [2024-10-08 18:43:56.969368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.742 qpair failed and we were unable to recover it. 00:33:28.742 [2024-10-08 18:43:56.969518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.742 [2024-10-08 18:43:56.969543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.742 qpair failed and we were unable to recover it. 00:33:28.742 [2024-10-08 18:43:56.969724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.742 [2024-10-08 18:43:56.969773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.742 qpair failed and we were unable to recover it. 00:33:28.742 [2024-10-08 18:43:56.970038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.742 [2024-10-08 18:43:56.970088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.742 qpair failed and we were unable to recover it. 00:33:28.742 [2024-10-08 18:43:56.970243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.742 [2024-10-08 18:43:56.970271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.742 qpair failed and we were unable to recover it. 00:33:28.742 [2024-10-08 18:43:56.970438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.742 [2024-10-08 18:43:56.970463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.742 qpair failed and we were unable to recover it. 00:33:28.742 [2024-10-08 18:43:56.970603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.742 [2024-10-08 18:43:56.970643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.742 qpair failed and we were unable to recover it. 00:33:28.742 [2024-10-08 18:43:56.970867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.742 [2024-10-08 18:43:56.970919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.742 qpair failed and we were unable to recover it. 
00:33:28.742 [2024-10-08 18:43:56.971160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.742 [2024-10-08 18:43:56.971211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.742 qpair failed and we were unable to recover it. 00:33:28.742 [2024-10-08 18:43:56.971465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.742 [2024-10-08 18:43:56.971520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.742 qpair failed and we were unable to recover it. 00:33:28.742 [2024-10-08 18:43:56.971733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.742 [2024-10-08 18:43:56.971793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.742 qpair failed and we were unable to recover it. 00:33:28.742 [2024-10-08 18:43:56.971948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.742 [2024-10-08 18:43:56.972002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.742 qpair failed and we were unable to recover it. 00:33:28.742 [2024-10-08 18:43:56.972200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.742 [2024-10-08 18:43:56.972252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.742 qpair failed and we were unable to recover it. 00:33:28.743 [2024-10-08 18:43:56.972428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.743 [2024-10-08 18:43:56.972462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.743 qpair failed and we were unable to recover it. 00:33:28.743 [2024-10-08 18:43:56.972646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.743 [2024-10-08 18:43:56.972685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.743 qpair failed and we were unable to recover it. 00:33:28.743 [2024-10-08 18:43:56.972857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.743 [2024-10-08 18:43:56.972907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.743 qpair failed and we were unable to recover it. 00:33:28.743 [2024-10-08 18:43:56.973057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.743 [2024-10-08 18:43:56.973109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.743 qpair failed and we were unable to recover it. 00:33:28.743 [2024-10-08 18:43:56.973336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.743 [2024-10-08 18:43:56.973386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.743 qpair failed and we were unable to recover it. 
00:33:28.743 [2024-10-08 18:43:56.973597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.743 [2024-10-08 18:43:56.973621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.743 qpair failed and we were unable to recover it. 00:33:28.743 [2024-10-08 18:43:56.973828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.743 [2024-10-08 18:43:56.973878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.743 qpair failed and we were unable to recover it. 00:33:28.743 [2024-10-08 18:43:56.974075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.743 [2024-10-08 18:43:56.974127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.743 qpair failed and we were unable to recover it. 00:33:28.743 [2024-10-08 18:43:56.974341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.743 [2024-10-08 18:43:56.974394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.743 qpair failed and we were unable to recover it. 00:33:28.743 [2024-10-08 18:43:56.974572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.743 [2024-10-08 18:43:56.974597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.743 qpair failed and we were unable to recover it. 00:33:28.743 [2024-10-08 18:43:56.974833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.743 [2024-10-08 18:43:56.974883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.743 qpair failed and we were unable to recover it. 00:33:28.743 [2024-10-08 18:43:56.975133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.743 [2024-10-08 18:43:56.975182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.743 qpair failed and we were unable to recover it. 00:33:28.743 [2024-10-08 18:43:56.975376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.743 [2024-10-08 18:43:56.975436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.743 qpair failed and we were unable to recover it. 00:33:28.743 [2024-10-08 18:43:56.975638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.743 [2024-10-08 18:43:56.975688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.743 qpair failed and we were unable to recover it. 00:33:28.743 [2024-10-08 18:43:56.975845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.743 [2024-10-08 18:43:56.975894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.743 qpair failed and we were unable to recover it. 
00:33:28.748 [2024-10-08 18:43:57.020532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.748 [2024-10-08 18:43:57.020557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.748 qpair failed and we were unable to recover it. 00:33:28.748 [2024-10-08 18:43:57.020720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.748 [2024-10-08 18:43:57.020747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.748 qpair failed and we were unable to recover it. 00:33:28.748 [2024-10-08 18:43:57.020868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.748 [2024-10-08 18:43:57.020895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.748 qpair failed and we were unable to recover it. 00:33:28.748 [2024-10-08 18:43:57.021021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.748 [2024-10-08 18:43:57.021047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.748 qpair failed and we were unable to recover it. 00:33:28.748 [2024-10-08 18:43:57.021293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.748 [2024-10-08 18:43:57.021317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.748 qpair failed and we were unable to recover it. 00:33:28.748 [2024-10-08 18:43:57.021481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.748 [2024-10-08 18:43:57.021507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.748 qpair failed and we were unable to recover it. 00:33:28.748 [2024-10-08 18:43:57.021718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.748 [2024-10-08 18:43:57.021745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.748 qpair failed and we were unable to recover it. 00:33:28.748 [2024-10-08 18:43:57.021881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.748 [2024-10-08 18:43:57.021908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.748 qpair failed and we were unable to recover it. 00:33:28.748 [2024-10-08 18:43:57.022056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.748 [2024-10-08 18:43:57.022097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.748 qpair failed and we were unable to recover it. 00:33:28.748 [2024-10-08 18:43:57.022245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.748 [2024-10-08 18:43:57.022271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.748 qpair failed and we were unable to recover it. 
00:33:28.748 [2024-10-08 18:43:57.022358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.748 [2024-10-08 18:43:57.022384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.748 qpair failed and we were unable to recover it. 00:33:28.748 [2024-10-08 18:43:57.022513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.748 [2024-10-08 18:43:57.022538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.748 qpair failed and we were unable to recover it. 00:33:28.748 [2024-10-08 18:43:57.022690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.748 [2024-10-08 18:43:57.022716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.748 qpair failed and we were unable to recover it. 00:33:28.748 [2024-10-08 18:43:57.022823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.748 [2024-10-08 18:43:57.022848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.748 qpair failed and we were unable to recover it. 00:33:28.748 [2024-10-08 18:43:57.023039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.748 [2024-10-08 18:43:57.023064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.748 qpair failed and we were unable to recover it. 00:33:28.748 [2024-10-08 18:43:57.023256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.748 [2024-10-08 18:43:57.023281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.748 qpair failed and we were unable to recover it. 00:33:28.748 [2024-10-08 18:43:57.023466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.748 [2024-10-08 18:43:57.023490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.748 qpair failed and we were unable to recover it. 00:33:28.748 [2024-10-08 18:43:57.023731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.748 [2024-10-08 18:43:57.023783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.748 qpair failed and we were unable to recover it. 00:33:28.748 [2024-10-08 18:43:57.023971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.748 [2024-10-08 18:43:57.024024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.748 qpair failed and we were unable to recover it. 00:33:28.748 [2024-10-08 18:43:57.024188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.748 [2024-10-08 18:43:57.024247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.748 qpair failed and we were unable to recover it. 
00:33:28.748 [2024-10-08 18:43:57.024427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.748 [2024-10-08 18:43:57.024453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.748 qpair failed and we were unable to recover it. 00:33:28.749 [2024-10-08 18:43:57.024732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.749 [2024-10-08 18:43:57.024783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.749 qpair failed and we were unable to recover it. 00:33:28.749 [2024-10-08 18:43:57.024958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.749 [2024-10-08 18:43:57.025021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.749 qpair failed and we were unable to recover it. 00:33:28.749 [2024-10-08 18:43:57.025196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.749 [2024-10-08 18:43:57.025246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.749 qpair failed and we were unable to recover it. 00:33:28.749 [2024-10-08 18:43:57.025378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.749 [2024-10-08 18:43:57.025404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.749 qpair failed and we were unable to recover it. 00:33:28.749 [2024-10-08 18:43:57.025524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.749 [2024-10-08 18:43:57.025550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.749 qpair failed and we were unable to recover it. 00:33:28.749 [2024-10-08 18:43:57.025678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.749 [2024-10-08 18:43:57.025705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.749 qpair failed and we were unable to recover it. 00:33:28.749 [2024-10-08 18:43:57.025856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.749 [2024-10-08 18:43:57.025920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.749 qpair failed and we were unable to recover it. 00:33:28.749 [2024-10-08 18:43:57.026087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.749 [2024-10-08 18:43:57.026138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.749 qpair failed and we were unable to recover it. 00:33:28.749 [2024-10-08 18:43:57.026313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.749 [2024-10-08 18:43:57.026366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.749 qpair failed and we were unable to recover it. 
00:33:28.749 [2024-10-08 18:43:57.026509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.749 [2024-10-08 18:43:57.026536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.749 qpair failed and we were unable to recover it. 00:33:28.749 [2024-10-08 18:43:57.026658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.749 [2024-10-08 18:43:57.026685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.749 qpair failed and we were unable to recover it. 00:33:28.749 [2024-10-08 18:43:57.026829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.749 [2024-10-08 18:43:57.026880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.749 qpair failed and we were unable to recover it. 00:33:28.749 [2024-10-08 18:43:57.027011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.749 [2024-10-08 18:43:57.027062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.749 qpair failed and we were unable to recover it. 00:33:28.749 [2024-10-08 18:43:57.027234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.749 [2024-10-08 18:43:57.027283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.749 qpair failed and we were unable to recover it. 00:33:28.749 [2024-10-08 18:43:57.027377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.749 [2024-10-08 18:43:57.027404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.749 qpair failed and we were unable to recover it. 00:33:28.749 [2024-10-08 18:43:57.027556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.749 [2024-10-08 18:43:57.027583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.749 qpair failed and we were unable to recover it. 00:33:28.749 [2024-10-08 18:43:57.027724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.749 [2024-10-08 18:43:57.027777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.749 qpair failed and we were unable to recover it. 00:33:28.749 [2024-10-08 18:43:57.027947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.749 [2024-10-08 18:43:57.027972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.749 qpair failed and we were unable to recover it. 00:33:28.749 [2024-10-08 18:43:57.028076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.749 [2024-10-08 18:43:57.028104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.749 qpair failed and we were unable to recover it. 
00:33:28.749 [2024-10-08 18:43:57.028222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.749 [2024-10-08 18:43:57.028249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.749 qpair failed and we were unable to recover it. 00:33:28.749 [2024-10-08 18:43:57.028406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.749 [2024-10-08 18:43:57.028432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.749 qpair failed and we were unable to recover it. 00:33:28.749 [2024-10-08 18:43:57.028553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.749 [2024-10-08 18:43:57.028579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.749 qpair failed and we were unable to recover it. 00:33:28.749 [2024-10-08 18:43:57.028678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.749 [2024-10-08 18:43:57.028705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.749 qpair failed and we were unable to recover it. 00:33:28.749 [2024-10-08 18:43:57.028842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.749 [2024-10-08 18:43:57.028893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.749 qpair failed and we were unable to recover it. 00:33:28.749 [2024-10-08 18:43:57.029014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.749 [2024-10-08 18:43:57.029040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.749 qpair failed and we were unable to recover it. 00:33:28.749 [2024-10-08 18:43:57.029192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.749 [2024-10-08 18:43:57.029219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.749 qpair failed and we were unable to recover it. 00:33:28.749 [2024-10-08 18:43:57.029359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.749 [2024-10-08 18:43:57.029386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.749 qpair failed and we were unable to recover it. 00:33:28.749 [2024-10-08 18:43:57.029535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.749 [2024-10-08 18:43:57.029562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.749 qpair failed and we were unable to recover it. 00:33:28.749 [2024-10-08 18:43:57.029661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.749 [2024-10-08 18:43:57.029688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.749 qpair failed and we were unable to recover it. 
00:33:28.749 [2024-10-08 18:43:57.029839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.749 [2024-10-08 18:43:57.029891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.749 qpair failed and we were unable to recover it. 00:33:28.749 [2024-10-08 18:43:57.030026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.749 [2024-10-08 18:43:57.030086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.749 qpair failed and we were unable to recover it. 00:33:28.749 [2024-10-08 18:43:57.030238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.749 [2024-10-08 18:43:57.030265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.749 qpair failed and we were unable to recover it. 00:33:28.749 [2024-10-08 18:43:57.030416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.749 [2024-10-08 18:43:57.030443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.749 qpair failed and we were unable to recover it. 00:33:28.749 [2024-10-08 18:43:57.030561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.749 [2024-10-08 18:43:57.030587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.749 qpair failed and we were unable to recover it. 00:33:28.749 [2024-10-08 18:43:57.030716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.749 [2024-10-08 18:43:57.030773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.749 qpair failed and we were unable to recover it. 00:33:28.749 [2024-10-08 18:43:57.030899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.749 [2024-10-08 18:43:57.030926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.749 qpair failed and we were unable to recover it. 00:33:28.749 [2024-10-08 18:43:57.031075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.749 [2024-10-08 18:43:57.031102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.749 qpair failed and we were unable to recover it. 00:33:28.749 [2024-10-08 18:43:57.031251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.749 [2024-10-08 18:43:57.031281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.749 qpair failed and we were unable to recover it. 00:33:28.749 [2024-10-08 18:43:57.031423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.749 [2024-10-08 18:43:57.031449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.749 qpair failed and we were unable to recover it. 
00:33:28.750 [2024-10-08 18:43:57.031606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.750 [2024-10-08 18:43:57.031632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.750 qpair failed and we were unable to recover it. 00:33:28.750 [2024-10-08 18:43:57.031770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.750 [2024-10-08 18:43:57.031820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.750 qpair failed and we were unable to recover it. 00:33:28.750 [2024-10-08 18:43:57.031971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.750 [2024-10-08 18:43:57.032041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.750 qpair failed and we were unable to recover it. 00:33:28.750 [2024-10-08 18:43:57.032228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.750 [2024-10-08 18:43:57.032277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.750 qpair failed and we were unable to recover it. 00:33:28.750 [2024-10-08 18:43:57.032451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.750 [2024-10-08 18:43:57.032478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.750 qpair failed and we were unable to recover it. 00:33:28.750 [2024-10-08 18:43:57.032655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.750 [2024-10-08 18:43:57.032682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.750 qpair failed and we were unable to recover it. 00:33:28.750 [2024-10-08 18:43:57.032828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.750 [2024-10-08 18:43:57.032881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.750 qpair failed and we were unable to recover it. 00:33:28.750 [2024-10-08 18:43:57.033058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.750 [2024-10-08 18:43:57.033108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.750 qpair failed and we were unable to recover it. 00:33:28.750 [2024-10-08 18:43:57.033260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.750 [2024-10-08 18:43:57.033311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.750 qpair failed and we were unable to recover it. 00:33:28.750 [2024-10-08 18:43:57.033477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.750 [2024-10-08 18:43:57.033503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.750 qpair failed and we were unable to recover it. 
00:33:28.750 [2024-10-08 18:43:57.033661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.750 [2024-10-08 18:43:57.033700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.750 qpair failed and we were unable to recover it. 00:33:28.750 [2024-10-08 18:43:57.033880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.750 [2024-10-08 18:43:57.033930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.750 qpair failed and we were unable to recover it. 00:33:28.750 [2024-10-08 18:43:57.034092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.750 [2024-10-08 18:43:57.034145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.750 qpair failed and we were unable to recover it. 00:33:28.750 [2024-10-08 18:43:57.034309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.750 [2024-10-08 18:43:57.034365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.750 qpair failed and we were unable to recover it. 00:33:28.750 [2024-10-08 18:43:57.034460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.750 [2024-10-08 18:43:57.034487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.750 qpair failed and we were unable to recover it. 00:33:28.750 [2024-10-08 18:43:57.034637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.750 [2024-10-08 18:43:57.034671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.750 qpair failed and we were unable to recover it. 00:33:28.750 [2024-10-08 18:43:57.034774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.750 [2024-10-08 18:43:57.034800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.750 qpair failed and we were unable to recover it. 00:33:28.750 [2024-10-08 18:43:57.034930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.750 [2024-10-08 18:43:57.034995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.750 qpair failed and we were unable to recover it. 00:33:28.750 [2024-10-08 18:43:57.035128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.750 [2024-10-08 18:43:57.035179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.750 qpair failed and we were unable to recover it. 00:33:28.750 [2024-10-08 18:43:57.035301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.750 [2024-10-08 18:43:57.035328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.750 qpair failed and we were unable to recover it. 
00:33:28.750 [2024-10-08 18:43:57.035416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.750 [2024-10-08 18:43:57.035443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.750 qpair failed and we were unable to recover it. 00:33:28.750 [2024-10-08 18:43:57.035571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.750 [2024-10-08 18:43:57.035598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.750 qpair failed and we were unable to recover it. 00:33:28.750 [2024-10-08 18:43:57.035707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.750 [2024-10-08 18:43:57.035734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.750 qpair failed and we were unable to recover it. 00:33:28.750 [2024-10-08 18:43:57.035858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.750 [2024-10-08 18:43:57.035884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.750 qpair failed and we were unable to recover it. 00:33:28.750 [2024-10-08 18:43:57.036002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.750 [2024-10-08 18:43:57.036030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.750 qpair failed and we were unable to recover it. 00:33:28.750 [2024-10-08 18:43:57.036183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.750 [2024-10-08 18:43:57.036210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.750 qpair failed and we were unable to recover it. 00:33:28.750 [2024-10-08 18:43:57.036361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.750 [2024-10-08 18:43:57.036388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.750 qpair failed and we were unable to recover it. 00:33:28.750 [2024-10-08 18:43:57.036522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.750 [2024-10-08 18:43:57.036548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.750 qpair failed and we were unable to recover it. 00:33:28.750 [2024-10-08 18:43:57.036645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.750 [2024-10-08 18:43:57.036677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.750 qpair failed and we were unable to recover it. 00:33:28.750 [2024-10-08 18:43:57.036808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.750 [2024-10-08 18:43:57.036835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.750 qpair failed and we were unable to recover it. 
00:33:28.750 [2024-10-08 18:43:57.036988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.750 [2024-10-08 18:43:57.037021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.750 qpair failed and we were unable to recover it. 00:33:28.750 [2024-10-08 18:43:57.037182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.750 [2024-10-08 18:43:57.037209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.750 qpair failed and we were unable to recover it. 00:33:28.750 [2024-10-08 18:43:57.037326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.750 [2024-10-08 18:43:57.037352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.750 qpair failed and we were unable to recover it. 00:33:28.750 [2024-10-08 18:43:57.037513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.750 [2024-10-08 18:43:57.037540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.750 qpair failed and we were unable to recover it. 00:33:28.750 [2024-10-08 18:43:57.037662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.750 [2024-10-08 18:43:57.037689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.750 qpair failed and we were unable to recover it. 00:33:28.750 [2024-10-08 18:43:57.037838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.750 [2024-10-08 18:43:57.037902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.750 qpair failed and we were unable to recover it. 00:33:28.750 [2024-10-08 18:43:57.038025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.750 [2024-10-08 18:43:57.038051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.750 qpair failed and we were unable to recover it. 00:33:28.750 [2024-10-08 18:43:57.038253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.750 [2024-10-08 18:43:57.038280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.750 qpair failed and we were unable to recover it. 00:33:28.750 [2024-10-08 18:43:57.038377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.750 [2024-10-08 18:43:57.038408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.750 qpair failed and we were unable to recover it. 00:33:28.751 [2024-10-08 18:43:57.038509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.751 [2024-10-08 18:43:57.038536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.751 qpair failed and we were unable to recover it. 
00:33:28.751 [2024-10-08 18:43:57.038668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.751 [2024-10-08 18:43:57.038695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.751 qpair failed and we were unable to recover it. 00:33:28.751 [2024-10-08 18:43:57.038822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.751 [2024-10-08 18:43:57.038873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.751 qpair failed and we were unable to recover it. 00:33:28.751 [2024-10-08 18:43:57.039002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.751 [2024-10-08 18:43:57.039029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.751 qpair failed and we were unable to recover it. 00:33:28.751 [2024-10-08 18:43:57.039154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.751 [2024-10-08 18:43:57.039181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.751 qpair failed and we were unable to recover it. 00:33:28.751 [2024-10-08 18:43:57.039293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.751 [2024-10-08 18:43:57.039334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.751 qpair failed and we were unable to recover it. 00:33:28.751 [2024-10-08 18:43:57.039567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.751 [2024-10-08 18:43:57.039593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.751 qpair failed and we were unable to recover it. 00:33:28.751 [2024-10-08 18:43:57.039727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.751 [2024-10-08 18:43:57.039787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.751 qpair failed and we were unable to recover it. 00:33:28.751 [2024-10-08 18:43:57.039915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.751 [2024-10-08 18:43:57.039965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.751 qpair failed and we were unable to recover it. 00:33:28.751 [2024-10-08 18:43:57.040132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.751 [2024-10-08 18:43:57.040158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.751 qpair failed and we were unable to recover it. 00:33:28.751 [2024-10-08 18:43:57.040325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.751 [2024-10-08 18:43:57.040352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.751 qpair failed and we were unable to recover it. 
00:33:28.751 [2024-10-08 18:43:57.040599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.751 [2024-10-08 18:43:57.040626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.751 qpair failed and we were unable to recover it. 00:33:28.751 [2024-10-08 18:43:57.040800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.751 [2024-10-08 18:43:57.040853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.751 qpair failed and we were unable to recover it. 00:33:28.751 [2024-10-08 18:43:57.041026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.751 [2024-10-08 18:43:57.041087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.751 qpair failed and we were unable to recover it. 00:33:28.751 [2024-10-08 18:43:57.041241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.751 [2024-10-08 18:43:57.041294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.751 qpair failed and we were unable to recover it. 00:33:28.751 [2024-10-08 18:43:57.041425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.751 [2024-10-08 18:43:57.041451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.751 qpair failed and we were unable to recover it. 00:33:28.751 [2024-10-08 18:43:57.041585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.751 [2024-10-08 18:43:57.041611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.751 qpair failed and we were unable to recover it. 00:33:28.751 [2024-10-08 18:43:57.041733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.751 [2024-10-08 18:43:57.041759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.751 qpair failed and we were unable to recover it. 00:33:28.751 [2024-10-08 18:43:57.041886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.751 [2024-10-08 18:43:57.041913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.751 qpair failed and we were unable to recover it. 00:33:28.751 [2024-10-08 18:43:57.042039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.751 [2024-10-08 18:43:57.042066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.751 qpair failed and we were unable to recover it. 00:33:28.751 [2024-10-08 18:43:57.042292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.751 [2024-10-08 18:43:57.042318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.751 qpair failed and we were unable to recover it. 
00:33:28.751 [2024-10-08 18:43:57.042468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.751 [2024-10-08 18:43:57.042495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.751 qpair failed and we were unable to recover it. 00:33:28.751 [2024-10-08 18:43:57.042632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.751 [2024-10-08 18:43:57.042664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.751 qpair failed and we were unable to recover it. 00:33:28.751 [2024-10-08 18:43:57.042795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.751 [2024-10-08 18:43:57.042856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.751 qpair failed and we were unable to recover it. 00:33:28.751 [2024-10-08 18:43:57.043025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.751 [2024-10-08 18:43:57.043052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.751 qpair failed and we were unable to recover it. 00:33:28.751 [2024-10-08 18:43:57.043190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.751 [2024-10-08 18:43:57.043241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.751 qpair failed and we were unable to recover it. 00:33:28.751 [2024-10-08 18:43:57.043345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.751 [2024-10-08 18:43:57.043373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.751 qpair failed and we were unable to recover it. 00:33:28.751 [2024-10-08 18:43:57.043517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.751 [2024-10-08 18:43:57.043552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.751 qpair failed and we were unable to recover it. 00:33:28.751 [2024-10-08 18:43:57.043727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.751 [2024-10-08 18:43:57.043753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.751 qpair failed and we were unable to recover it. 00:33:28.751 [2024-10-08 18:43:57.043855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.751 [2024-10-08 18:43:57.043882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.751 qpair failed and we were unable to recover it. 00:33:28.751 [2024-10-08 18:43:57.044057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.751 [2024-10-08 18:43:57.044084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.751 qpair failed and we were unable to recover it. 
00:33:28.751 [2024-10-08 18:43:57.044221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.751 [2024-10-08 18:43:57.044250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.751 qpair failed and we were unable to recover it. 00:33:28.751 [2024-10-08 18:43:57.044405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.751 [2024-10-08 18:43:57.044431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.751 qpair failed and we were unable to recover it. 00:33:28.751 [2024-10-08 18:43:57.044561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.751 [2024-10-08 18:43:57.044601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.751 qpair failed and we were unable to recover it. 00:33:28.751 [2024-10-08 18:43:57.044713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.751 [2024-10-08 18:43:57.044740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.751 qpair failed and we were unable to recover it. 00:33:28.751 [2024-10-08 18:43:57.044858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.751 [2024-10-08 18:43:57.044885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.751 qpair failed and we were unable to recover it. 00:33:28.751 [2024-10-08 18:43:57.045010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.751 [2024-10-08 18:43:57.045037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.751 qpair failed and we were unable to recover it. 00:33:28.751 [2024-10-08 18:43:57.045168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.751 [2024-10-08 18:43:57.045194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.751 qpair failed and we were unable to recover it. 00:33:28.751 [2024-10-08 18:43:57.045330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.751 [2024-10-08 18:43:57.045357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.751 qpair failed and we were unable to recover it. 00:33:28.751 [2024-10-08 18:43:57.045485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.752 [2024-10-08 18:43:57.045516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.752 qpair failed and we were unable to recover it. 00:33:28.752 [2024-10-08 18:43:57.045648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.752 [2024-10-08 18:43:57.045680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.752 qpair failed and we were unable to recover it. 
00:33:28.752 [2024-10-08 18:43:57.045829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:33:28.752 [2024-10-08 18:43:57.045856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 
00:33:28.752 qpair failed and we were unable to recover it. 
00:33:28.754 [2024-10-08 18:43:57.061929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:33:28.754 [2024-10-08 18:43:57.061987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18dc000b90 with addr=10.0.0.2, port=4420 
00:33:28.754 qpair failed and we were unable to recover it. 
[... the same three-line error sequence repeats continuously from 18:43:57.045 through 18:43:57.084 (console timestamps 00:33:28.752-00:33:28.757), switching between tqpair=0x7f18d0000b90 and tqpair=0x7f18dc000b90, always for addr=10.0.0.2, port=4420; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:33:28.757 [2024-10-08 18:43:57.084984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.757 [2024-10-08 18:43:57.085010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.757 qpair failed and we were unable to recover it. 00:33:28.757 [2024-10-08 18:43:57.085112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.757 [2024-10-08 18:43:57.085168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.757 qpair failed and we were unable to recover it. 00:33:28.757 [2024-10-08 18:43:57.085295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.757 [2024-10-08 18:43:57.085322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.757 qpair failed and we were unable to recover it. 00:33:28.757 [2024-10-08 18:43:57.085445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.757 [2024-10-08 18:43:57.085471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.757 qpair failed and we were unable to recover it. 00:33:28.757 [2024-10-08 18:43:57.085622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.757 [2024-10-08 18:43:57.085751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.757 qpair failed and we were unable to recover it. 00:33:28.757 [2024-10-08 18:43:57.085875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.757 [2024-10-08 18:43:57.085940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.757 qpair failed and we were unable to recover it. 00:33:28.757 [2024-10-08 18:43:57.086079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.757 [2024-10-08 18:43:57.086105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.757 qpair failed and we were unable to recover it. 00:33:28.757 [2024-10-08 18:43:57.086240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.757 [2024-10-08 18:43:57.086266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.757 qpair failed and we were unable to recover it. 00:33:28.757 [2024-10-08 18:43:57.086479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.757 [2024-10-08 18:43:57.086505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.757 qpair failed and we were unable to recover it. 00:33:28.757 [2024-10-08 18:43:57.086663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.757 [2024-10-08 18:43:57.086690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.757 qpair failed and we were unable to recover it. 
00:33:28.757 [2024-10-08 18:43:57.086796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.757 [2024-10-08 18:43:57.086822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.757 qpair failed and we were unable to recover it. 00:33:28.757 [2024-10-08 18:43:57.086997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.757 [2024-10-08 18:43:57.087024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.757 qpair failed and we were unable to recover it. 00:33:28.757 [2024-10-08 18:43:57.087139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.757 [2024-10-08 18:43:57.087195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.757 qpair failed and we were unable to recover it. 00:33:28.757 [2024-10-08 18:43:57.087354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.757 [2024-10-08 18:43:57.087380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.757 qpair failed and we were unable to recover it. 00:33:28.757 [2024-10-08 18:43:57.087464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.757 [2024-10-08 18:43:57.087490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.757 qpair failed and we were unable to recover it. 00:33:28.758 [2024-10-08 18:43:57.087662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.758 [2024-10-08 18:43:57.087689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.758 qpair failed and we were unable to recover it. 00:33:28.758 [2024-10-08 18:43:57.087823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.758 [2024-10-08 18:43:57.087872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.758 qpair failed and we were unable to recover it. 00:33:28.758 [2024-10-08 18:43:57.088028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.758 [2024-10-08 18:43:57.088077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.758 qpair failed and we were unable to recover it. 00:33:28.758 [2024-10-08 18:43:57.088195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.758 [2024-10-08 18:43:57.088251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.758 qpair failed and we were unable to recover it. 00:33:28.758 [2024-10-08 18:43:57.088392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.758 [2024-10-08 18:43:57.088418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.758 qpair failed and we were unable to recover it. 
00:33:28.758 [2024-10-08 18:43:57.088577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.758 [2024-10-08 18:43:57.088603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.758 qpair failed and we were unable to recover it. 00:33:28.758 [2024-10-08 18:43:57.088737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.758 [2024-10-08 18:43:57.088795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.758 qpair failed and we were unable to recover it. 00:33:28.758 [2024-10-08 18:43:57.088919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.758 [2024-10-08 18:43:57.088973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.758 qpair failed and we were unable to recover it. 00:33:28.758 [2024-10-08 18:43:57.089130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.758 [2024-10-08 18:43:57.089156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.758 qpair failed and we were unable to recover it. 00:33:28.758 [2024-10-08 18:43:57.089288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.758 [2024-10-08 18:43:57.089321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.758 qpair failed and we were unable to recover it. 00:33:28.758 [2024-10-08 18:43:57.089472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.758 [2024-10-08 18:43:57.089498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.758 qpair failed and we were unable to recover it. 00:33:28.758 [2024-10-08 18:43:57.089633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.758 [2024-10-08 18:43:57.089714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.758 qpair failed and we were unable to recover it. 00:33:28.758 [2024-10-08 18:43:57.089816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.758 [2024-10-08 18:43:57.089842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.758 qpair failed and we were unable to recover it. 00:33:28.758 [2024-10-08 18:43:57.090036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.758 [2024-10-08 18:43:57.090067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.758 qpair failed and we were unable to recover it. 00:33:28.758 [2024-10-08 18:43:57.090199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.758 [2024-10-08 18:43:57.090226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.758 qpair failed and we were unable to recover it. 
00:33:28.758 [2024-10-08 18:43:57.090358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.758 [2024-10-08 18:43:57.090384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.758 qpair failed and we were unable to recover it. 00:33:28.758 [2024-10-08 18:43:57.090540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.758 [2024-10-08 18:43:57.090567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.758 qpair failed and we were unable to recover it. 00:33:28.758 [2024-10-08 18:43:57.090737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.758 [2024-10-08 18:43:57.090791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.758 qpair failed and we were unable to recover it. 00:33:28.758 [2024-10-08 18:43:57.090907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.758 [2024-10-08 18:43:57.090933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.758 qpair failed and we were unable to recover it. 00:33:28.758 [2024-10-08 18:43:57.091025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.758 [2024-10-08 18:43:57.091051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.758 qpair failed and we were unable to recover it. 00:33:28.758 [2024-10-08 18:43:57.091156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.758 [2024-10-08 18:43:57.091182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.758 qpair failed and we were unable to recover it. 00:33:28.758 [2024-10-08 18:43:57.091349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.758 [2024-10-08 18:43:57.091375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.758 qpair failed and we were unable to recover it. 00:33:28.758 [2024-10-08 18:43:57.091499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.758 [2024-10-08 18:43:57.091525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.758 qpair failed and we were unable to recover it. 00:33:28.758 [2024-10-08 18:43:57.091689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.758 [2024-10-08 18:43:57.091716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.758 qpair failed and we were unable to recover it. 00:33:28.758 [2024-10-08 18:43:57.091849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.758 [2024-10-08 18:43:57.091875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.758 qpair failed and we were unable to recover it. 
00:33:28.758 [2024-10-08 18:43:57.091993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.758 [2024-10-08 18:43:57.092020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.758 qpair failed and we were unable to recover it. 00:33:28.758 [2024-10-08 18:43:57.092120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.758 [2024-10-08 18:43:57.092146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.758 qpair failed and we were unable to recover it. 00:33:28.758 [2024-10-08 18:43:57.092274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.758 [2024-10-08 18:43:57.092300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.758 qpair failed and we were unable to recover it. 00:33:28.758 [2024-10-08 18:43:57.092401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.758 [2024-10-08 18:43:57.092427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.758 qpair failed and we were unable to recover it. 00:33:28.758 [2024-10-08 18:43:57.092567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.758 [2024-10-08 18:43:57.092594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.758 qpair failed and we were unable to recover it. 00:33:28.758 [2024-10-08 18:43:57.092750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.758 [2024-10-08 18:43:57.092814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.758 qpair failed and we were unable to recover it. 00:33:28.758 [2024-10-08 18:43:57.092942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.758 [2024-10-08 18:43:57.092992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.758 qpair failed and we were unable to recover it. 00:33:28.758 [2024-10-08 18:43:57.093158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.758 [2024-10-08 18:43:57.093185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.758 qpair failed and we were unable to recover it. 00:33:28.758 [2024-10-08 18:43:57.093322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.758 [2024-10-08 18:43:57.093348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.758 qpair failed and we were unable to recover it. 00:33:28.758 [2024-10-08 18:43:57.093479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.758 [2024-10-08 18:43:57.093506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.758 qpair failed and we were unable to recover it. 
00:33:28.758 [2024-10-08 18:43:57.093687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.758 [2024-10-08 18:43:57.093734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.758 qpair failed and we were unable to recover it. 00:33:28.758 [2024-10-08 18:43:57.093871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.758 [2024-10-08 18:43:57.093922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.758 qpair failed and we were unable to recover it. 00:33:28.758 [2024-10-08 18:43:57.094064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.758 [2024-10-08 18:43:57.094114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.758 qpair failed and we were unable to recover it. 00:33:28.758 [2024-10-08 18:43:57.094280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.758 [2024-10-08 18:43:57.094306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.758 qpair failed and we were unable to recover it. 00:33:28.758 [2024-10-08 18:43:57.094457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.759 [2024-10-08 18:43:57.094484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.759 qpair failed and we were unable to recover it. 00:33:28.759 [2024-10-08 18:43:57.094615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.759 [2024-10-08 18:43:57.094642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.759 qpair failed and we were unable to recover it. 00:33:28.759 [2024-10-08 18:43:57.094789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.759 [2024-10-08 18:43:57.094842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.759 qpair failed and we were unable to recover it. 00:33:28.759 [2024-10-08 18:43:57.094974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.759 [2024-10-08 18:43:57.095029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.759 qpair failed and we were unable to recover it. 00:33:28.759 [2024-10-08 18:43:57.095174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.759 [2024-10-08 18:43:57.095231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.759 qpair failed and we were unable to recover it. 00:33:28.759 [2024-10-08 18:43:57.095376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.759 [2024-10-08 18:43:57.095403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.759 qpair failed and we were unable to recover it. 
00:33:28.759 [2024-10-08 18:43:57.095560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.759 [2024-10-08 18:43:57.095587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.759 qpair failed and we were unable to recover it. 00:33:28.759 [2024-10-08 18:43:57.095719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.759 [2024-10-08 18:43:57.095788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.759 qpair failed and we were unable to recover it. 00:33:28.759 [2024-10-08 18:43:57.095922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.759 [2024-10-08 18:43:57.095976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.759 qpair failed and we were unable to recover it. 00:33:28.759 [2024-10-08 18:43:57.096136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.759 [2024-10-08 18:43:57.096162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.759 qpair failed and we were unable to recover it. 00:33:28.759 [2024-10-08 18:43:57.096294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.759 [2024-10-08 18:43:57.096321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.759 qpair failed and we were unable to recover it. 00:33:28.759 [2024-10-08 18:43:57.096446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.759 [2024-10-08 18:43:57.096472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.759 qpair failed and we were unable to recover it. 00:33:28.759 [2024-10-08 18:43:57.096640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.759 [2024-10-08 18:43:57.096672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.759 qpair failed and we were unable to recover it. 00:33:28.759 [2024-10-08 18:43:57.096792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.759 [2024-10-08 18:43:57.096849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.759 qpair failed and we were unable to recover it. 00:33:28.759 [2024-10-08 18:43:57.097057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.759 [2024-10-08 18:43:57.097107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.759 qpair failed and we were unable to recover it. 00:33:28.759 [2024-10-08 18:43:57.097284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.759 [2024-10-08 18:43:57.097343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.759 qpair failed and we were unable to recover it. 
00:33:28.759 [2024-10-08 18:43:57.097562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.759 [2024-10-08 18:43:57.097588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.759 qpair failed and we were unable to recover it. 00:33:28.759 [2024-10-08 18:43:57.097782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.759 [2024-10-08 18:43:57.097835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.759 qpair failed and we were unable to recover it. 00:33:28.759 [2024-10-08 18:43:57.098009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.759 [2024-10-08 18:43:57.098058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.759 qpair failed and we were unable to recover it. 00:33:28.759 [2024-10-08 18:43:57.098277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.759 [2024-10-08 18:43:57.098337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.759 qpair failed and we were unable to recover it. 00:33:28.759 [2024-10-08 18:43:57.098484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.759 [2024-10-08 18:43:57.098510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.759 qpair failed and we were unable to recover it. 00:33:28.759 [2024-10-08 18:43:57.098714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.759 [2024-10-08 18:43:57.098779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.759 qpair failed and we were unable to recover it. 00:33:28.759 [2024-10-08 18:43:57.098903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.759 [2024-10-08 18:43:57.098961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.759 qpair failed and we were unable to recover it. 00:33:28.759 [2024-10-08 18:43:57.099120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.759 [2024-10-08 18:43:57.099174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.759 qpair failed and we were unable to recover it. 00:33:28.759 [2024-10-08 18:43:57.099329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.759 [2024-10-08 18:43:57.099358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.759 qpair failed and we were unable to recover it. 00:33:28.759 [2024-10-08 18:43:57.099461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.759 [2024-10-08 18:43:57.099487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.759 qpair failed and we were unable to recover it. 
00:33:28.759 [2024-10-08 18:43:57.099658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.759 [2024-10-08 18:43:57.099685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.759 qpair failed and we were unable to recover it. 00:33:28.759 [2024-10-08 18:43:57.099832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.759 [2024-10-08 18:43:57.099878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.759 qpair failed and we were unable to recover it. 00:33:28.759 [2024-10-08 18:43:57.100033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.759 [2024-10-08 18:43:57.100085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.759 qpair failed and we were unable to recover it. 00:33:28.759 [2024-10-08 18:43:57.100282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.759 [2024-10-08 18:43:57.100308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.759 qpair failed and we were unable to recover it. 00:33:28.759 [2024-10-08 18:43:57.100474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.759 [2024-10-08 18:43:57.100504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.759 qpair failed and we were unable to recover it. 00:33:28.759 [2024-10-08 18:43:57.100629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.759 [2024-10-08 18:43:57.100662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.759 qpair failed and we were unable to recover it. 00:33:28.759 [2024-10-08 18:43:57.100762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.759 [2024-10-08 18:43:57.100788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.759 qpair failed and we were unable to recover it. 00:33:28.759 [2024-10-08 18:43:57.100899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.759 [2024-10-08 18:43:57.100925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.759 qpair failed and we were unable to recover it. 00:33:28.759 [2024-10-08 18:43:57.101031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.759 [2024-10-08 18:43:57.101057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.759 qpair failed and we were unable to recover it. 00:33:28.759 [2024-10-08 18:43:57.101179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.759 [2024-10-08 18:43:57.101205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.759 qpair failed and we were unable to recover it. 
00:33:28.759 [2024-10-08 18:43:57.101364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.759 [2024-10-08 18:43:57.101390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.759 qpair failed and we were unable to recover it. 00:33:28.759 [2024-10-08 18:43:57.101477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.759 [2024-10-08 18:43:57.101503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.759 qpair failed and we were unable to recover it. 00:33:28.759 [2024-10-08 18:43:57.101642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.759 [2024-10-08 18:43:57.101678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.759 qpair failed and we were unable to recover it. 00:33:28.759 [2024-10-08 18:43:57.101786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.760 [2024-10-08 18:43:57.101812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.760 qpair failed and we were unable to recover it. 00:33:28.760 [2024-10-08 18:43:57.101952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.760 [2024-10-08 18:43:57.101979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.760 qpair failed and we were unable to recover it. 00:33:28.760 [2024-10-08 18:43:57.102128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.760 [2024-10-08 18:43:57.102159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.760 qpair failed and we were unable to recover it. 00:33:28.760 [2024-10-08 18:43:57.102284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.760 [2024-10-08 18:43:57.102310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.760 qpair failed and we were unable to recover it. 00:33:28.760 [2024-10-08 18:43:57.102486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.760 [2024-10-08 18:43:57.102512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.760 qpair failed and we were unable to recover it. 00:33:28.760 [2024-10-08 18:43:57.102605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.760 [2024-10-08 18:43:57.102631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.760 qpair failed and we were unable to recover it. 00:33:28.760 [2024-10-08 18:43:57.102772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.760 [2024-10-08 18:43:57.102827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.760 qpair failed and we were unable to recover it. 
00:33:28.760 [2024-10-08 18:43:57.102925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.760 [2024-10-08 18:43:57.102952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.760 qpair failed and we were unable to recover it. 00:33:28.760 [2024-10-08 18:43:57.103083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.760 [2024-10-08 18:43:57.103109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.760 qpair failed and we were unable to recover it. 00:33:28.760 [2024-10-08 18:43:57.103245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.760 [2024-10-08 18:43:57.103272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.760 qpair failed and we were unable to recover it. 00:33:28.760 [2024-10-08 18:43:57.103431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.760 [2024-10-08 18:43:57.103457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.760 qpair failed and we were unable to recover it. 00:33:28.760 [2024-10-08 18:43:57.103624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.760 [2024-10-08 18:43:57.103664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.760 qpair failed and we were unable to recover it. 00:33:28.760 [2024-10-08 18:43:57.103799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.760 [2024-10-08 18:43:57.103851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.760 qpair failed and we were unable to recover it. 00:33:28.760 [2024-10-08 18:43:57.103973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.760 [2024-10-08 18:43:57.103999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.760 qpair failed and we were unable to recover it. 00:33:28.760 [2024-10-08 18:43:57.104155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.760 [2024-10-08 18:43:57.104181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.760 qpair failed and we were unable to recover it. 00:33:28.760 [2024-10-08 18:43:57.104320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.760 [2024-10-08 18:43:57.104346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.760 qpair failed and we were unable to recover it. 00:33:28.760 [2024-10-08 18:43:57.104493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.760 [2024-10-08 18:43:57.104519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.760 qpair failed and we were unable to recover it. 
00:33:28.760 [2024-10-08 18:43:57.104645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.760 [2024-10-08 18:43:57.104681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.760 qpair failed and we were unable to recover it. 00:33:28.760 [2024-10-08 18:43:57.104808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.760 [2024-10-08 18:43:57.104834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.760 qpair failed and we were unable to recover it. 00:33:28.760 [2024-10-08 18:43:57.104985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.760 [2024-10-08 18:43:57.105011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.760 qpair failed and we were unable to recover it. 00:33:28.760 [2024-10-08 18:43:57.105114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.760 [2024-10-08 18:43:57.105140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.760 qpair failed and we were unable to recover it. 00:33:28.760 [2024-10-08 18:43:57.105251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.760 [2024-10-08 18:43:57.105277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.760 qpair failed and we were unable to recover it. 00:33:28.760 [2024-10-08 18:43:57.105435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.760 [2024-10-08 18:43:57.105461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.760 qpair failed and we were unable to recover it. 00:33:28.760 [2024-10-08 18:43:57.105671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.760 [2024-10-08 18:43:57.105699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.760 qpair failed and we were unable to recover it. 00:33:28.760 [2024-10-08 18:43:57.105811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.760 [2024-10-08 18:43:57.105865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.760 qpair failed and we were unable to recover it. 00:33:28.760 [2024-10-08 18:43:57.105997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.760 [2024-10-08 18:43:57.106053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.760 qpair failed and we were unable to recover it. 00:33:28.760 [2024-10-08 18:43:57.106177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.760 [2024-10-08 18:43:57.106231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.760 qpair failed and we were unable to recover it. 
00:33:28.760 [2024-10-08 18:43:57.106456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.760 [2024-10-08 18:43:57.106483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.760 qpair failed and we were unable to recover it. 00:33:28.760 [2024-10-08 18:43:57.106658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.760 [2024-10-08 18:43:57.106684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.760 qpair failed and we were unable to recover it. 00:33:28.760 [2024-10-08 18:43:57.106826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.760 [2024-10-08 18:43:57.106881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.760 qpair failed and we were unable to recover it. 00:33:28.760 [2024-10-08 18:43:57.107052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.760 [2024-10-08 18:43:57.107105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.760 qpair failed and we were unable to recover it. 00:33:28.760 [2024-10-08 18:43:57.107264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.760 [2024-10-08 18:43:57.107314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.760 qpair failed and we were unable to recover it. 00:33:28.760 [2024-10-08 18:43:57.107464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.760 [2024-10-08 18:43:57.107490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.760 qpair failed and we were unable to recover it. 00:33:28.760 [2024-10-08 18:43:57.107604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.760 [2024-10-08 18:43:57.107630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.760 qpair failed and we were unable to recover it. 00:33:28.760 [2024-10-08 18:43:57.107738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.760 [2024-10-08 18:43:57.107765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.760 qpair failed and we were unable to recover it. 00:33:28.760 [2024-10-08 18:43:57.107883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.760 [2024-10-08 18:43:57.107909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.760 qpair failed and we were unable to recover it. 00:33:28.760 [2024-10-08 18:43:57.108026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.760 [2024-10-08 18:43:57.108052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.760 qpair failed and we were unable to recover it. 
00:33:28.760 [2024-10-08 18:43:57.108159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.760 [2024-10-08 18:43:57.108186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420
00:33:28.760 qpair failed and we were unable to recover it.
00:33:28.760-00:33:28.766 [... the same three-message failure pattern — connect() failed, errno = 111; sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats back-to-back with only the microsecond timestamps changing, from 18:43:57.108349 through 18:43:57.145007 ...]
00:33:28.766 [2024-10-08 18:43:57.145205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.766 [2024-10-08 18:43:57.145257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.766 qpair failed and we were unable to recover it. 00:33:28.766 [2024-10-08 18:43:57.145402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.766 [2024-10-08 18:43:57.145427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.766 qpair failed and we were unable to recover it. 00:33:28.766 [2024-10-08 18:43:57.145611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.766 [2024-10-08 18:43:57.145636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.766 qpair failed and we were unable to recover it. 00:33:28.766 [2024-10-08 18:43:57.145789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.766 [2024-10-08 18:43:57.145841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.766 qpair failed and we were unable to recover it. 00:33:28.766 [2024-10-08 18:43:57.145985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.766 [2024-10-08 18:43:57.146039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.766 qpair failed and we were unable to recover it. 00:33:28.766 [2024-10-08 18:43:57.146208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.766 [2024-10-08 18:43:57.146233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.766 qpair failed and we were unable to recover it. 00:33:28.766 [2024-10-08 18:43:57.146402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.766 [2024-10-08 18:43:57.146426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.766 qpair failed and we were unable to recover it. 00:33:28.766 [2024-10-08 18:43:57.146561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.766 [2024-10-08 18:43:57.146600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.766 qpair failed and we were unable to recover it. 00:33:28.766 [2024-10-08 18:43:57.146724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.766 [2024-10-08 18:43:57.146751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.766 qpair failed and we were unable to recover it. 00:33:28.766 [2024-10-08 18:43:57.146850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.766 [2024-10-08 18:43:57.146877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.766 qpair failed and we were unable to recover it. 
00:33:28.766 [2024-10-08 18:43:57.147009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.766 [2024-10-08 18:43:57.147034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.766 qpair failed and we were unable to recover it. 00:33:28.766 [2024-10-08 18:43:57.147182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.766 [2024-10-08 18:43:57.147222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.766 qpair failed and we were unable to recover it. 00:33:28.766 [2024-10-08 18:43:57.147369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.766 [2024-10-08 18:43:57.147394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.766 qpair failed and we were unable to recover it. 00:33:28.766 [2024-10-08 18:43:57.147541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.766 [2024-10-08 18:43:57.147566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.766 qpair failed and we were unable to recover it. 00:33:28.766 [2024-10-08 18:43:57.147711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.766 [2024-10-08 18:43:57.147738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.766 qpair failed and we were unable to recover it. 00:33:28.766 [2024-10-08 18:43:57.147845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.766 [2024-10-08 18:43:57.147871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.766 qpair failed and we were unable to recover it. 00:33:28.766 [2024-10-08 18:43:57.147982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.766 [2024-10-08 18:43:57.148007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.766 qpair failed and we were unable to recover it. 00:33:28.766 [2024-10-08 18:43:57.148134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.766 [2024-10-08 18:43:57.148159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.766 qpair failed and we were unable to recover it. 00:33:28.766 [2024-10-08 18:43:57.148304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.766 [2024-10-08 18:43:57.148345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.766 qpair failed and we were unable to recover it. 00:33:28.766 [2024-10-08 18:43:57.148501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.766 [2024-10-08 18:43:57.148526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.766 qpair failed and we were unable to recover it. 
00:33:28.766 [2024-10-08 18:43:57.148666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.766 [2024-10-08 18:43:57.148692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.766 qpair failed and we were unable to recover it. 00:33:28.767 [2024-10-08 18:43:57.148817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.767 [2024-10-08 18:43:57.148866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.767 qpair failed and we were unable to recover it. 00:33:28.767 [2024-10-08 18:43:57.149070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.767 [2024-10-08 18:43:57.149119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.767 qpair failed and we were unable to recover it. 00:33:28.767 [2024-10-08 18:43:57.149304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.767 [2024-10-08 18:43:57.149329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.767 qpair failed and we were unable to recover it. 00:33:28.767 [2024-10-08 18:43:57.149515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.767 [2024-10-08 18:43:57.149539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.767 qpair failed and we were unable to recover it. 00:33:28.767 [2024-10-08 18:43:57.149691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.767 [2024-10-08 18:43:57.149760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.767 qpair failed and we were unable to recover it. 00:33:28.767 [2024-10-08 18:43:57.149929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.767 [2024-10-08 18:43:57.149992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.767 qpair failed and we were unable to recover it. 00:33:28.767 [2024-10-08 18:43:57.150215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.767 [2024-10-08 18:43:57.150264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.767 qpair failed and we were unable to recover it. 00:33:28.767 [2024-10-08 18:43:57.150421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.767 [2024-10-08 18:43:57.150457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.767 qpair failed and we were unable to recover it. 00:33:28.767 [2024-10-08 18:43:57.150599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.767 [2024-10-08 18:43:57.150638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.767 qpair failed and we were unable to recover it. 
00:33:28.767 [2024-10-08 18:43:57.150789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.767 [2024-10-08 18:43:57.150848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.767 qpair failed and we were unable to recover it. 00:33:28.767 [2024-10-08 18:43:57.151022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.767 [2024-10-08 18:43:57.151070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.767 qpair failed and we were unable to recover it. 00:33:28.767 [2024-10-08 18:43:57.151252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.767 [2024-10-08 18:43:57.151277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.767 qpair failed and we were unable to recover it. 00:33:28.767 [2024-10-08 18:43:57.151393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.767 [2024-10-08 18:43:57.151432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.767 qpair failed and we were unable to recover it. 00:33:28.767 [2024-10-08 18:43:57.151575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.767 [2024-10-08 18:43:57.151600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.767 qpair failed and we were unable to recover it. 00:33:28.767 [2024-10-08 18:43:57.151728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.767 [2024-10-08 18:43:57.151760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.767 qpair failed and we were unable to recover it. 00:33:28.767 [2024-10-08 18:43:57.151857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.767 [2024-10-08 18:43:57.151883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.767 qpair failed and we were unable to recover it. 00:33:28.767 [2024-10-08 18:43:57.152021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.767 [2024-10-08 18:43:57.152047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.767 qpair failed and we were unable to recover it. 00:33:28.767 [2024-10-08 18:43:57.152206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.767 [2024-10-08 18:43:57.152231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.767 qpair failed and we were unable to recover it. 00:33:28.767 [2024-10-08 18:43:57.152420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.767 [2024-10-08 18:43:57.152460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.767 qpair failed and we were unable to recover it. 
00:33:28.767 [2024-10-08 18:43:57.152591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.767 [2024-10-08 18:43:57.152615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.767 qpair failed and we were unable to recover it. 00:33:28.767 [2024-10-08 18:43:57.152740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.767 [2024-10-08 18:43:57.152767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.767 qpair failed and we were unable to recover it. 00:33:28.767 [2024-10-08 18:43:57.152892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.767 [2024-10-08 18:43:57.152917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.767 qpair failed and we were unable to recover it. 00:33:28.767 [2024-10-08 18:43:57.153151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.767 [2024-10-08 18:43:57.153175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.767 qpair failed and we were unable to recover it. 00:33:28.767 [2024-10-08 18:43:57.153431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.767 [2024-10-08 18:43:57.153456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.767 qpair failed and we were unable to recover it. 00:33:28.767 [2024-10-08 18:43:57.153656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.767 [2024-10-08 18:43:57.153682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.767 qpair failed and we were unable to recover it. 00:33:28.767 [2024-10-08 18:43:57.153821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.767 [2024-10-08 18:43:57.153873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.767 qpair failed and we were unable to recover it. 00:33:28.767 [2024-10-08 18:43:57.154034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.767 [2024-10-08 18:43:57.154076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.767 qpair failed and we were unable to recover it. 00:33:28.767 [2024-10-08 18:43:57.154258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.767 [2024-10-08 18:43:57.154309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.767 qpair failed and we were unable to recover it. 00:33:28.767 [2024-10-08 18:43:57.154562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.767 [2024-10-08 18:43:57.154587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.767 qpair failed and we were unable to recover it. 
00:33:28.767 [2024-10-08 18:43:57.154791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.767 [2024-10-08 18:43:57.154840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.767 qpair failed and we were unable to recover it. 00:33:28.767 [2024-10-08 18:43:57.154985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.767 [2024-10-08 18:43:57.155033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.767 qpair failed and we were unable to recover it. 00:33:28.767 [2024-10-08 18:43:57.155190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.767 [2024-10-08 18:43:57.155225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.767 qpair failed and we were unable to recover it. 00:33:28.767 [2024-10-08 18:43:57.155395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.767 [2024-10-08 18:43:57.155449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.767 qpair failed and we were unable to recover it. 00:33:28.767 [2024-10-08 18:43:57.155603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.767 [2024-10-08 18:43:57.155628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.767 qpair failed and we were unable to recover it. 00:33:28.767 [2024-10-08 18:43:57.155801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.767 [2024-10-08 18:43:57.155858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.767 qpair failed and we were unable to recover it. 00:33:28.767 [2024-10-08 18:43:57.156003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.767 [2024-10-08 18:43:57.156042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.767 qpair failed and we were unable to recover it. 00:33:28.767 [2024-10-08 18:43:57.156156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.767 [2024-10-08 18:43:57.156195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.767 qpair failed and we were unable to recover it. 00:33:28.767 [2024-10-08 18:43:57.156361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.767 [2024-10-08 18:43:57.156402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.767 qpair failed and we were unable to recover it. 00:33:28.768 [2024-10-08 18:43:57.156561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.768 [2024-10-08 18:43:57.156586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.768 qpair failed and we were unable to recover it. 
00:33:28.768 [2024-10-08 18:43:57.156732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.768 [2024-10-08 18:43:57.156759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.768 qpair failed and we were unable to recover it. 00:33:28.768 [2024-10-08 18:43:57.156888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.768 [2024-10-08 18:43:57.156914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.768 qpair failed and we were unable to recover it. 00:33:28.768 [2024-10-08 18:43:57.157047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.768 [2024-10-08 18:43:57.157087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.768 qpair failed and we were unable to recover it. 00:33:28.768 [2024-10-08 18:43:57.157231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.768 [2024-10-08 18:43:57.157272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.768 qpair failed and we were unable to recover it. 00:33:28.768 [2024-10-08 18:43:57.157420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.768 [2024-10-08 18:43:57.157461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.768 qpair failed and we were unable to recover it. 00:33:28.768 [2024-10-08 18:43:57.157603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.768 [2024-10-08 18:43:57.157644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.768 qpair failed and we were unable to recover it. 00:33:28.768 [2024-10-08 18:43:57.157775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.768 [2024-10-08 18:43:57.157802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.768 qpair failed and we were unable to recover it. 00:33:28.768 [2024-10-08 18:43:57.157945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.768 [2024-10-08 18:43:57.157971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.768 qpair failed and we were unable to recover it. 00:33:28.768 [2024-10-08 18:43:57.158126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.768 [2024-10-08 18:43:57.158152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.768 qpair failed and we were unable to recover it. 00:33:28.768 [2024-10-08 18:43:57.158310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.768 [2024-10-08 18:43:57.158337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.768 qpair failed and we were unable to recover it. 
00:33:28.768 [2024-10-08 18:43:57.158481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.768 [2024-10-08 18:43:57.158521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.768 qpair failed and we were unable to recover it. 00:33:28.768 [2024-10-08 18:43:57.158638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.768 [2024-10-08 18:43:57.158696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.768 qpair failed and we were unable to recover it. 00:33:28.768 [2024-10-08 18:43:57.158821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.768 [2024-10-08 18:43:57.158848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.768 qpair failed and we were unable to recover it. 00:33:28.768 [2024-10-08 18:43:57.159026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.768 [2024-10-08 18:43:57.159051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.768 qpair failed and we were unable to recover it. 00:33:28.768 [2024-10-08 18:43:57.159250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.768 [2024-10-08 18:43:57.159285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.768 qpair failed and we were unable to recover it. 00:33:28.768 [2024-10-08 18:43:57.159498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.768 [2024-10-08 18:43:57.159528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.768 qpair failed and we were unable to recover it. 00:33:28.768 [2024-10-08 18:43:57.159700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.768 [2024-10-08 18:43:57.159739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.768 qpair failed and we were unable to recover it. 00:33:28.768 [2024-10-08 18:43:57.159850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.768 [2024-10-08 18:43:57.159898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.768 qpair failed and we were unable to recover it. 00:33:28.768 [2024-10-08 18:43:57.160016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.768 [2024-10-08 18:43:57.160062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.768 qpair failed and we were unable to recover it. 00:33:28.768 [2024-10-08 18:43:57.160176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.768 [2024-10-08 18:43:57.160224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.768 qpair failed and we were unable to recover it. 
00:33:28.768 [2024-10-08 18:43:57.160410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.768 [2024-10-08 18:43:57.160436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.768 qpair failed and we were unable to recover it. 00:33:28.768 [2024-10-08 18:43:57.160598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.768 [2024-10-08 18:43:57.160624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.768 qpair failed and we were unable to recover it. 00:33:28.768 [2024-10-08 18:43:57.160786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.768 [2024-10-08 18:43:57.160813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.768 qpair failed and we were unable to recover it. 00:33:28.768 [2024-10-08 18:43:57.160916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.768 [2024-10-08 18:43:57.160942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.768 qpair failed and we were unable to recover it. 00:33:28.768 [2024-10-08 18:43:57.161091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.768 [2024-10-08 18:43:57.161121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.768 qpair failed and we were unable to recover it. 00:33:28.768 [2024-10-08 18:43:57.161262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.768 [2024-10-08 18:43:57.161289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.768 qpair failed and we were unable to recover it. 00:33:28.768 [2024-10-08 18:43:57.161511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.768 [2024-10-08 18:43:57.161551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.768 qpair failed and we were unable to recover it. 00:33:28.768 [2024-10-08 18:43:57.161707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.768 [2024-10-08 18:43:57.161733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.768 qpair failed and we were unable to recover it. 00:33:28.768 [2024-10-08 18:43:57.161861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.768 [2024-10-08 18:43:57.161888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.768 qpair failed and we were unable to recover it. 00:33:28.768 [2024-10-08 18:43:57.162026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.768 [2024-10-08 18:43:57.162053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.768 qpair failed and we were unable to recover it. 
00:33:28.768 [2024-10-08 18:43:57.162175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.768 [2024-10-08 18:43:57.162202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.768 qpair failed and we were unable to recover it. 00:33:28.768 [2024-10-08 18:43:57.162379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.769 [2024-10-08 18:43:57.162405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.769 qpair failed and we were unable to recover it. 00:33:28.769 [2024-10-08 18:43:57.162532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.769 [2024-10-08 18:43:57.162558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.769 qpair failed and we were unable to recover it. 00:33:28.769 [2024-10-08 18:43:57.162684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.769 [2024-10-08 18:43:57.162711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.769 qpair failed and we were unable to recover it. 00:33:28.769 [2024-10-08 18:43:57.162824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.769 [2024-10-08 18:43:57.162850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.769 qpair failed and we were unable to recover it. 00:33:28.769 [2024-10-08 18:43:57.162975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.769 [2024-10-08 18:43:57.163001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.769 qpair failed and we were unable to recover it. 00:33:28.769 [2024-10-08 18:43:57.163168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.769 [2024-10-08 18:43:57.163193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.769 qpair failed and we were unable to recover it. 00:33:28.769 [2024-10-08 18:43:57.163423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.769 [2024-10-08 18:43:57.163448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.769 qpair failed and we were unable to recover it. 00:33:28.769 [2024-10-08 18:43:57.163559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.769 [2024-10-08 18:43:57.163585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.769 qpair failed and we were unable to recover it. 00:33:28.769 [2024-10-08 18:43:57.163746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.769 [2024-10-08 18:43:57.163773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.769 qpair failed and we were unable to recover it. 
00:33:28.769 [2024-10-08 18:43:57.163893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.769 [2024-10-08 18:43:57.163934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.769 qpair failed and we were unable to recover it. 00:33:28.769 [2024-10-08 18:43:57.164087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.769 [2024-10-08 18:43:57.164112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.769 qpair failed and we were unable to recover it. 00:33:28.769 [2024-10-08 18:43:57.164289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.769 [2024-10-08 18:43:57.164318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.769 qpair failed and we were unable to recover it. 00:33:28.769 [2024-10-08 18:43:57.164487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.769 [2024-10-08 18:43:57.164512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.769 qpair failed and we were unable to recover it. 00:33:28.769 [2024-10-08 18:43:57.164717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.769 [2024-10-08 18:43:57.164743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.769 qpair failed and we were unable to recover it. 00:33:28.769 [2024-10-08 18:43:57.164879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.769 [2024-10-08 18:43:57.164903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.769 qpair failed and we were unable to recover it. 00:33:28.769 [2024-10-08 18:43:57.165041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.769 [2024-10-08 18:43:57.165081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.769 qpair failed and we were unable to recover it. 00:33:28.769 [2024-10-08 18:43:57.165219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.769 [2024-10-08 18:43:57.165259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.769 qpair failed and we were unable to recover it. 00:33:28.769 [2024-10-08 18:43:57.165368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.769 [2024-10-08 18:43:57.165393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.769 qpair failed and we were unable to recover it. 00:33:28.769 [2024-10-08 18:43:57.165515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.769 [2024-10-08 18:43:57.165540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.769 qpair failed and we were unable to recover it. 
00:33:28.769 [2024-10-08 18:43:57.165735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.769 [2024-10-08 18:43:57.165762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.769 qpair failed and we were unable to recover it. 00:33:28.769 [2024-10-08 18:43:57.165894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.769 [2024-10-08 18:43:57.165928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.769 qpair failed and we were unable to recover it. 00:33:28.769 [2024-10-08 18:43:57.166058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.769 [2024-10-08 18:43:57.166099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.769 qpair failed and we were unable to recover it. 00:33:28.769 [2024-10-08 18:43:57.166208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.769 [2024-10-08 18:43:57.166245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.769 qpair failed and we were unable to recover it. 00:33:28.769 [2024-10-08 18:43:57.166388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.769 [2024-10-08 18:43:57.166413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.769 qpair failed and we were unable to recover it. 00:33:28.769 [2024-10-08 18:43:57.166498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.769 [2024-10-08 18:43:57.166528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.769 qpair failed and we were unable to recover it. 00:33:28.769 [2024-10-08 18:43:57.166682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.769 [2024-10-08 18:43:57.166709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.769 qpair failed and we were unable to recover it. 00:33:28.769 [2024-10-08 18:43:57.166812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.769 [2024-10-08 18:43:57.166839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.769 qpair failed and we were unable to recover it. 00:33:28.769 [2024-10-08 18:43:57.166935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.769 [2024-10-08 18:43:57.166976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.769 qpair failed and we were unable to recover it. 00:33:28.769 [2024-10-08 18:43:57.167123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.769 [2024-10-08 18:43:57.167149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.769 qpair failed and we were unable to recover it. 
00:33:28.769 [2024-10-08 18:43:57.167315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.769 [2024-10-08 18:43:57.167339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.769 qpair failed and we were unable to recover it. 00:33:28.769 [2024-10-08 18:43:57.167444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.769 [2024-10-08 18:43:57.167469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.769 qpair failed and we were unable to recover it. 00:33:28.769 [2024-10-08 18:43:57.167659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.769 [2024-10-08 18:43:57.167701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.769 qpair failed and we were unable to recover it. 00:33:28.769 [2024-10-08 18:43:57.167834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.769 [2024-10-08 18:43:57.167860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.769 qpair failed and we were unable to recover it. 00:33:28.769 [2024-10-08 18:43:57.168031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.769 [2024-10-08 18:43:57.168083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.769 qpair failed and we were unable to recover it. 00:33:28.769 [2024-10-08 18:43:57.168223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.769 [2024-10-08 18:43:57.168278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.769 qpair failed and we were unable to recover it. 00:33:28.769 [2024-10-08 18:43:57.168457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.769 [2024-10-08 18:43:57.168482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.769 qpair failed and we were unable to recover it. 00:33:28.769 [2024-10-08 18:43:57.168610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.769 [2024-10-08 18:43:57.168656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.769 qpair failed and we were unable to recover it. 00:33:28.769 [2024-10-08 18:43:57.168792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.769 [2024-10-08 18:43:57.168850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.769 qpair failed and we were unable to recover it. 00:33:28.769 [2024-10-08 18:43:57.168992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.769 [2024-10-08 18:43:57.169056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.769 qpair failed and we were unable to recover it. 
00:33:28.769 [2024-10-08 18:43:57.169157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.770 [2024-10-08 18:43:57.169197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.770 qpair failed and we were unable to recover it. 00:33:28.770 [2024-10-08 18:43:57.169344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.770 [2024-10-08 18:43:57.169370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.770 qpair failed and we were unable to recover it. 00:33:28.770 [2024-10-08 18:43:57.169503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.770 [2024-10-08 18:43:57.169529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.770 qpair failed and we were unable to recover it. 00:33:28.770 [2024-10-08 18:43:57.169671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.770 [2024-10-08 18:43:57.169713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.770 qpair failed and we were unable to recover it. 00:33:28.770 [2024-10-08 18:43:57.169820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.770 [2024-10-08 18:43:57.169846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.770 qpair failed and we were unable to recover it. 00:33:28.770 [2024-10-08 18:43:57.169978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.770 [2024-10-08 18:43:57.170003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.770 qpair failed and we were unable to recover it. 00:33:28.770 [2024-10-08 18:43:57.170141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.770 [2024-10-08 18:43:57.170181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.770 qpair failed and we were unable to recover it. 00:33:28.770 [2024-10-08 18:43:57.170299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.770 [2024-10-08 18:43:57.170324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.770 qpair failed and we were unable to recover it. 00:33:28.770 [2024-10-08 18:43:57.170502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.770 [2024-10-08 18:43:57.170541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.770 qpair failed and we were unable to recover it. 00:33:28.770 [2024-10-08 18:43:57.170681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.770 [2024-10-08 18:43:57.170708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.770 qpair failed and we were unable to recover it. 
00:33:28.775 [2024-10-08 18:43:57.212785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.775 [2024-10-08 18:43:57.212836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.775 qpair failed and we were unable to recover it. 00:33:28.775 [2024-10-08 18:43:57.212977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.775 [2024-10-08 18:43:57.213030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.775 qpair failed and we were unable to recover it. 00:33:28.775 [2024-10-08 18:43:57.213198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.775 [2024-10-08 18:43:57.213243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.775 qpair failed and we were unable to recover it. 00:33:28.775 [2024-10-08 18:43:57.213437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.775 [2024-10-08 18:43:57.213462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.775 qpair failed and we were unable to recover it. 00:33:28.775 [2024-10-08 18:43:57.213612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.775 [2024-10-08 18:43:57.213636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.775 qpair failed and we were unable to recover it. 00:33:28.775 [2024-10-08 18:43:57.213828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.775 [2024-10-08 18:43:57.213883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.775 qpair failed and we were unable to recover it. 00:33:28.775 [2024-10-08 18:43:57.214017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.775 [2024-10-08 18:43:57.214073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.775 qpair failed and we were unable to recover it. 00:33:28.775 [2024-10-08 18:43:57.214248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.775 [2024-10-08 18:43:57.214273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.775 qpair failed and we were unable to recover it. 00:33:28.775 [2024-10-08 18:43:57.214452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.775 [2024-10-08 18:43:57.214476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.775 qpair failed and we were unable to recover it. 00:33:28.775 [2024-10-08 18:43:57.214580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.775 [2024-10-08 18:43:57.214620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.775 qpair failed and we were unable to recover it. 
00:33:28.775 [2024-10-08 18:43:57.214808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.775 [2024-10-08 18:43:57.214858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.775 qpair failed and we were unable to recover it. 00:33:28.775 [2024-10-08 18:43:57.215054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.775 [2024-10-08 18:43:57.215107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.775 qpair failed and we were unable to recover it. 00:33:28.775 [2024-10-08 18:43:57.215291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.775 [2024-10-08 18:43:57.215321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.775 qpair failed and we were unable to recover it. 00:33:28.775 [2024-10-08 18:43:57.215477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.775 [2024-10-08 18:43:57.215502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.775 qpair failed and we were unable to recover it. 00:33:28.775 [2024-10-08 18:43:57.215686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.775 [2024-10-08 18:43:57.215740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.775 qpair failed and we were unable to recover it. 00:33:28.775 [2024-10-08 18:43:57.215869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.775 [2024-10-08 18:43:57.215922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.775 qpair failed and we were unable to recover it. 00:33:28.775 [2024-10-08 18:43:57.216142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.775 [2024-10-08 18:43:57.216191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.775 qpair failed and we were unable to recover it. 00:33:28.775 [2024-10-08 18:43:57.216443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.775 [2024-10-08 18:43:57.216492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.775 qpair failed and we were unable to recover it. 00:33:28.775 [2024-10-08 18:43:57.216700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.775 [2024-10-08 18:43:57.216754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.775 qpair failed and we were unable to recover it. 00:33:28.775 [2024-10-08 18:43:57.216904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.775 [2024-10-08 18:43:57.216961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.775 qpair failed and we were unable to recover it. 
00:33:28.775 [2024-10-08 18:43:57.217178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.775 [2024-10-08 18:43:57.217226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.775 qpair failed and we were unable to recover it. 00:33:28.775 [2024-10-08 18:43:57.217474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.775 [2024-10-08 18:43:57.217499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.775 qpair failed and we were unable to recover it. 00:33:28.775 [2024-10-08 18:43:57.217775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.775 [2024-10-08 18:43:57.217826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.775 qpair failed and we were unable to recover it. 00:33:28.775 [2024-10-08 18:43:57.218010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.775 [2024-10-08 18:43:57.218064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.775 qpair failed and we were unable to recover it. 00:33:28.775 [2024-10-08 18:43:57.218243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.775 [2024-10-08 18:43:57.218292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.775 qpair failed and we were unable to recover it. 00:33:28.775 [2024-10-08 18:43:57.218539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.775 [2024-10-08 18:43:57.218564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.775 qpair failed and we were unable to recover it. 00:33:28.775 [2024-10-08 18:43:57.218765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.775 [2024-10-08 18:43:57.218792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.775 qpair failed and we were unable to recover it. 00:33:28.775 [2024-10-08 18:43:57.218934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.775 [2024-10-08 18:43:57.218978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.775 qpair failed and we were unable to recover it. 00:33:28.775 [2024-10-08 18:43:57.219115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.775 [2024-10-08 18:43:57.219168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.775 qpair failed and we were unable to recover it. 00:33:28.775 [2024-10-08 18:43:57.219340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.776 [2024-10-08 18:43:57.219365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.776 qpair failed and we were unable to recover it. 
00:33:28.776 [2024-10-08 18:43:57.219543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.776 [2024-10-08 18:43:57.219568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.776 qpair failed and we were unable to recover it. 00:33:28.776 [2024-10-08 18:43:57.219702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.776 [2024-10-08 18:43:57.219729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.776 qpair failed and we were unable to recover it. 00:33:28.776 [2024-10-08 18:43:57.219871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.776 [2024-10-08 18:43:57.219935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.776 qpair failed and we were unable to recover it. 00:33:28.776 [2024-10-08 18:43:57.220121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.776 [2024-10-08 18:43:57.220175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.776 qpair failed and we were unable to recover it. 00:33:28.776 [2024-10-08 18:43:57.220341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.776 [2024-10-08 18:43:57.220366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.776 qpair failed and we were unable to recover it. 00:33:28.776 [2024-10-08 18:43:57.220501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.776 [2024-10-08 18:43:57.220542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.776 qpair failed and we were unable to recover it. 00:33:28.776 [2024-10-08 18:43:57.220693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.776 [2024-10-08 18:43:57.220721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.776 qpair failed and we were unable to recover it. 00:33:28.776 [2024-10-08 18:43:57.220837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.776 [2024-10-08 18:43:57.220885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.776 qpair failed and we were unable to recover it. 00:33:28.776 [2024-10-08 18:43:57.220994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.776 [2024-10-08 18:43:57.221019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.776 qpair failed and we were unable to recover it. 00:33:28.776 [2024-10-08 18:43:57.221188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.776 [2024-10-08 18:43:57.221229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.776 qpair failed and we were unable to recover it. 
00:33:28.776 [2024-10-08 18:43:57.221325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.776 [2024-10-08 18:43:57.221350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.776 qpair failed and we were unable to recover it. 00:33:28.776 [2024-10-08 18:43:57.221488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.776 [2024-10-08 18:43:57.221513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.776 qpair failed and we were unable to recover it. 00:33:28.776 [2024-10-08 18:43:57.221658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.776 [2024-10-08 18:43:57.221703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.776 qpair failed and we were unable to recover it. 00:33:28.776 [2024-10-08 18:43:57.221804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.776 [2024-10-08 18:43:57.221830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.776 qpair failed and we were unable to recover it. 00:33:28.776 [2024-10-08 18:43:57.221986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.776 [2024-10-08 18:43:57.222013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.776 qpair failed and we were unable to recover it. 00:33:28.776 [2024-10-08 18:43:57.222148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.776 [2024-10-08 18:43:57.222189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.776 qpair failed and we were unable to recover it. 00:33:28.776 [2024-10-08 18:43:57.222294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.776 [2024-10-08 18:43:57.222326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.776 qpair failed and we were unable to recover it. 00:33:28.776 [2024-10-08 18:43:57.222519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.776 [2024-10-08 18:43:57.222559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.776 qpair failed and we were unable to recover it. 00:33:28.776 [2024-10-08 18:43:57.222740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.776 [2024-10-08 18:43:57.222765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.776 qpair failed and we were unable to recover it. 00:33:28.776 [2024-10-08 18:43:57.222865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.776 [2024-10-08 18:43:57.222890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.776 qpair failed and we were unable to recover it. 
00:33:28.776 [2024-10-08 18:43:57.223090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.776 [2024-10-08 18:43:57.223114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.776 qpair failed and we were unable to recover it. 00:33:28.776 [2024-10-08 18:43:57.223287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.776 [2024-10-08 18:43:57.223312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.776 qpair failed and we were unable to recover it. 00:33:28.776 [2024-10-08 18:43:57.223515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.776 [2024-10-08 18:43:57.223547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.776 qpair failed and we were unable to recover it. 00:33:28.776 [2024-10-08 18:43:57.223785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.776 [2024-10-08 18:43:57.223854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.776 qpair failed and we were unable to recover it. 00:33:28.776 [2024-10-08 18:43:57.224105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.776 [2024-10-08 18:43:57.224154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.776 qpair failed and we were unable to recover it. 00:33:28.776 [2024-10-08 18:43:57.224264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.776 [2024-10-08 18:43:57.224314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.776 qpair failed and we were unable to recover it. 00:33:28.776 [2024-10-08 18:43:57.224541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.776 [2024-10-08 18:43:57.224565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.776 qpair failed and we were unable to recover it. 00:33:28.776 [2024-10-08 18:43:57.224708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.776 [2024-10-08 18:43:57.224769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.776 qpair failed and we were unable to recover it. 00:33:28.776 [2024-10-08 18:43:57.225025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.776 [2024-10-08 18:43:57.225078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.776 qpair failed and we were unable to recover it. 00:33:28.776 [2024-10-08 18:43:57.225286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.776 [2024-10-08 18:43:57.225335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.776 qpair failed and we were unable to recover it. 
00:33:28.776 [2024-10-08 18:43:57.225456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.776 [2024-10-08 18:43:57.225481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.776 qpair failed and we were unable to recover it. 00:33:28.776 [2024-10-08 18:43:57.225657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.776 [2024-10-08 18:43:57.225684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.776 qpair failed and we were unable to recover it. 00:33:28.776 [2024-10-08 18:43:57.225800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.776 [2024-10-08 18:43:57.225857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.776 qpair failed and we were unable to recover it. 00:33:28.776 [2024-10-08 18:43:57.225974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.776 [2024-10-08 18:43:57.226038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.776 qpair failed and we were unable to recover it. 00:33:28.776 [2024-10-08 18:43:57.226232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.776 [2024-10-08 18:43:57.226280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.776 qpair failed and we were unable to recover it. 00:33:28.776 [2024-10-08 18:43:57.226439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.776 [2024-10-08 18:43:57.226463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.776 qpair failed and we were unable to recover it. 00:33:28.776 [2024-10-08 18:43:57.226696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.776 [2024-10-08 18:43:57.226723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.776 qpair failed and we were unable to recover it. 00:33:28.776 [2024-10-08 18:43:57.226880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.776 [2024-10-08 18:43:57.226939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.776 qpair failed and we were unable to recover it. 00:33:28.776 [2024-10-08 18:43:57.227059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.776 [2024-10-08 18:43:57.227125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.776 qpair failed and we were unable to recover it. 00:33:28.776 [2024-10-08 18:43:57.227330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.777 [2024-10-08 18:43:57.227378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.777 qpair failed and we were unable to recover it. 
00:33:28.777 [2024-10-08 18:43:57.227524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.777 [2024-10-08 18:43:57.227559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.777 qpair failed and we were unable to recover it. 00:33:28.777 [2024-10-08 18:43:57.227778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.777 [2024-10-08 18:43:57.227827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.777 qpair failed and we were unable to recover it. 00:33:28.777 [2024-10-08 18:43:57.228014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.777 [2024-10-08 18:43:57.228064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.777 qpair failed and we were unable to recover it. 00:33:28.777 [2024-10-08 18:43:57.228320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.777 [2024-10-08 18:43:57.228368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.777 qpair failed and we were unable to recover it. 00:33:28.777 [2024-10-08 18:43:57.228607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.777 [2024-10-08 18:43:57.228647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.777 qpair failed and we were unable to recover it. 00:33:28.777 [2024-10-08 18:43:57.228838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.777 [2024-10-08 18:43:57.228889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.777 qpair failed and we were unable to recover it. 00:33:28.777 [2024-10-08 18:43:57.229118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.777 [2024-10-08 18:43:57.229170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.777 qpair failed and we were unable to recover it. 00:33:28.777 [2024-10-08 18:43:57.229347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.777 [2024-10-08 18:43:57.229396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.777 qpair failed and we were unable to recover it. 00:33:28.777 [2024-10-08 18:43:57.229573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.777 [2024-10-08 18:43:57.229597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.777 qpair failed and we were unable to recover it. 00:33:28.777 [2024-10-08 18:43:57.229749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.777 [2024-10-08 18:43:57.229837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.777 qpair failed and we were unable to recover it. 
00:33:28.777 [2024-10-08 18:43:57.230066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.777 [2024-10-08 18:43:57.230116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.777 qpair failed and we were unable to recover it. 00:33:28.777 [2024-10-08 18:43:57.230397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.777 [2024-10-08 18:43:57.230446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.777 qpair failed and we were unable to recover it. 00:33:28.777 [2024-10-08 18:43:57.230678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.777 [2024-10-08 18:43:57.230704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.777 qpair failed and we were unable to recover it. 00:33:28.777 [2024-10-08 18:43:57.230837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.777 [2024-10-08 18:43:57.230892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.777 qpair failed and we were unable to recover it. 00:33:28.777 [2024-10-08 18:43:57.231068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.777 [2024-10-08 18:43:57.231116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.777 qpair failed and we were unable to recover it. 00:33:28.777 [2024-10-08 18:43:57.231323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.777 [2024-10-08 18:43:57.231372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.777 qpair failed and we were unable to recover it. 00:33:28.777 [2024-10-08 18:43:57.231612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.777 [2024-10-08 18:43:57.231658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.777 qpair failed and we were unable to recover it. 00:33:28.777 [2024-10-08 18:43:57.231779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.777 [2024-10-08 18:43:57.231866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.777 qpair failed and we were unable to recover it. 00:33:28.777 [2024-10-08 18:43:57.232082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.777 [2024-10-08 18:43:57.232131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.777 qpair failed and we were unable to recover it. 00:33:28.777 [2024-10-08 18:43:57.232350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.777 [2024-10-08 18:43:57.232402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.777 qpair failed and we were unable to recover it. 
00:33:28.777 [2024-10-08 18:43:57.232595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.777 [2024-10-08 18:43:57.232619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.777 qpair failed and we were unable to recover it. 00:33:28.777 [2024-10-08 18:43:57.232788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.777 [2024-10-08 18:43:57.232814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.777 qpair failed and we were unable to recover it. 00:33:28.777 [2024-10-08 18:43:57.233007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.777 [2024-10-08 18:43:57.233061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.777 qpair failed and we were unable to recover it. 00:33:28.777 [2024-10-08 18:43:57.233281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.777 [2024-10-08 18:43:57.233330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.777 qpair failed and we were unable to recover it. 00:33:28.777 [2024-10-08 18:43:57.233484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.777 [2024-10-08 18:43:57.233509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.777 qpair failed and we were unable to recover it. 00:33:28.777 [2024-10-08 18:43:57.233689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.777 [2024-10-08 18:43:57.233716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.777 qpair failed and we were unable to recover it. 00:33:28.777 [2024-10-08 18:43:57.233844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.777 [2024-10-08 18:43:57.233895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.777 qpair failed and we were unable to recover it. 00:33:28.777 [2024-10-08 18:43:57.234077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.777 [2024-10-08 18:43:57.234129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.777 qpair failed and we were unable to recover it. 00:33:28.777 [2024-10-08 18:43:57.234322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.777 [2024-10-08 18:43:57.234362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.777 qpair failed and we were unable to recover it. 00:33:28.777 [2024-10-08 18:43:57.234565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.777 [2024-10-08 18:43:57.234590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.777 qpair failed and we were unable to recover it. 
00:33:28.777 [2024-10-08 18:43:57.234780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.777 [2024-10-08 18:43:57.234842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.777 qpair failed and we were unable to recover it. 00:33:28.777 [2024-10-08 18:43:57.235049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.777 [2024-10-08 18:43:57.235099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.777 qpair failed and we were unable to recover it. 00:33:28.777 [2024-10-08 18:43:57.235361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.777 [2024-10-08 18:43:57.235411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.777 qpair failed and we were unable to recover it. 00:33:28.777 [2024-10-08 18:43:57.235641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.777 [2024-10-08 18:43:57.235686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.777 qpair failed and we were unable to recover it. 00:33:28.777 [2024-10-08 18:43:57.235851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.777 [2024-10-08 18:43:57.235903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.777 qpair failed and we were unable to recover it. 00:33:28.777 [2024-10-08 18:43:57.236104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.777 [2024-10-08 18:43:57.236155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.777 qpair failed and we were unable to recover it. 00:33:28.777 [2024-10-08 18:43:57.236412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.777 [2024-10-08 18:43:57.236460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.777 qpair failed and we were unable to recover it. 00:33:28.777 [2024-10-08 18:43:57.236665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.777 [2024-10-08 18:43:57.236691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.777 qpair failed and we were unable to recover it. 00:33:28.777 [2024-10-08 18:43:57.236824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.777 [2024-10-08 18:43:57.236879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.777 qpair failed and we were unable to recover it. 00:33:28.778 [2024-10-08 18:43:57.237050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.778 [2024-10-08 18:43:57.237099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.778 qpair failed and we were unable to recover it. 
00:33:28.778 [2024-10-08 18:43:57.237324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.778 [2024-10-08 18:43:57.237375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.778 qpair failed and we were unable to recover it. 00:33:28.778 [2024-10-08 18:43:57.237501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.778 [2024-10-08 18:43:57.237526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.778 qpair failed and we were unable to recover it. 00:33:28.778 [2024-10-08 18:43:57.237680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.778 [2024-10-08 18:43:57.237707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.778 qpair failed and we were unable to recover it. 00:33:28.778 [2024-10-08 18:43:57.237854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.778 [2024-10-08 18:43:57.237913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.778 qpair failed and we were unable to recover it. 00:33:28.778 [2024-10-08 18:43:57.238067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.778 [2024-10-08 18:43:57.238119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.778 qpair failed and we were unable to recover it. 00:33:28.778 [2024-10-08 18:43:57.238297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.778 [2024-10-08 18:43:57.238330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.778 qpair failed and we were unable to recover it. 00:33:28.778 [2024-10-08 18:43:57.238520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.778 [2024-10-08 18:43:57.238544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.778 qpair failed and we were unable to recover it. 00:33:28.778 [2024-10-08 18:43:57.238703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.778 [2024-10-08 18:43:57.238728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.778 qpair failed and we were unable to recover it. 00:33:28.778 [2024-10-08 18:43:57.238864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.778 [2024-10-08 18:43:57.238889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.778 qpair failed and we were unable to recover it. 00:33:28.778 [2024-10-08 18:43:57.239061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.778 [2024-10-08 18:43:57.239086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.778 qpair failed and we were unable to recover it. 
00:33:28.778 [2024-10-08 18:43:57.239213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.778 [2024-10-08 18:43:57.239253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.778 qpair failed and we were unable to recover it. 00:33:28.778 [2024-10-08 18:43:57.239409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.778 [2024-10-08 18:43:57.239434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.778 qpair failed and we were unable to recover it. 00:33:28.778 [2024-10-08 18:43:57.239557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.778 [2024-10-08 18:43:57.239582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.778 qpair failed and we were unable to recover it. 00:33:28.778 [2024-10-08 18:43:57.239795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.778 [2024-10-08 18:43:57.239820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.778 qpair failed and we were unable to recover it. 00:33:28.778 [2024-10-08 18:43:57.240073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.778 [2024-10-08 18:43:57.240121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.778 qpair failed and we were unable to recover it. 00:33:28.778 [2024-10-08 18:43:57.240235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.778 [2024-10-08 18:43:57.240286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.778 qpair failed and we were unable to recover it. 00:33:28.778 [2024-10-08 18:43:57.240483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.778 [2024-10-08 18:43:57.240508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.778 qpair failed and we were unable to recover it. 00:33:28.778 [2024-10-08 18:43:57.240754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.778 [2024-10-08 18:43:57.240819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.778 qpair failed and we were unable to recover it. 00:33:28.778 [2024-10-08 18:43:57.240984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.778 [2024-10-08 18:43:57.241031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.778 qpair failed and we were unable to recover it. 00:33:28.778 [2024-10-08 18:43:57.241247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.778 [2024-10-08 18:43:57.241300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.778 qpair failed and we were unable to recover it. 
00:33:28.778 [2024-10-08 18:43:57.241438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.778 [2024-10-08 18:43:57.241463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.778 qpair failed and we were unable to recover it. 00:33:28.778 [2024-10-08 18:43:57.241590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.778 [2024-10-08 18:43:57.241615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.778 qpair failed and we were unable to recover it. 00:33:28.778 [2024-10-08 18:43:57.241824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.778 [2024-10-08 18:43:57.241855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.778 qpair failed and we were unable to recover it. 00:33:28.778 [2024-10-08 18:43:57.241974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.778 [2024-10-08 18:43:57.241998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.778 qpair failed and we were unable to recover it. 00:33:28.778 [2024-10-08 18:43:57.242144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.778 [2024-10-08 18:43:57.242180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.778 qpair failed and we were unable to recover it. 00:33:28.778 [2024-10-08 18:43:57.242396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.778 [2024-10-08 18:43:57.242421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.778 qpair failed and we were unable to recover it. 00:33:28.778 [2024-10-08 18:43:57.242573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.778 [2024-10-08 18:43:57.242598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.778 qpair failed and we were unable to recover it. 00:33:28.778 [2024-10-08 18:43:57.242773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.778 [2024-10-08 18:43:57.242842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.778 qpair failed and we were unable to recover it. 00:33:28.778 [2024-10-08 18:43:57.243017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.778 [2024-10-08 18:43:57.243041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.778 qpair failed and we were unable to recover it. 00:33:28.778 [2024-10-08 18:43:57.243204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.778 [2024-10-08 18:43:57.243258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:28.778 qpair failed and we were unable to recover it. 
00:33:28.778 [2024-10-08 18:43:57.243405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.778 [2024-10-08 18:43:57.243429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420
00:33:28.778 qpair failed and we were unable to recover it.
00:33:28.778 [2024-10-08 18:43:57.243699 through 18:43:57.289714] The same three-message failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error; qpair failed and we were unable to recover it) repeats for every reconnection attempt in this window, console timestamps 00:33:28.778 through 00:33:29.069. Nearly all attempts report tqpair=0x7f18d0000b90; a few attempts around 18:43:57.2574-57.2580 report tqpair=0x912630 instead. Every attempt targets addr=10.0.0.2, port=4420 and ends with "qpair failed and we were unable to recover it."
00:33:29.069 [2024-10-08 18:43:57.289906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.069 [2024-10-08 18:43:57.289932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.069 qpair failed and we were unable to recover it. 00:33:29.069 [2024-10-08 18:43:57.290124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.069 [2024-10-08 18:43:57.290175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.069 qpair failed and we were unable to recover it. 00:33:29.069 [2024-10-08 18:43:57.290346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.069 [2024-10-08 18:43:57.290373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.069 qpair failed and we were unable to recover it. 00:33:29.069 [2024-10-08 18:43:57.290576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.069 [2024-10-08 18:43:57.290602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.069 qpair failed and we were unable to recover it. 00:33:29.069 [2024-10-08 18:43:57.290761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.069 [2024-10-08 18:43:57.290814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.069 qpair failed and we were unable to recover it. 00:33:29.069 [2024-10-08 18:43:57.290998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.069 [2024-10-08 18:43:57.291051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.069 qpair failed and we were unable to recover it. 00:33:29.069 [2024-10-08 18:43:57.291183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.069 [2024-10-08 18:43:57.291232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.069 qpair failed and we were unable to recover it. 00:33:29.069 [2024-10-08 18:43:57.291399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.069 [2024-10-08 18:43:57.291425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.069 qpair failed and we were unable to recover it. 00:33:29.069 [2024-10-08 18:43:57.291514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.069 [2024-10-08 18:43:57.291540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.069 qpair failed and we were unable to recover it. 00:33:29.069 [2024-10-08 18:43:57.291717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.069 [2024-10-08 18:43:57.291784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.069 qpair failed and we were unable to recover it. 
00:33:29.069 [2024-10-08 18:43:57.292038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.069 [2024-10-08 18:43:57.292090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.069 qpair failed and we were unable to recover it. 00:33:29.069 [2024-10-08 18:43:57.292247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.069 [2024-10-08 18:43:57.292298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.069 qpair failed and we were unable to recover it. 00:33:29.069 [2024-10-08 18:43:57.292458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.069 [2024-10-08 18:43:57.292484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.069 qpair failed and we were unable to recover it. 00:33:29.069 [2024-10-08 18:43:57.292582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.069 [2024-10-08 18:43:57.292609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.069 qpair failed and we were unable to recover it. 00:33:29.069 [2024-10-08 18:43:57.292757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.069 [2024-10-08 18:43:57.292811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.069 qpair failed and we were unable to recover it. 00:33:29.069 [2024-10-08 18:43:57.292974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.069 [2024-10-08 18:43:57.293001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.069 qpair failed and we were unable to recover it. 00:33:29.069 [2024-10-08 18:43:57.293133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.069 [2024-10-08 18:43:57.293159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.069 qpair failed and we were unable to recover it. 00:33:29.069 [2024-10-08 18:43:57.293290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.069 [2024-10-08 18:43:57.293316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.069 qpair failed and we were unable to recover it. 00:33:29.069 [2024-10-08 18:43:57.293497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.069 [2024-10-08 18:43:57.293523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.069 qpair failed and we were unable to recover it. 00:33:29.069 [2024-10-08 18:43:57.293643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.069 [2024-10-08 18:43:57.293675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.069 qpair failed and we were unable to recover it. 
00:33:29.069 [2024-10-08 18:43:57.293904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.069 [2024-10-08 18:43:57.293930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.069 qpair failed and we were unable to recover it. 00:33:29.069 [2024-10-08 18:43:57.294110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.069 [2024-10-08 18:43:57.294136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.069 qpair failed and we were unable to recover it. 00:33:29.069 [2024-10-08 18:43:57.294339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.069 [2024-10-08 18:43:57.294365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.069 qpair failed and we were unable to recover it. 00:33:29.069 [2024-10-08 18:43:57.294501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.069 [2024-10-08 18:43:57.294527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.069 qpair failed and we were unable to recover it. 00:33:29.069 [2024-10-08 18:43:57.294632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.069 [2024-10-08 18:43:57.294672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.069 qpair failed and we were unable to recover it. 00:33:29.069 [2024-10-08 18:43:57.294855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.069 [2024-10-08 18:43:57.294907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.069 qpair failed and we were unable to recover it. 00:33:29.069 [2024-10-08 18:43:57.295154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.069 [2024-10-08 18:43:57.295203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.069 qpair failed and we were unable to recover it. 00:33:29.069 [2024-10-08 18:43:57.295329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.069 [2024-10-08 18:43:57.295379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.069 qpair failed and we were unable to recover it. 00:33:29.069 [2024-10-08 18:43:57.295579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.069 [2024-10-08 18:43:57.295605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.069 qpair failed and we were unable to recover it. 00:33:29.069 [2024-10-08 18:43:57.295792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.070 [2024-10-08 18:43:57.295843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.070 qpair failed and we were unable to recover it. 
00:33:29.070 [2024-10-08 18:43:57.295968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.070 [2024-10-08 18:43:57.296024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.070 qpair failed and we were unable to recover it. 00:33:29.070 [2024-10-08 18:43:57.296211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.070 [2024-10-08 18:43:57.296262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.070 qpair failed and we were unable to recover it. 00:33:29.070 [2024-10-08 18:43:57.296499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.070 [2024-10-08 18:43:57.296526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.070 qpair failed and we were unable to recover it. 00:33:29.070 [2024-10-08 18:43:57.296682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.070 [2024-10-08 18:43:57.296714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.070 qpair failed and we were unable to recover it. 00:33:29.070 [2024-10-08 18:43:57.296850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.070 [2024-10-08 18:43:57.296904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.070 qpair failed and we were unable to recover it. 00:33:29.070 [2024-10-08 18:43:57.297086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.070 [2024-10-08 18:43:57.297130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.070 qpair failed and we were unable to recover it. 00:33:29.070 [2024-10-08 18:43:57.297268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.070 [2024-10-08 18:43:57.297320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.070 qpair failed and we were unable to recover it. 00:33:29.070 [2024-10-08 18:43:57.297451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.070 [2024-10-08 18:43:57.297477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.070 qpair failed and we were unable to recover it. 00:33:29.070 [2024-10-08 18:43:57.297600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.070 [2024-10-08 18:43:57.297626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.070 qpair failed and we were unable to recover it. 00:33:29.070 [2024-10-08 18:43:57.297786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.070 [2024-10-08 18:43:57.297839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.070 qpair failed and we were unable to recover it. 
00:33:29.070 [2024-10-08 18:43:57.298028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.070 [2024-10-08 18:43:57.298077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.070 qpair failed and we were unable to recover it. 00:33:29.070 [2024-10-08 18:43:57.298242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.070 [2024-10-08 18:43:57.298297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.070 qpair failed and we were unable to recover it. 00:33:29.070 [2024-10-08 18:43:57.298429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.070 [2024-10-08 18:43:57.298455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.070 qpair failed and we were unable to recover it. 00:33:29.070 [2024-10-08 18:43:57.298577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.070 [2024-10-08 18:43:57.298603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.070 qpair failed and we were unable to recover it. 00:33:29.070 [2024-10-08 18:43:57.298767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.070 [2024-10-08 18:43:57.298803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.070 qpair failed and we were unable to recover it. 00:33:29.070 [2024-10-08 18:43:57.298992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.070 [2024-10-08 18:43:57.299019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.070 qpair failed and we were unable to recover it. 00:33:29.070 [2024-10-08 18:43:57.299182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.070 [2024-10-08 18:43:57.299208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.070 qpair failed and we were unable to recover it. 00:33:29.070 [2024-10-08 18:43:57.299439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.070 [2024-10-08 18:43:57.299465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.070 qpair failed and we were unable to recover it. 00:33:29.070 [2024-10-08 18:43:57.299719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.070 [2024-10-08 18:43:57.299778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.070 qpair failed and we were unable to recover it. 00:33:29.070 [2024-10-08 18:43:57.299973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.070 [2024-10-08 18:43:57.300031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.070 qpair failed and we were unable to recover it. 
00:33:29.070 [2024-10-08 18:43:57.300288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.070 [2024-10-08 18:43:57.300338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.070 qpair failed and we were unable to recover it. 00:33:29.070 [2024-10-08 18:43:57.300498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.070 [2024-10-08 18:43:57.300525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.070 qpair failed and we were unable to recover it. 00:33:29.070 [2024-10-08 18:43:57.300748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.070 [2024-10-08 18:43:57.300797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.070 qpair failed and we were unable to recover it. 00:33:29.070 [2024-10-08 18:43:57.300998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.070 [2024-10-08 18:43:57.301046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.070 qpair failed and we were unable to recover it. 00:33:29.070 [2024-10-08 18:43:57.301236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.070 [2024-10-08 18:43:57.301288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.070 qpair failed and we were unable to recover it. 00:33:29.070 [2024-10-08 18:43:57.301520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.070 [2024-10-08 18:43:57.301546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.070 qpair failed and we were unable to recover it. 00:33:29.070 [2024-10-08 18:43:57.301719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.070 [2024-10-08 18:43:57.301777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.070 qpair failed and we were unable to recover it. 00:33:29.070 [2024-10-08 18:43:57.301928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.070 [2024-10-08 18:43:57.301992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.070 qpair failed and we were unable to recover it. 00:33:29.070 [2024-10-08 18:43:57.302186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.070 [2024-10-08 18:43:57.302237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.070 qpair failed and we were unable to recover it. 00:33:29.070 [2024-10-08 18:43:57.302447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.070 [2024-10-08 18:43:57.302496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.070 qpair failed and we were unable to recover it. 
00:33:29.070 [2024-10-08 18:43:57.302688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.070 [2024-10-08 18:43:57.302715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.070 qpair failed and we were unable to recover it. 00:33:29.070 [2024-10-08 18:43:57.302963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.070 [2024-10-08 18:43:57.303013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.070 qpair failed and we were unable to recover it. 00:33:29.070 [2024-10-08 18:43:57.303230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.070 [2024-10-08 18:43:57.303284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.070 qpair failed and we were unable to recover it. 00:33:29.070 [2024-10-08 18:43:57.303471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.070 [2024-10-08 18:43:57.303522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.070 qpair failed and we were unable to recover it. 00:33:29.070 [2024-10-08 18:43:57.303764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.070 [2024-10-08 18:43:57.303814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.070 qpair failed and we were unable to recover it. 00:33:29.070 [2024-10-08 18:43:57.303949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.070 [2024-10-08 18:43:57.304012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.070 qpair failed and we were unable to recover it. 00:33:29.070 [2024-10-08 18:43:57.304247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.070 [2024-10-08 18:43:57.304296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.070 qpair failed and we were unable to recover it. 00:33:29.070 [2024-10-08 18:43:57.304480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.070 [2024-10-08 18:43:57.304506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.070 qpair failed and we were unable to recover it. 00:33:29.070 [2024-10-08 18:43:57.304700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.070 [2024-10-08 18:43:57.304730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.070 qpair failed and we were unable to recover it. 00:33:29.071 [2024-10-08 18:43:57.304961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.071 [2024-10-08 18:43:57.305010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.071 qpair failed and we were unable to recover it. 
00:33:29.071 [2024-10-08 18:43:57.305220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.071 [2024-10-08 18:43:57.305274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.071 qpair failed and we were unable to recover it. 00:33:29.071 [2024-10-08 18:43:57.305454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.071 [2024-10-08 18:43:57.305480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.071 qpair failed and we were unable to recover it. 00:33:29.071 [2024-10-08 18:43:57.305596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.071 [2024-10-08 18:43:57.305622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.071 qpair failed and we were unable to recover it. 00:33:29.071 [2024-10-08 18:43:57.305853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.071 [2024-10-08 18:43:57.305907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.071 qpair failed and we were unable to recover it. 00:33:29.071 [2024-10-08 18:43:57.306123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.071 [2024-10-08 18:43:57.306175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.071 qpair failed and we were unable to recover it. 00:33:29.071 [2024-10-08 18:43:57.306415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.071 [2024-10-08 18:43:57.306464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.071 qpair failed and we were unable to recover it. 00:33:29.071 [2024-10-08 18:43:57.306647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.071 [2024-10-08 18:43:57.306679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.071 qpair failed and we were unable to recover it. 00:33:29.071 [2024-10-08 18:43:57.306870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.071 [2024-10-08 18:43:57.306897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.071 qpair failed and we were unable to recover it. 00:33:29.071 [2024-10-08 18:43:57.307048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.071 [2024-10-08 18:43:57.307101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.071 qpair failed and we were unable to recover it. 00:33:29.071 [2024-10-08 18:43:57.307233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.071 [2024-10-08 18:43:57.307294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.071 qpair failed and we were unable to recover it. 
00:33:29.071 [2024-10-08 18:43:57.307475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.071 [2024-10-08 18:43:57.307524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.071 qpair failed and we were unable to recover it. 00:33:29.071 [2024-10-08 18:43:57.307746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.071 [2024-10-08 18:43:57.307798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.071 qpair failed and we were unable to recover it. 00:33:29.071 [2024-10-08 18:43:57.307931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.071 [2024-10-08 18:43:57.307957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.071 qpair failed and we were unable to recover it. 00:33:29.071 [2024-10-08 18:43:57.308085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.071 [2024-10-08 18:43:57.308112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.071 qpair failed and we were unable to recover it. 00:33:29.071 [2024-10-08 18:43:57.308342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.071 [2024-10-08 18:43:57.308368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.071 qpair failed and we were unable to recover it. 00:33:29.071 [2024-10-08 18:43:57.308481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.071 [2024-10-08 18:43:57.308507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.071 qpair failed and we were unable to recover it. 00:33:29.071 [2024-10-08 18:43:57.308663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.071 [2024-10-08 18:43:57.308690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.071 qpair failed and we were unable to recover it. 00:33:29.071 [2024-10-08 18:43:57.308918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.071 [2024-10-08 18:43:57.308944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.071 qpair failed and we were unable to recover it. 00:33:29.071 [2024-10-08 18:43:57.309083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.071 [2024-10-08 18:43:57.309110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.071 qpair failed and we were unable to recover it. 00:33:29.071 [2024-10-08 18:43:57.309239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.071 [2024-10-08 18:43:57.309269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.071 qpair failed and we were unable to recover it. 
00:33:29.071 [2024-10-08 18:43:57.309431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.071 [2024-10-08 18:43:57.309457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.071 qpair failed and we were unable to recover it. 00:33:29.071 [2024-10-08 18:43:57.309581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.071 [2024-10-08 18:43:57.309607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.071 qpair failed and we were unable to recover it. 00:33:29.071 [2024-10-08 18:43:57.309841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.071 [2024-10-08 18:43:57.309869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.071 qpair failed and we were unable to recover it. 00:33:29.071 [2024-10-08 18:43:57.310051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.071 [2024-10-08 18:43:57.310078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.071 qpair failed and we were unable to recover it. 00:33:29.071 [2024-10-08 18:43:57.310282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.071 [2024-10-08 18:43:57.310336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.071 qpair failed and we were unable to recover it. 00:33:29.071 [2024-10-08 18:43:57.310521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.071 [2024-10-08 18:43:57.310547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.071 qpair failed and we were unable to recover it. 00:33:29.071 [2024-10-08 18:43:57.310793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.071 [2024-10-08 18:43:57.310847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.071 qpair failed and we were unable to recover it. 00:33:29.071 [2024-10-08 18:43:57.311075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.071 [2024-10-08 18:43:57.311128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.071 qpair failed and we were unable to recover it. 00:33:29.071 [2024-10-08 18:43:57.311292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.071 [2024-10-08 18:43:57.311343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.071 qpair failed and we were unable to recover it. 00:33:29.071 [2024-10-08 18:43:57.311493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.071 [2024-10-08 18:43:57.311524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.071 qpair failed and we were unable to recover it. 
00:33:29.071 [2024-10-08 18:43:57.311647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.071 [2024-10-08 18:43:57.311680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.071 qpair failed and we were unable to recover it. 00:33:29.071 [2024-10-08 18:43:57.311907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.071 [2024-10-08 18:43:57.311933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.071 qpair failed and we were unable to recover it. 00:33:29.071 [2024-10-08 18:43:57.312160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.071 [2024-10-08 18:43:57.312212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.071 qpair failed and we were unable to recover it. 00:33:29.071 [2024-10-08 18:43:57.312457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.071 [2024-10-08 18:43:57.312514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.071 qpair failed and we were unable to recover it. 00:33:29.071 [2024-10-08 18:43:57.312722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.071 [2024-10-08 18:43:57.312775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.071 qpair failed and we were unable to recover it. 00:33:29.071 [2024-10-08 18:43:57.312982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.071 [2024-10-08 18:43:57.313036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.071 qpair failed and we were unable to recover it. 00:33:29.071 [2024-10-08 18:43:57.313275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.071 [2024-10-08 18:43:57.313331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.071 qpair failed and we were unable to recover it. 00:33:29.071 [2024-10-08 18:43:57.313562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.071 [2024-10-08 18:43:57.313588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.071 qpair failed and we were unable to recover it. 00:33:29.071 [2024-10-08 18:43:57.313837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.071 [2024-10-08 18:43:57.313889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.071 qpair failed and we were unable to recover it. 00:33:29.071 [2024-10-08 18:43:57.314096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.072 [2024-10-08 18:43:57.314149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.072 qpair failed and we were unable to recover it. 
00:33:29.072 [2024-10-08 18:43:57.314285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.072 [2024-10-08 18:43:57.314330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.072 qpair failed and we were unable to recover it. 00:33:29.072 [2024-10-08 18:43:57.314456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.072 [2024-10-08 18:43:57.314483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.072 qpair failed and we were unable to recover it. 00:33:29.072 [2024-10-08 18:43:57.314628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.072 [2024-10-08 18:43:57.314660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.072 qpair failed and we were unable to recover it. 00:33:29.072 [2024-10-08 18:43:57.314844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.072 [2024-10-08 18:43:57.314887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.072 qpair failed and we were unable to recover it. 00:33:29.072 [2024-10-08 18:43:57.315072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.072 [2024-10-08 18:43:57.315115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.072 qpair failed and we were unable to recover it. 00:33:29.072 [2024-10-08 18:43:57.315339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.072 [2024-10-08 18:43:57.315391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.072 qpair failed and we were unable to recover it. 00:33:29.072 [2024-10-08 18:43:57.315519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.072 [2024-10-08 18:43:57.315546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.072 qpair failed and we were unable to recover it. 00:33:29.072 [2024-10-08 18:43:57.315690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.072 [2024-10-08 18:43:57.315718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.072 qpair failed and we were unable to recover it. 00:33:29.072 [2024-10-08 18:43:57.315871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.072 [2024-10-08 18:43:57.315932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.072 qpair failed and we were unable to recover it. 00:33:29.072 [2024-10-08 18:43:57.316121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.072 [2024-10-08 18:43:57.316164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.072 qpair failed and we were unable to recover it. 
00:33:29.072 [2024-10-08 18:43:57.316347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.072 [2024-10-08 18:43:57.316399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.072 qpair failed and we were unable to recover it. 00:33:29.072 [2024-10-08 18:43:57.316620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.072 [2024-10-08 18:43:57.316646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.072 qpair failed and we were unable to recover it. 00:33:29.072 [2024-10-08 18:43:57.316827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.072 [2024-10-08 18:43:57.316876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.072 qpair failed and we were unable to recover it. 00:33:29.072 [2024-10-08 18:43:57.317087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.072 [2024-10-08 18:43:57.317138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.072 qpair failed and we were unable to recover it. 00:33:29.072 [2024-10-08 18:43:57.317341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.072 [2024-10-08 18:43:57.317391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.072 qpair failed and we were unable to recover it. 00:33:29.072 [2024-10-08 18:43:57.317567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.072 [2024-10-08 18:43:57.317593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.072 qpair failed and we were unable to recover it. 00:33:29.072 [2024-10-08 18:43:57.317845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.072 [2024-10-08 18:43:57.317898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.072 qpair failed and we were unable to recover it. 00:33:29.072 [2024-10-08 18:43:57.318013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.072 [2024-10-08 18:43:57.318068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.072 qpair failed and we were unable to recover it. 00:33:29.072 [2024-10-08 18:43:57.318280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.072 [2024-10-08 18:43:57.318332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.072 qpair failed and we were unable to recover it. 00:33:29.072 [2024-10-08 18:43:57.318512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.072 [2024-10-08 18:43:57.318545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.072 qpair failed and we were unable to recover it. 
00:33:29.072 [2024-10-08 18:43:57.318753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.072 [2024-10-08 18:43:57.318805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.072 qpair failed and we were unable to recover it. 00:33:29.072 [2024-10-08 18:43:57.319005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.072 [2024-10-08 18:43:57.319061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.072 qpair failed and we were unable to recover it. 00:33:29.072 [2024-10-08 18:43:57.319237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.072 [2024-10-08 18:43:57.319289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.072 qpair failed and we were unable to recover it. 00:33:29.072 [2024-10-08 18:43:57.319528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.072 [2024-10-08 18:43:57.319554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.072 qpair failed and we were unable to recover it. 00:33:29.072 [2024-10-08 18:43:57.319709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.072 [2024-10-08 18:43:57.319775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.072 qpair failed and we were unable to recover it. 00:33:29.072 [2024-10-08 18:43:57.319937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.072 [2024-10-08 18:43:57.319986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.072 qpair failed and we were unable to recover it. 00:33:29.072 [2024-10-08 18:43:57.320172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.072 [2024-10-08 18:43:57.320224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.072 qpair failed and we were unable to recover it. 00:33:29.072 [2024-10-08 18:43:57.320386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.072 [2024-10-08 18:43:57.320412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.072 qpair failed and we were unable to recover it. 00:33:29.072 [2024-10-08 18:43:57.320507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.072 [2024-10-08 18:43:57.320533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.072 qpair failed and we were unable to recover it. 00:33:29.072 [2024-10-08 18:43:57.320721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.072 [2024-10-08 18:43:57.320779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.072 qpair failed and we were unable to recover it. 
00:33:29.072 [2024-10-08 18:43:57.320938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.072 [2024-10-08 18:43:57.320983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.072 qpair failed and we were unable to recover it. 00:33:29.072 [2024-10-08 18:43:57.321154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.072 [2024-10-08 18:43:57.321203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.072 qpair failed and we were unable to recover it. 00:33:29.072 [2024-10-08 18:43:57.321352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.072 [2024-10-08 18:43:57.321378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.072 qpair failed and we were unable to recover it. 00:33:29.072 [2024-10-08 18:43:57.321610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.072 [2024-10-08 18:43:57.321636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.072 qpair failed and we were unable to recover it. 00:33:29.072 [2024-10-08 18:43:57.321860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.072 [2024-10-08 18:43:57.321916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.072 qpair failed and we were unable to recover it. 00:33:29.072 [2024-10-08 18:43:57.322111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.072 [2024-10-08 18:43:57.322157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.072 qpair failed and we were unable to recover it. 00:33:29.072 [2024-10-08 18:43:57.322298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.072 [2024-10-08 18:43:57.322348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.072 qpair failed and we were unable to recover it. 00:33:29.072 [2024-10-08 18:43:57.322449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.072 [2024-10-08 18:43:57.322481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.072 qpair failed and we were unable to recover it. 00:33:29.072 [2024-10-08 18:43:57.322718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.072 [2024-10-08 18:43:57.322745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.072 qpair failed and we were unable to recover it. 00:33:29.072 [2024-10-08 18:43:57.322956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.073 [2024-10-08 18:43:57.323008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.073 qpair failed and we were unable to recover it. 
00:33:29.073 [2024-10-08 18:43:57.323228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.073 [2024-10-08 18:43:57.323277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.073 qpair failed and we were unable to recover it. 00:33:29.073 [2024-10-08 18:43:57.323508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.073 [2024-10-08 18:43:57.323560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.073 qpair failed and we were unable to recover it. 00:33:29.073 [2024-10-08 18:43:57.323749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.073 [2024-10-08 18:43:57.323808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.073 qpair failed and we were unable to recover it. 00:33:29.073 [2024-10-08 18:43:57.323993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.073 [2024-10-08 18:43:57.324045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.073 qpair failed and we were unable to recover it. 00:33:29.073 [2024-10-08 18:43:57.324240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.073 [2024-10-08 18:43:57.324291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.073 qpair failed and we were unable to recover it. 00:33:29.073 [2024-10-08 18:43:57.324453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.073 [2024-10-08 18:43:57.324479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.073 qpair failed and we were unable to recover it. 00:33:29.073 [2024-10-08 18:43:57.324679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.073 [2024-10-08 18:43:57.324707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.073 qpair failed and we were unable to recover it. 00:33:29.073 [2024-10-08 18:43:57.324859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.073 [2024-10-08 18:43:57.324922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.073 qpair failed and we were unable to recover it. 00:33:29.073 [2024-10-08 18:43:57.325192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.073 [2024-10-08 18:43:57.325243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.073 qpair failed and we were unable to recover it. 00:33:29.073 [2024-10-08 18:43:57.325438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.073 [2024-10-08 18:43:57.325493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.073 qpair failed and we were unable to recover it. 
00:33:29.073 [2024-10-08 18:43:57.325640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.073 [2024-10-08 18:43:57.325672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.073 qpair failed and we were unable to recover it. 00:33:29.073 [2024-10-08 18:43:57.325845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.073 [2024-10-08 18:43:57.325872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.073 qpair failed and we were unable to recover it. 00:33:29.073 [2024-10-08 18:43:57.326027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.073 [2024-10-08 18:43:57.326083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.073 qpair failed and we were unable to recover it. 00:33:29.073 [2024-10-08 18:43:57.326277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.073 [2024-10-08 18:43:57.326343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.073 qpair failed and we were unable to recover it. 00:33:29.073 [2024-10-08 18:43:57.326505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.073 [2024-10-08 18:43:57.326531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.073 qpair failed and we were unable to recover it. 00:33:29.073 [2024-10-08 18:43:57.326682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.073 [2024-10-08 18:43:57.326709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.073 qpair failed and we were unable to recover it. 00:33:29.073 [2024-10-08 18:43:57.326891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.073 [2024-10-08 18:43:57.326957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.073 qpair failed and we were unable to recover it. 00:33:29.073 [2024-10-08 18:43:57.327194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.073 [2024-10-08 18:43:57.327243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.073 qpair failed and we were unable to recover it. 00:33:29.073 [2024-10-08 18:43:57.327481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.073 [2024-10-08 18:43:57.327532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.073 qpair failed and we were unable to recover it. 00:33:29.073 [2024-10-08 18:43:57.327745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.073 [2024-10-08 18:43:57.327802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.073 qpair failed and we were unable to recover it. 
00:33:29.073 [2024-10-08 18:43:57.327983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.073 [2024-10-08 18:43:57.328029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.073 qpair failed and we were unable to recover it. 00:33:29.073 [2024-10-08 18:43:57.328272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.073 [2024-10-08 18:43:57.328324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.073 qpair failed and we were unable to recover it. 00:33:29.073 [2024-10-08 18:43:57.328513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.073 [2024-10-08 18:43:57.328540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.073 qpair failed and we were unable to recover it. 00:33:29.073 [2024-10-08 18:43:57.328753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.073 [2024-10-08 18:43:57.328812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.073 qpair failed and we were unable to recover it. 00:33:29.073 [2024-10-08 18:43:57.329029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.073 [2024-10-08 18:43:57.329079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.073 qpair failed and we were unable to recover it. 00:33:29.073 [2024-10-08 18:43:57.329295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.073 [2024-10-08 18:43:57.329341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.073 qpair failed and we were unable to recover it. 00:33:29.073 [2024-10-08 18:43:57.329488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.073 [2024-10-08 18:43:57.329514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.073 qpair failed and we were unable to recover it. 00:33:29.073 [2024-10-08 18:43:57.329683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.073 [2024-10-08 18:43:57.329737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.073 qpair failed and we were unable to recover it. 00:33:29.073 [2024-10-08 18:43:57.329924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.073 [2024-10-08 18:43:57.329969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.073 qpair failed and we were unable to recover it. 00:33:29.073 [2024-10-08 18:43:57.330170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.073 [2024-10-08 18:43:57.330222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.073 qpair failed and we were unable to recover it. 
00:33:29.073 [2024-10-08 18:43:57.330416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.073 [2024-10-08 18:43:57.330442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.073 qpair failed and we were unable to recover it. 00:33:29.073 [2024-10-08 18:43:57.330617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.073 [2024-10-08 18:43:57.330643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.073 qpair failed and we were unable to recover it. 00:33:29.073 [2024-10-08 18:43:57.330797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.073 [2024-10-08 18:43:57.330855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.073 qpair failed and we were unable to recover it. 00:33:29.073 [2024-10-08 18:43:57.331087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.073 [2024-10-08 18:43:57.331137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.073 qpair failed and we were unable to recover it. 00:33:29.073 [2024-10-08 18:43:57.331387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.073 [2024-10-08 18:43:57.331439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.073 qpair failed and we were unable to recover it. 00:33:29.073 [2024-10-08 18:43:57.331602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.074 [2024-10-08 18:43:57.331629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.074 qpair failed and we were unable to recover it. 00:33:29.074 [2024-10-08 18:43:57.331797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.074 [2024-10-08 18:43:57.331858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.074 qpair failed and we were unable to recover it. 00:33:29.074 [2024-10-08 18:43:57.332094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.074 [2024-10-08 18:43:57.332138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.074 qpair failed and we were unable to recover it. 00:33:29.074 [2024-10-08 18:43:57.332353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.074 [2024-10-08 18:43:57.332404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.074 qpair failed and we were unable to recover it. 00:33:29.074 [2024-10-08 18:43:57.332607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.074 [2024-10-08 18:43:57.332634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.074 qpair failed and we were unable to recover it. 
00:33:29.074 [2024-10-08 18:43:57.332769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.074 [2024-10-08 18:43:57.332806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.074 qpair failed and we were unable to recover it. 00:33:29.074 [2024-10-08 18:43:57.332951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.074 [2024-10-08 18:43:57.333008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.074 qpair failed and we were unable to recover it. 00:33:29.074 [2024-10-08 18:43:57.333184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.074 [2024-10-08 18:43:57.333238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.074 qpair failed and we were unable to recover it. 00:33:29.074 [2024-10-08 18:43:57.333413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.074 [2024-10-08 18:43:57.333465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.074 qpair failed and we were unable to recover it. 00:33:29.074 [2024-10-08 18:43:57.333622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.074 [2024-10-08 18:43:57.333655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.074 qpair failed and we were unable to recover it. 00:33:29.074 [2024-10-08 18:43:57.333849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.074 [2024-10-08 18:43:57.333903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.074 qpair failed and we were unable to recover it. 00:33:29.074 [2024-10-08 18:43:57.334096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.074 [2024-10-08 18:43:57.334146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.074 qpair failed and we were unable to recover it. 00:33:29.074 [2024-10-08 18:43:57.334377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.074 [2024-10-08 18:43:57.334430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.074 qpair failed and we were unable to recover it. 00:33:29.074 [2024-10-08 18:43:57.334589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.074 [2024-10-08 18:43:57.334615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.074 qpair failed and we were unable to recover it. 00:33:29.074 [2024-10-08 18:43:57.334709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.074 [2024-10-08 18:43:57.334735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.074 qpair failed and we were unable to recover it. 
00:33:29.074 [2024-10-08 18:43:57.334904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.074 [2024-10-08 18:43:57.334961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.074 qpair failed and we were unable to recover it. 00:33:29.074 [2024-10-08 18:43:57.335206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.074 [2024-10-08 18:43:57.335258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.074 qpair failed and we were unable to recover it. 00:33:29.074 [2024-10-08 18:43:57.335415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.074 [2024-10-08 18:43:57.335466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.074 qpair failed and we were unable to recover it. 00:33:29.074 [2024-10-08 18:43:57.335695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.074 [2024-10-08 18:43:57.335722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.074 qpair failed and we were unable to recover it. 00:33:29.074 [2024-10-08 18:43:57.335844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.074 [2024-10-08 18:43:57.335896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.074 qpair failed and we were unable to recover it. 00:33:29.074 [2024-10-08 18:43:57.336148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.074 [2024-10-08 18:43:57.336199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.074 qpair failed and we were unable to recover it. 00:33:29.074 [2024-10-08 18:43:57.336381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.074 [2024-10-08 18:43:57.336433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.074 qpair failed and we were unable to recover it. 00:33:29.074 [2024-10-08 18:43:57.336600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.074 [2024-10-08 18:43:57.336627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.074 qpair failed and we were unable to recover it. 00:33:29.074 [2024-10-08 18:43:57.336785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.074 [2024-10-08 18:43:57.336843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.074 qpair failed and we were unable to recover it. 00:33:29.074 [2024-10-08 18:43:57.337034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.074 [2024-10-08 18:43:57.337088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.074 qpair failed and we were unable to recover it. 
00:33:29.074 [2024-10-08 18:43:57.337247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.074 [2024-10-08 18:43:57.337301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.074 qpair failed and we were unable to recover it. 00:33:29.074 [2024-10-08 18:43:57.337467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.074 [2024-10-08 18:43:57.337493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.074 qpair failed and we were unable to recover it. 00:33:29.074 [2024-10-08 18:43:57.337648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.074 [2024-10-08 18:43:57.337680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.074 qpair failed and we were unable to recover it. 00:33:29.074 [2024-10-08 18:43:57.337870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.074 [2024-10-08 18:43:57.337922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.074 qpair failed and we were unable to recover it. 00:33:29.074 [2024-10-08 18:43:57.338121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.074 [2024-10-08 18:43:57.338173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.074 qpair failed and we were unable to recover it. 00:33:29.074 [2024-10-08 18:43:57.338351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.074 [2024-10-08 18:43:57.338401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.074 qpair failed and we were unable to recover it. 00:33:29.074 [2024-10-08 18:43:57.338567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.074 [2024-10-08 18:43:57.338594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.074 qpair failed and we were unable to recover it. 00:33:29.074 [2024-10-08 18:43:57.338809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.074 [2024-10-08 18:43:57.338836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.074 qpair failed and we were unable to recover it. 00:33:29.074 [2024-10-08 18:43:57.339075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.074 [2024-10-08 18:43:57.339123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.074 qpair failed and we were unable to recover it. 00:33:29.074 [2024-10-08 18:43:57.339341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.074 [2024-10-08 18:43:57.339394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.074 qpair failed and we were unable to recover it. 
00:33:29.074 [2024-10-08 18:43:57.339537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.074 [2024-10-08 18:43:57.339564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.074 qpair failed and we were unable to recover it. 00:33:29.074 [2024-10-08 18:43:57.339748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.074 [2024-10-08 18:43:57.339799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.074 qpair failed and we were unable to recover it. 00:33:29.074 [2024-10-08 18:43:57.340034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.074 [2024-10-08 18:43:57.340088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.074 qpair failed and we were unable to recover it. 00:33:29.074 [2024-10-08 18:43:57.340286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.074 [2024-10-08 18:43:57.340337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.074 qpair failed and we were unable to recover it. 00:33:29.074 [2024-10-08 18:43:57.340556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.074 [2024-10-08 18:43:57.340583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.074 qpair failed and we were unable to recover it. 00:33:29.075 [2024-10-08 18:43:57.340784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.075 [2024-10-08 18:43:57.340837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.075 qpair failed and we were unable to recover it. 00:33:29.075 [2024-10-08 18:43:57.340990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.075 [2024-10-08 18:43:57.341044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.075 qpair failed and we were unable to recover it. 00:33:29.075 [2024-10-08 18:43:57.341259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.075 [2024-10-08 18:43:57.341308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.075 qpair failed and we were unable to recover it. 00:33:29.075 [2024-10-08 18:43:57.341509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.075 [2024-10-08 18:43:57.341535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.075 qpair failed and we were unable to recover it. 00:33:29.075 [2024-10-08 18:43:57.341717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.075 [2024-10-08 18:43:57.341776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.075 qpair failed and we were unable to recover it. 
00:33:29.075 [2024-10-08 18:43:57.342018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.075 [2024-10-08 18:43:57.342073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.075 qpair failed and we were unable to recover it. 00:33:29.075 [2024-10-08 18:43:57.342221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.075 [2024-10-08 18:43:57.342271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.075 qpair failed and we were unable to recover it. 00:33:29.075 [2024-10-08 18:43:57.342428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.075 [2024-10-08 18:43:57.342455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.075 qpair failed and we were unable to recover it. 00:33:29.075 [2024-10-08 18:43:57.342561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.075 [2024-10-08 18:43:57.342587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.075 qpair failed and we were unable to recover it. 00:33:29.075 [2024-10-08 18:43:57.342803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.075 [2024-10-08 18:43:57.342859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.075 qpair failed and we were unable to recover it. 00:33:29.075 [2024-10-08 18:43:57.343090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.075 [2024-10-08 18:43:57.343138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.075 qpair failed and we were unable to recover it. 00:33:29.075 [2024-10-08 18:43:57.343380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.075 [2024-10-08 18:43:57.343426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.075 qpair failed and we were unable to recover it. 00:33:29.075 [2024-10-08 18:43:57.343554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.075 [2024-10-08 18:43:57.343580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.075 qpair failed and we were unable to recover it. 00:33:29.075 [2024-10-08 18:43:57.343721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.075 [2024-10-08 18:43:57.343777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.075 qpair failed and we were unable to recover it. 00:33:29.075 [2024-10-08 18:43:57.344021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.075 [2024-10-08 18:43:57.344070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.075 qpair failed and we were unable to recover it. 
00:33:29.075 [2024-10-08 18:43:57.344284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.075 [2024-10-08 18:43:57.344337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.075 qpair failed and we were unable to recover it. 00:33:29.075 [2024-10-08 18:43:57.344518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.075 [2024-10-08 18:43:57.344545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.075 qpair failed and we were unable to recover it. 00:33:29.075 [2024-10-08 18:43:57.344783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.075 [2024-10-08 18:43:57.344837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.075 qpair failed and we were unable to recover it. 00:33:29.075 [2024-10-08 18:43:57.345072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.075 [2024-10-08 18:43:57.345121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.075 qpair failed and we were unable to recover it. 00:33:29.075 [2024-10-08 18:43:57.345335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.075 [2024-10-08 18:43:57.345386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.075 qpair failed and we were unable to recover it. 00:33:29.075 [2024-10-08 18:43:57.345605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.075 [2024-10-08 18:43:57.345632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.075 qpair failed and we were unable to recover it. 00:33:29.075 [2024-10-08 18:43:57.345826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.075 [2024-10-08 18:43:57.345872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.075 qpair failed and we were unable to recover it. 00:33:29.075 [2024-10-08 18:43:57.346083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.075 [2024-10-08 18:43:57.346138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.075 qpair failed and we were unable to recover it. 00:33:29.075 [2024-10-08 18:43:57.346378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.075 [2024-10-08 18:43:57.346431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.075 qpair failed and we were unable to recover it. 00:33:29.075 [2024-10-08 18:43:57.346613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.075 [2024-10-08 18:43:57.346644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.075 qpair failed and we were unable to recover it. 
00:33:29.075 [2024-10-08 18:43:57.346810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.075 [2024-10-08 18:43:57.346857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.075 qpair failed and we were unable to recover it. 00:33:29.075 [2024-10-08 18:43:57.347034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.075 [2024-10-08 18:43:57.347084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.075 qpair failed and we were unable to recover it. 00:33:29.075 [2024-10-08 18:43:57.347315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.075 [2024-10-08 18:43:57.347367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.075 qpair failed and we were unable to recover it. 00:33:29.075 [2024-10-08 18:43:57.347536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.075 [2024-10-08 18:43:57.347563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.075 qpair failed and we were unable to recover it. 00:33:29.075 [2024-10-08 18:43:57.347723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.075 [2024-10-08 18:43:57.347750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.075 qpair failed and we were unable to recover it. 00:33:29.075 [2024-10-08 18:43:57.347885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.075 [2024-10-08 18:43:57.347936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.075 qpair failed and we were unable to recover it. 00:33:29.075 [2024-10-08 18:43:57.348188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.075 [2024-10-08 18:43:57.348240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.075 qpair failed and we were unable to recover it. 00:33:29.075 [2024-10-08 18:43:57.348424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.075 [2024-10-08 18:43:57.348476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.075 qpair failed and we were unable to recover it. 00:33:29.075 [2024-10-08 18:43:57.348640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.075 [2024-10-08 18:43:57.348675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.075 qpair failed and we were unable to recover it. 00:33:29.075 [2024-10-08 18:43:57.348864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.075 [2024-10-08 18:43:57.348915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.075 qpair failed and we were unable to recover it. 
00:33:29.075 [2024-10-08 18:43:57.349056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.075 [2024-10-08 18:43:57.349105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.075 qpair failed and we were unable to recover it. 00:33:29.075 [2024-10-08 18:43:57.349347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.075 [2024-10-08 18:43:57.349400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.075 qpair failed and we were unable to recover it. 00:33:29.075 [2024-10-08 18:43:57.349571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.075 [2024-10-08 18:43:57.349598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.075 qpair failed and we were unable to recover it. 00:33:29.075 [2024-10-08 18:43:57.349738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.075 [2024-10-08 18:43:57.349766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.075 qpair failed and we were unable to recover it. 00:33:29.075 [2024-10-08 18:43:57.349880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.076 [2024-10-08 18:43:57.349957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.076 qpair failed and we were unable to recover it. 00:33:29.076 [2024-10-08 18:43:57.350173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.076 [2024-10-08 18:43:57.350227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.076 qpair failed and we were unable to recover it. 00:33:29.076 [2024-10-08 18:43:57.350480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.076 [2024-10-08 18:43:57.350532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.076 qpair failed and we were unable to recover it. 00:33:29.076 [2024-10-08 18:43:57.350780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.076 [2024-10-08 18:43:57.350837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.076 qpair failed and we were unable to recover it. 00:33:29.076 [2024-10-08 18:43:57.351081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.076 [2024-10-08 18:43:57.351133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.076 qpair failed and we were unable to recover it. 00:33:29.076 [2024-10-08 18:43:57.351332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.076 [2024-10-08 18:43:57.351381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.076 qpair failed and we were unable to recover it. 
00:33:29.076 [2024-10-08 18:43:57.351552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.076 [2024-10-08 18:43:57.351578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.076 qpair failed and we were unable to recover it. 00:33:29.076 [2024-10-08 18:43:57.351746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.076 [2024-10-08 18:43:57.351799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.076 qpair failed and we were unable to recover it. 00:33:29.076 [2024-10-08 18:43:57.352010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.076 [2024-10-08 18:43:57.352063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.076 qpair failed and we were unable to recover it. 00:33:29.076 [2024-10-08 18:43:57.352277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.076 [2024-10-08 18:43:57.352328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.076 qpair failed and we were unable to recover it. 00:33:29.076 [2024-10-08 18:43:57.352500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.076 [2024-10-08 18:43:57.352526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.076 qpair failed and we were unable to recover it. 00:33:29.076 [2024-10-08 18:43:57.352640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.076 [2024-10-08 18:43:57.352672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.076 qpair failed and we were unable to recover it. 00:33:29.076 [2024-10-08 18:43:57.352843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.076 [2024-10-08 18:43:57.352904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.076 qpair failed and we were unable to recover it. 00:33:29.076 [2024-10-08 18:43:57.353073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.076 [2024-10-08 18:43:57.353124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.076 qpair failed and we were unable to recover it. 00:33:29.076 [2024-10-08 18:43:57.353300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.076 [2024-10-08 18:43:57.353352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.076 qpair failed and we were unable to recover it. 00:33:29.076 [2024-10-08 18:43:57.353601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.076 [2024-10-08 18:43:57.353628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.076 qpair failed and we were unable to recover it. 
00:33:29.076 [2024-10-08 18:43:57.353805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.076 [2024-10-08 18:43:57.353855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.076 qpair failed and we were unable to recover it. 00:33:29.076 [2024-10-08 18:43:57.354055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.076 [2024-10-08 18:43:57.354106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.076 qpair failed and we were unable to recover it. 00:33:29.076 [2024-10-08 18:43:57.354298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.076 [2024-10-08 18:43:57.354352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.076 qpair failed and we were unable to recover it. 00:33:29.076 [2024-10-08 18:43:57.354584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.076 [2024-10-08 18:43:57.354610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.076 qpair failed and we were unable to recover it. 00:33:29.076 [2024-10-08 18:43:57.354822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.076 [2024-10-08 18:43:57.354876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.076 qpair failed and we were unable to recover it. 00:33:29.076 [2024-10-08 18:43:57.355118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.076 [2024-10-08 18:43:57.355172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.076 qpair failed and we were unable to recover it. 00:33:29.076 [2024-10-08 18:43:57.355379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.076 [2024-10-08 18:43:57.355430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.076 qpair failed and we were unable to recover it. 00:33:29.076 [2024-10-08 18:43:57.355639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.076 [2024-10-08 18:43:57.355682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.076 qpair failed and we were unable to recover it. 00:33:29.076 [2024-10-08 18:43:57.355789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.076 [2024-10-08 18:43:57.355815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.076 qpair failed and we were unable to recover it. 00:33:29.076 [2024-10-08 18:43:57.355947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.076 [2024-10-08 18:43:57.356007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.076 qpair failed and we were unable to recover it. 
00:33:29.076 [2024-10-08 18:43:57.356251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.076 [2024-10-08 18:43:57.356307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.076 qpair failed and we were unable to recover it. 00:33:29.076 [2024-10-08 18:43:57.356508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.076 [2024-10-08 18:43:57.356558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.076 qpair failed and we were unable to recover it. 00:33:29.076 [2024-10-08 18:43:57.356770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.076 [2024-10-08 18:43:57.356817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.076 qpair failed and we were unable to recover it. 00:33:29.076 [2024-10-08 18:43:57.357064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.076 [2024-10-08 18:43:57.357126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.076 qpair failed and we were unable to recover it. 00:33:29.076 [2024-10-08 18:43:57.357370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.076 [2024-10-08 18:43:57.357422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.076 qpair failed and we were unable to recover it. 00:33:29.076 [2024-10-08 18:43:57.357603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.076 [2024-10-08 18:43:57.357635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.076 qpair failed and we were unable to recover it. 00:33:29.076 [2024-10-08 18:43:57.357771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.076 [2024-10-08 18:43:57.357798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.076 qpair failed and we were unable to recover it. 00:33:29.076 [2024-10-08 18:43:57.357942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.076 [2024-10-08 18:43:57.357992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.076 qpair failed and we were unable to recover it. 00:33:29.076 [2024-10-08 18:43:57.358126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.076 [2024-10-08 18:43:57.358179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.076 qpair failed and we were unable to recover it. 00:33:29.076 [2024-10-08 18:43:57.358363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.076 [2024-10-08 18:43:57.358418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.076 qpair failed and we were unable to recover it. 
00:33:29.076 [2024-10-08 18:43:57.358600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.076 [2024-10-08 18:43:57.358626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.076 qpair failed and we were unable to recover it. 00:33:29.076 [2024-10-08 18:43:57.358844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.076 [2024-10-08 18:43:57.358899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.076 qpair failed and we were unable to recover it. 00:33:29.076 [2024-10-08 18:43:57.359106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.076 [2024-10-08 18:43:57.359158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.076 qpair failed and we were unable to recover it. 00:33:29.076 [2024-10-08 18:43:57.359378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.076 [2024-10-08 18:43:57.359430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.076 qpair failed and we were unable to recover it. 00:33:29.076 [2024-10-08 18:43:57.359562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.077 [2024-10-08 18:43:57.359588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.077 qpair failed and we were unable to recover it. 00:33:29.077 [2024-10-08 18:43:57.359831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.077 [2024-10-08 18:43:57.359888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.077 qpair failed and we were unable to recover it. 00:33:29.077 [2024-10-08 18:43:57.360024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.077 [2024-10-08 18:43:57.360073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.077 qpair failed and we were unable to recover it. 00:33:29.077 [2024-10-08 18:43:57.360233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.077 [2024-10-08 18:43:57.360283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.077 qpair failed and we were unable to recover it. 00:33:29.077 [2024-10-08 18:43:57.360414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.077 [2024-10-08 18:43:57.360440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.077 qpair failed and we were unable to recover it. 00:33:29.077 [2024-10-08 18:43:57.360612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.077 [2024-10-08 18:43:57.360639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.077 qpair failed and we were unable to recover it. 
00:33:29.082 [2024-10-08 18:43:57.408488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.082 [2024-10-08 18:43:57.408512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.082 qpair failed and we were unable to recover it. 00:33:29.082 [2024-10-08 18:43:57.408687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.082 [2024-10-08 18:43:57.408713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.082 qpair failed and we were unable to recover it. 00:33:29.082 [2024-10-08 18:43:57.408956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.082 [2024-10-08 18:43:57.409001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.082 qpair failed and we were unable to recover it. 00:33:29.082 [2024-10-08 18:43:57.409226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.082 [2024-10-08 18:43:57.409275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.082 qpair failed and we were unable to recover it. 00:33:29.082 [2024-10-08 18:43:57.409520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.082 [2024-10-08 18:43:57.409544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.082 qpair failed and we were unable to recover it. 00:33:29.082 [2024-10-08 18:43:57.409725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.082 [2024-10-08 18:43:57.409790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.082 qpair failed and we were unable to recover it. 00:33:29.082 [2024-10-08 18:43:57.409977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.082 [2024-10-08 18:43:57.410032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.082 qpair failed and we were unable to recover it. 00:33:29.082 [2024-10-08 18:43:57.410191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.082 [2024-10-08 18:43:57.410242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.082 qpair failed and we were unable to recover it. 00:33:29.082 [2024-10-08 18:43:57.410393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.082 [2024-10-08 18:43:57.410418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.082 qpair failed and we were unable to recover it. 00:33:29.082 [2024-10-08 18:43:57.410527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.082 [2024-10-08 18:43:57.410552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.082 qpair failed and we were unable to recover it. 
00:33:29.082 [2024-10-08 18:43:57.410688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.082 [2024-10-08 18:43:57.410715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.082 qpair failed and we were unable to recover it. 00:33:29.082 [2024-10-08 18:43:57.410898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.082 [2024-10-08 18:43:57.410925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.082 qpair failed and we were unable to recover it. 00:33:29.082 [2024-10-08 18:43:57.411116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.082 [2024-10-08 18:43:57.411143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.082 qpair failed and we were unable to recover it. 00:33:29.082 [2024-10-08 18:43:57.411388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.082 [2024-10-08 18:43:57.411412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.082 qpair failed and we were unable to recover it. 00:33:29.082 [2024-10-08 18:43:57.411628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.082 [2024-10-08 18:43:57.411674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.082 qpair failed and we were unable to recover it. 00:33:29.082 [2024-10-08 18:43:57.411835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.082 [2024-10-08 18:43:57.411886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.082 qpair failed and we were unable to recover it. 00:33:29.082 [2024-10-08 18:43:57.412135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.082 [2024-10-08 18:43:57.412185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.082 qpair failed and we were unable to recover it. 00:33:29.082 [2024-10-08 18:43:57.412380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.082 [2024-10-08 18:43:57.412430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.082 qpair failed and we were unable to recover it. 00:33:29.082 [2024-10-08 18:43:57.412687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.082 [2024-10-08 18:43:57.412713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.082 qpair failed and we were unable to recover it. 00:33:29.082 [2024-10-08 18:43:57.412880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.082 [2024-10-08 18:43:57.412926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.082 qpair failed and we were unable to recover it. 
00:33:29.082 [2024-10-08 18:43:57.413077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.082 [2024-10-08 18:43:57.413127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.082 qpair failed and we were unable to recover it. 00:33:29.082 [2024-10-08 18:43:57.413345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.082 [2024-10-08 18:43:57.413397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.082 qpair failed and we were unable to recover it. 00:33:29.082 [2024-10-08 18:43:57.413538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.082 [2024-10-08 18:43:57.413562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.082 qpair failed and we were unable to recover it. 00:33:29.083 [2024-10-08 18:43:57.413695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.083 [2024-10-08 18:43:57.413721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.083 qpair failed and we were unable to recover it. 00:33:29.083 [2024-10-08 18:43:57.413895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.083 [2024-10-08 18:43:57.413947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.083 qpair failed and we were unable to recover it. 00:33:29.083 [2024-10-08 18:43:57.414168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.083 [2024-10-08 18:43:57.414220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.083 qpair failed and we were unable to recover it. 00:33:29.083 [2024-10-08 18:43:57.414436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.083 [2024-10-08 18:43:57.414485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.083 qpair failed and we were unable to recover it. 00:33:29.083 [2024-10-08 18:43:57.414736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.083 [2024-10-08 18:43:57.414763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.083 qpair failed and we were unable to recover it. 00:33:29.083 [2024-10-08 18:43:57.414951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.083 [2024-10-08 18:43:57.415009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.083 qpair failed and we were unable to recover it. 00:33:29.083 [2024-10-08 18:43:57.415166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.083 [2024-10-08 18:43:57.415190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.083 qpair failed and we were unable to recover it. 
00:33:29.083 [2024-10-08 18:43:57.415394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.083 [2024-10-08 18:43:57.415446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.083 qpair failed and we were unable to recover it. 00:33:29.083 [2024-10-08 18:43:57.415578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.083 [2024-10-08 18:43:57.415602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.083 qpair failed and we were unable to recover it. 00:33:29.083 [2024-10-08 18:43:57.415871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.083 [2024-10-08 18:43:57.415921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.083 qpair failed and we were unable to recover it. 00:33:29.083 [2024-10-08 18:43:57.416109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.083 [2024-10-08 18:43:57.416173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.083 qpair failed and we were unable to recover it. 00:33:29.083 [2024-10-08 18:43:57.416426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.083 [2024-10-08 18:43:57.416475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.083 qpair failed and we were unable to recover it. 00:33:29.083 [2024-10-08 18:43:57.416680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.083 [2024-10-08 18:43:57.416705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.083 qpair failed and we were unable to recover it. 00:33:29.083 [2024-10-08 18:43:57.416856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.083 [2024-10-08 18:43:57.416914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.083 qpair failed and we were unable to recover it. 00:33:29.083 [2024-10-08 18:43:57.417137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.083 [2024-10-08 18:43:57.417188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.083 qpair failed and we were unable to recover it. 00:33:29.083 [2024-10-08 18:43:57.417344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.083 [2024-10-08 18:43:57.417389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.083 qpair failed and we were unable to recover it. 00:33:29.083 [2024-10-08 18:43:57.417556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.083 [2024-10-08 18:43:57.417580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.083 qpair failed and we were unable to recover it. 
00:33:29.083 [2024-10-08 18:43:57.417763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.083 [2024-10-08 18:43:57.417811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.083 qpair failed and we were unable to recover it. 00:33:29.083 [2024-10-08 18:43:57.418038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.083 [2024-10-08 18:43:57.418085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.083 qpair failed and we were unable to recover it. 00:33:29.083 [2024-10-08 18:43:57.418250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.083 [2024-10-08 18:43:57.418297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.083 qpair failed and we were unable to recover it. 00:33:29.083 [2024-10-08 18:43:57.418478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.083 [2024-10-08 18:43:57.418502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.083 qpair failed and we were unable to recover it. 00:33:29.083 [2024-10-08 18:43:57.418697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.083 [2024-10-08 18:43:57.418723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.083 qpair failed and we were unable to recover it. 00:33:29.083 [2024-10-08 18:43:57.418913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.083 [2024-10-08 18:43:57.418963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.083 qpair failed and we were unable to recover it. 00:33:29.083 [2024-10-08 18:43:57.419126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.083 [2024-10-08 18:43:57.419180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.083 qpair failed and we were unable to recover it. 00:33:29.083 [2024-10-08 18:43:57.419427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.083 [2024-10-08 18:43:57.419476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.083 qpair failed and we were unable to recover it. 00:33:29.083 [2024-10-08 18:43:57.419725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.083 [2024-10-08 18:43:57.419751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.083 qpair failed and we were unable to recover it. 00:33:29.083 [2024-10-08 18:43:57.420008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.083 [2024-10-08 18:43:57.420057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.083 qpair failed and we were unable to recover it. 
00:33:29.083 [2024-10-08 18:43:57.420240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.083 [2024-10-08 18:43:57.420289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.083 qpair failed and we were unable to recover it. 00:33:29.083 [2024-10-08 18:43:57.420537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.083 [2024-10-08 18:43:57.420562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.083 qpair failed and we were unable to recover it. 00:33:29.083 [2024-10-08 18:43:57.420823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.083 [2024-10-08 18:43:57.420873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.083 qpair failed and we were unable to recover it. 00:33:29.083 [2024-10-08 18:43:57.421128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.083 [2024-10-08 18:43:57.421178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.083 qpair failed and we were unable to recover it. 00:33:29.083 [2024-10-08 18:43:57.421428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.083 [2024-10-08 18:43:57.421477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.083 qpair failed and we were unable to recover it. 00:33:29.083 [2024-10-08 18:43:57.421679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.083 [2024-10-08 18:43:57.421704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.083 qpair failed and we were unable to recover it. 00:33:29.083 [2024-10-08 18:43:57.421850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.083 [2024-10-08 18:43:57.421902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.083 qpair failed and we were unable to recover it. 00:33:29.083 [2024-10-08 18:43:57.422047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.083 [2024-10-08 18:43:57.422098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.083 qpair failed and we were unable to recover it. 00:33:29.083 [2024-10-08 18:43:57.422302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.083 [2024-10-08 18:43:57.422351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.083 qpair failed and we were unable to recover it. 00:33:29.083 [2024-10-08 18:43:57.422535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.083 [2024-10-08 18:43:57.422560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.083 qpair failed and we were unable to recover it. 
00:33:29.083 [2024-10-08 18:43:57.422779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.083 [2024-10-08 18:43:57.422831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.083 qpair failed and we were unable to recover it. 00:33:29.083 [2024-10-08 18:43:57.423073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.083 [2024-10-08 18:43:57.423124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.083 qpair failed and we were unable to recover it. 00:33:29.083 [2024-10-08 18:43:57.423264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.084 [2024-10-08 18:43:57.423317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.084 qpair failed and we were unable to recover it. 00:33:29.084 [2024-10-08 18:43:57.423490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.084 [2024-10-08 18:43:57.423514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.084 qpair failed and we were unable to recover it. 00:33:29.084 [2024-10-08 18:43:57.423765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.084 [2024-10-08 18:43:57.423817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.084 qpair failed and we were unable to recover it. 00:33:29.084 [2024-10-08 18:43:57.424032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.084 [2024-10-08 18:43:57.424085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.084 qpair failed and we were unable to recover it. 00:33:29.084 [2024-10-08 18:43:57.424208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.084 [2024-10-08 18:43:57.424273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.084 qpair failed and we were unable to recover it. 00:33:29.084 [2024-10-08 18:43:57.424410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.084 [2024-10-08 18:43:57.424450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.084 qpair failed and we were unable to recover it. 00:33:29.084 [2024-10-08 18:43:57.424618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.084 [2024-10-08 18:43:57.424663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.084 qpair failed and we were unable to recover it. 00:33:29.084 [2024-10-08 18:43:57.424913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.084 [2024-10-08 18:43:57.424965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.084 qpair failed and we were unable to recover it. 
00:33:29.084 [2024-10-08 18:43:57.425112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.084 [2024-10-08 18:43:57.425162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.084 qpair failed and we were unable to recover it. 00:33:29.084 [2024-10-08 18:43:57.425392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.084 [2024-10-08 18:43:57.425443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.084 qpair failed and we were unable to recover it. 00:33:29.084 [2024-10-08 18:43:57.425644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.084 [2024-10-08 18:43:57.425690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.084 qpair failed and we were unable to recover it. 00:33:29.084 [2024-10-08 18:43:57.425842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.084 [2024-10-08 18:43:57.425893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.084 qpair failed and we were unable to recover it. 00:33:29.084 [2024-10-08 18:43:57.426169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.084 [2024-10-08 18:43:57.426221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.084 qpair failed and we were unable to recover it. 00:33:29.084 [2024-10-08 18:43:57.426410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.084 [2024-10-08 18:43:57.426462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.084 qpair failed and we were unable to recover it. 00:33:29.084 [2024-10-08 18:43:57.426674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.084 [2024-10-08 18:43:57.426701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.084 qpair failed and we were unable to recover it. 00:33:29.084 [2024-10-08 18:43:57.426939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.084 [2024-10-08 18:43:57.426982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.084 qpair failed and we were unable to recover it. 00:33:29.084 [2024-10-08 18:43:57.427210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.084 [2024-10-08 18:43:57.427264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.084 qpair failed and we were unable to recover it. 00:33:29.084 [2024-10-08 18:43:57.427497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.084 [2024-10-08 18:43:57.427550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.084 qpair failed and we were unable to recover it. 
00:33:29.084 [2024-10-08 18:43:57.427668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.084 [2024-10-08 18:43:57.427694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.084 qpair failed and we were unable to recover it. 00:33:29.084 [2024-10-08 18:43:57.427916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.084 [2024-10-08 18:43:57.427971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.084 qpair failed and we were unable to recover it. 00:33:29.084 [2024-10-08 18:43:57.428204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.084 [2024-10-08 18:43:57.428255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.084 qpair failed and we were unable to recover it. 00:33:29.084 [2024-10-08 18:43:57.428403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.084 [2024-10-08 18:43:57.428452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.084 qpair failed and we were unable to recover it. 00:33:29.084 [2024-10-08 18:43:57.428630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.084 [2024-10-08 18:43:57.428659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.084 qpair failed and we were unable to recover it. 00:33:29.084 [2024-10-08 18:43:57.428789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.084 [2024-10-08 18:43:57.428847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.084 qpair failed and we were unable to recover it. 00:33:29.084 [2024-10-08 18:43:57.429124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.084 [2024-10-08 18:43:57.429179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.084 qpair failed and we were unable to recover it. 00:33:29.084 [2024-10-08 18:43:57.429368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.084 [2024-10-08 18:43:57.429417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.084 qpair failed and we were unable to recover it. 00:33:29.084 [2024-10-08 18:43:57.429654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.084 [2024-10-08 18:43:57.429680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.084 qpair failed and we were unable to recover it. 00:33:29.084 [2024-10-08 18:43:57.429863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.084 [2024-10-08 18:43:57.429889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.084 qpair failed and we were unable to recover it. 
00:33:29.084 [2024-10-08 18:43:57.430152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.084 [2024-10-08 18:43:57.430203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.084 qpair failed and we were unable to recover it. 00:33:29.084 [2024-10-08 18:43:57.430369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.084 [2024-10-08 18:43:57.430420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.084 qpair failed and we were unable to recover it. 00:33:29.084 [2024-10-08 18:43:57.430555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.084 [2024-10-08 18:43:57.430580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.084 qpair failed and we were unable to recover it. 00:33:29.084 [2024-10-08 18:43:57.430708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.084 [2024-10-08 18:43:57.430735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.084 qpair failed and we were unable to recover it. 00:33:29.084 [2024-10-08 18:43:57.430918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.084 [2024-10-08 18:43:57.430972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.084 qpair failed and we were unable to recover it. 00:33:29.084 [2024-10-08 18:43:57.431148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.084 [2024-10-08 18:43:57.431198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.084 qpair failed and we were unable to recover it. 00:33:29.084 [2024-10-08 18:43:57.431383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.084 [2024-10-08 18:43:57.431434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.084 qpair failed and we were unable to recover it. 00:33:29.084 [2024-10-08 18:43:57.431673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.084 [2024-10-08 18:43:57.431698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.084 qpair failed and we were unable to recover it. 00:33:29.084 [2024-10-08 18:43:57.431860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.084 [2024-10-08 18:43:57.431905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.084 qpair failed and we were unable to recover it. 00:33:29.084 [2024-10-08 18:43:57.432053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.084 [2024-10-08 18:43:57.432108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.084 qpair failed and we were unable to recover it. 
00:33:29.084 [2024-10-08 18:43:57.432374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.084 [2024-10-08 18:43:57.432425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.084 qpair failed and we were unable to recover it. 00:33:29.084 [2024-10-08 18:43:57.432535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.084 [2024-10-08 18:43:57.432560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.084 qpair failed and we were unable to recover it. 00:33:29.084 [2024-10-08 18:43:57.432762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.085 [2024-10-08 18:43:57.432822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.085 qpair failed and we were unable to recover it. 00:33:29.085 [2024-10-08 18:43:57.433014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.085 [2024-10-08 18:43:57.433063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.085 qpair failed and we were unable to recover it. 00:33:29.085 [2024-10-08 18:43:57.433255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.085 [2024-10-08 18:43:57.433307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.085 qpair failed and we were unable to recover it. 00:33:29.085 [2024-10-08 18:43:57.433512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.085 [2024-10-08 18:43:57.433536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.085 qpair failed and we were unable to recover it. 00:33:29.085 [2024-10-08 18:43:57.433671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.085 [2024-10-08 18:43:57.433696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.085 qpair failed and we were unable to recover it. 00:33:29.085 [2024-10-08 18:43:57.433951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.085 [2024-10-08 18:43:57.434003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.085 qpair failed and we were unable to recover it. 00:33:29.085 [2024-10-08 18:43:57.434148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.085 [2024-10-08 18:43:57.434198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.085 qpair failed and we were unable to recover it. 00:33:29.085 [2024-10-08 18:43:57.434415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.085 [2024-10-08 18:43:57.434463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.085 qpair failed and we were unable to recover it. 
00:33:29.085 [2024-10-08 18:43:57.434663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.085 [2024-10-08 18:43:57.434689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.085 qpair failed and we were unable to recover it. 00:33:29.085 [2024-10-08 18:43:57.434801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.085 [2024-10-08 18:43:57.434828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.085 qpair failed and we were unable to recover it. 00:33:29.085 [2024-10-08 18:43:57.435013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.085 [2024-10-08 18:43:57.435064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.085 qpair failed and we were unable to recover it. 00:33:29.085 [2024-10-08 18:43:57.435287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.085 [2024-10-08 18:43:57.435336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.085 qpair failed and we were unable to recover it. 00:33:29.085 [2024-10-08 18:43:57.435539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.085 [2024-10-08 18:43:57.435564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.085 qpair failed and we were unable to recover it. 00:33:29.085 [2024-10-08 18:43:57.435735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.085 [2024-10-08 18:43:57.435761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.085 qpair failed and we were unable to recover it. 00:33:29.085 [2024-10-08 18:43:57.435983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.085 [2024-10-08 18:43:57.436034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.085 qpair failed and we were unable to recover it. 00:33:29.085 [2024-10-08 18:43:57.436250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.085 [2024-10-08 18:43:57.436297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.085 qpair failed and we were unable to recover it. 00:33:29.085 [2024-10-08 18:43:57.436536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.085 [2024-10-08 18:43:57.436560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.085 qpair failed and we were unable to recover it. 00:33:29.085 [2024-10-08 18:43:57.436678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.085 [2024-10-08 18:43:57.436704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.085 qpair failed and we were unable to recover it. 
00:33:29.085 [2024-10-08 18:43:57.436887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.085 [2024-10-08 18:43:57.436943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.085 qpair failed and we were unable to recover it. 00:33:29.085 [2024-10-08 18:43:57.437152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.085 [2024-10-08 18:43:57.437201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.085 qpair failed and we were unable to recover it. 00:33:29.085 [2024-10-08 18:43:57.437420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.085 [2024-10-08 18:43:57.437468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.085 qpair failed and we were unable to recover it. 00:33:29.085 [2024-10-08 18:43:57.437624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.085 [2024-10-08 18:43:57.437664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.085 qpair failed and we were unable to recover it. 00:33:29.085 [2024-10-08 18:43:57.437803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.085 [2024-10-08 18:43:57.437842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.085 qpair failed and we were unable to recover it. 00:33:29.085 [2024-10-08 18:43:57.438025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.085 [2024-10-08 18:43:57.438078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.085 qpair failed and we were unable to recover it. 00:33:29.085 [2024-10-08 18:43:57.438279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.085 [2024-10-08 18:43:57.438339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.085 qpair failed and we were unable to recover it. 00:33:29.085 [2024-10-08 18:43:57.438585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.085 [2024-10-08 18:43:57.438609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.085 qpair failed and we were unable to recover it. 00:33:29.085 [2024-10-08 18:43:57.438859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.085 [2024-10-08 18:43:57.438886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.085 qpair failed and we were unable to recover it. 00:33:29.085 [2024-10-08 18:43:57.439052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.085 [2024-10-08 18:43:57.439108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.085 qpair failed and we were unable to recover it. 
00:33:29.085 [2024-10-08 18:43:57.439356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.085 [2024-10-08 18:43:57.439404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.085 qpair failed and we were unable to recover it. 00:33:29.085 [2024-10-08 18:43:57.439609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.085 [2024-10-08 18:43:57.439634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.085 qpair failed and we were unable to recover it. 00:33:29.085 [2024-10-08 18:43:57.439851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.085 [2024-10-08 18:43:57.439877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.085 qpair failed and we were unable to recover it. 00:33:29.085 [2024-10-08 18:43:57.440070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.085 [2024-10-08 18:43:57.440121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.085 qpair failed and we were unable to recover it. 00:33:29.085 [2024-10-08 18:43:57.440320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.085 [2024-10-08 18:43:57.440370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.085 qpair failed and we were unable to recover it. 00:33:29.085 [2024-10-08 18:43:57.440572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.085 [2024-10-08 18:43:57.440596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.085 qpair failed and we were unable to recover it. 00:33:29.085 [2024-10-08 18:43:57.440853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.085 [2024-10-08 18:43:57.440902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.085 qpair failed and we were unable to recover it. 00:33:29.085 [2024-10-08 18:43:57.441158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.085 [2024-10-08 18:43:57.441206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.085 qpair failed and we were unable to recover it. 00:33:29.085 [2024-10-08 18:43:57.441389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.085 [2024-10-08 18:43:57.441437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.085 qpair failed and we were unable to recover it. 00:33:29.085 [2024-10-08 18:43:57.441562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.085 [2024-10-08 18:43:57.441586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.085 qpair failed and we were unable to recover it. 
[... the same three-line failure for tqpair=0x7f18d0000b90 (addr=10.0.0.2, port=4420) — posix.c:1055:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it." — repeats about 200 more times between 2024-10-08 18:43:57.441809 and 18:43:57.489319 ...]
00:33:29.091 [2024-10-08 18:43:57.489502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.091 [2024-10-08 18:43:57.489527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.091 qpair failed and we were unable to recover it. 00:33:29.091 [2024-10-08 18:43:57.489721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.091 [2024-10-08 18:43:57.489762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.091 qpair failed and we were unable to recover it. 00:33:29.091 [2024-10-08 18:43:57.489967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.091 [2024-10-08 18:43:57.489992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.091 qpair failed and we were unable to recover it. 00:33:29.091 [2024-10-08 18:43:57.490143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.091 [2024-10-08 18:43:57.490168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.091 qpair failed and we were unable to recover it. 00:33:29.091 [2024-10-08 18:43:57.490421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.091 [2024-10-08 18:43:57.490445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.091 qpair failed and we were unable to recover it. 00:33:29.091 [2024-10-08 18:43:57.490573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.091 [2024-10-08 18:43:57.490597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.091 qpair failed and we were unable to recover it. 00:33:29.091 [2024-10-08 18:43:57.490860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.091 [2024-10-08 18:43:57.490911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.091 qpair failed and we were unable to recover it. 00:33:29.091 [2024-10-08 18:43:57.491155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.091 [2024-10-08 18:43:57.491205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.091 qpair failed and we were unable to recover it. 00:33:29.091 [2024-10-08 18:43:57.491444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.091 [2024-10-08 18:43:57.491494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.091 qpair failed and we were unable to recover it. 00:33:29.091 [2024-10-08 18:43:57.491726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.091 [2024-10-08 18:43:57.491752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.091 qpair failed and we were unable to recover it. 
00:33:29.091 [2024-10-08 18:43:57.491936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.091 [2024-10-08 18:43:57.491995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.091 qpair failed and we were unable to recover it. 00:33:29.091 [2024-10-08 18:43:57.492197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.091 [2024-10-08 18:43:57.492246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.091 qpair failed and we were unable to recover it. 00:33:29.091 [2024-10-08 18:43:57.492508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.091 [2024-10-08 18:43:57.492561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.091 qpair failed and we were unable to recover it. 00:33:29.091 [2024-10-08 18:43:57.492699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.091 [2024-10-08 18:43:57.492725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.091 qpair failed and we were unable to recover it. 00:33:29.091 [2024-10-08 18:43:57.492859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.091 [2024-10-08 18:43:57.492913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.091 qpair failed and we were unable to recover it. 00:33:29.091 [2024-10-08 18:43:57.493127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.091 [2024-10-08 18:43:57.493176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.091 qpair failed and we were unable to recover it. 00:33:29.091 [2024-10-08 18:43:57.493386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.091 [2024-10-08 18:43:57.493435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.091 qpair failed and we were unable to recover it. 00:33:29.091 [2024-10-08 18:43:57.493627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.091 [2024-10-08 18:43:57.493673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.091 qpair failed and we were unable to recover it. 00:33:29.091 [2024-10-08 18:43:57.493826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.091 [2024-10-08 18:43:57.493879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.091 qpair failed and we were unable to recover it. 00:33:29.091 [2024-10-08 18:43:57.494044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.091 [2024-10-08 18:43:57.494095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.091 qpair failed and we were unable to recover it. 
00:33:29.091 [2024-10-08 18:43:57.494274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.091 [2024-10-08 18:43:57.494326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.091 qpair failed and we were unable to recover it. 00:33:29.091 [2024-10-08 18:43:57.494464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.091 [2024-10-08 18:43:57.494489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.091 qpair failed and we were unable to recover it. 00:33:29.091 [2024-10-08 18:43:57.494699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.091 [2024-10-08 18:43:57.494725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.091 qpair failed and we were unable to recover it. 00:33:29.091 [2024-10-08 18:43:57.494950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.091 [2024-10-08 18:43:57.495002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.091 qpair failed and we were unable to recover it. 00:33:29.091 [2024-10-08 18:43:57.495257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.091 [2024-10-08 18:43:57.495306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.091 qpair failed and we were unable to recover it. 00:33:29.092 [2024-10-08 18:43:57.495480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.092 [2024-10-08 18:43:57.495504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.092 qpair failed and we were unable to recover it. 00:33:29.092 [2024-10-08 18:43:57.495671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.092 [2024-10-08 18:43:57.495702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.092 qpair failed and we were unable to recover it. 00:33:29.092 [2024-10-08 18:43:57.495844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.092 [2024-10-08 18:43:57.495899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.092 qpair failed and we were unable to recover it. 00:33:29.092 [2024-10-08 18:43:57.496134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.092 [2024-10-08 18:43:57.496183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.092 qpair failed and we were unable to recover it. 00:33:29.092 [2024-10-08 18:43:57.496459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.092 [2024-10-08 18:43:57.496510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.092 qpair failed and we were unable to recover it. 
00:33:29.092 [2024-10-08 18:43:57.496709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.092 [2024-10-08 18:43:57.496734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.092 qpair failed and we were unable to recover it. 00:33:29.092 [2024-10-08 18:43:57.496937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.092 [2024-10-08 18:43:57.496986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.092 qpair failed and we were unable to recover it. 00:33:29.092 [2024-10-08 18:43:57.497153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.092 [2024-10-08 18:43:57.497202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.092 qpair failed and we were unable to recover it. 00:33:29.092 [2024-10-08 18:43:57.497420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.092 [2024-10-08 18:43:57.497468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.092 qpair failed and we were unable to recover it. 00:33:29.092 [2024-10-08 18:43:57.497660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.092 [2024-10-08 18:43:57.497686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.092 qpair failed and we were unable to recover it. 00:33:29.092 [2024-10-08 18:43:57.497887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.092 [2024-10-08 18:43:57.497940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.092 qpair failed and we were unable to recover it. 00:33:29.092 [2024-10-08 18:43:57.498145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.092 [2024-10-08 18:43:57.498194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.092 qpair failed and we were unable to recover it. 00:33:29.092 [2024-10-08 18:43:57.498446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.092 [2024-10-08 18:43:57.498495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.092 qpair failed and we were unable to recover it. 00:33:29.092 [2024-10-08 18:43:57.498692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.092 [2024-10-08 18:43:57.498736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.092 qpair failed and we were unable to recover it. 00:33:29.092 [2024-10-08 18:43:57.498909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.092 [2024-10-08 18:43:57.498961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.092 qpair failed and we were unable to recover it. 
00:33:29.092 [2024-10-08 18:43:57.499163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.092 [2024-10-08 18:43:57.499214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.092 qpair failed and we were unable to recover it. 00:33:29.092 [2024-10-08 18:43:57.499425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.092 [2024-10-08 18:43:57.499475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.092 qpair failed and we were unable to recover it. 00:33:29.092 [2024-10-08 18:43:57.499747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.092 [2024-10-08 18:43:57.499800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.092 qpair failed and we were unable to recover it. 00:33:29.092 [2024-10-08 18:43:57.500038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.092 [2024-10-08 18:43:57.500088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.092 qpair failed and we were unable to recover it. 00:33:29.092 [2024-10-08 18:43:57.500283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.092 [2024-10-08 18:43:57.500346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.092 qpair failed and we were unable to recover it. 00:33:29.092 [2024-10-08 18:43:57.500581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.092 [2024-10-08 18:43:57.500606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.092 qpair failed and we were unable to recover it. 00:33:29.092 [2024-10-08 18:43:57.500865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.092 [2024-10-08 18:43:57.500915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.092 qpair failed and we were unable to recover it. 00:33:29.092 [2024-10-08 18:43:57.501059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.092 [2024-10-08 18:43:57.501112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.092 qpair failed and we were unable to recover it. 00:33:29.092 [2024-10-08 18:43:57.501285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.092 [2024-10-08 18:43:57.501337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.092 qpair failed and we were unable to recover it. 00:33:29.092 [2024-10-08 18:43:57.501502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.092 [2024-10-08 18:43:57.501526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.092 qpair failed and we were unable to recover it. 
00:33:29.092 [2024-10-08 18:43:57.501710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.092 [2024-10-08 18:43:57.501737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.092 qpair failed and we were unable to recover it. 00:33:29.092 [2024-10-08 18:43:57.501888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.092 [2024-10-08 18:43:57.501937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.092 qpair failed and we were unable to recover it. 00:33:29.092 [2024-10-08 18:43:57.502065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.092 [2024-10-08 18:43:57.502090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.092 qpair failed and we were unable to recover it. 00:33:29.092 [2024-10-08 18:43:57.502279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.092 [2024-10-08 18:43:57.502314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.092 qpair failed and we were unable to recover it. 00:33:29.092 [2024-10-08 18:43:57.502594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.092 [2024-10-08 18:43:57.502619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.092 qpair failed and we were unable to recover it. 00:33:29.092 [2024-10-08 18:43:57.502813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.092 [2024-10-08 18:43:57.502839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.092 qpair failed and we were unable to recover it. 00:33:29.092 [2024-10-08 18:43:57.503104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.092 [2024-10-08 18:43:57.503153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.092 qpair failed and we were unable to recover it. 00:33:29.092 [2024-10-08 18:43:57.503303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.092 [2024-10-08 18:43:57.503356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.092 qpair failed and we were unable to recover it. 00:33:29.092 [2024-10-08 18:43:57.503603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.092 [2024-10-08 18:43:57.503628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.092 qpair failed and we were unable to recover it. 00:33:29.092 [2024-10-08 18:43:57.503848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.092 [2024-10-08 18:43:57.503895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.092 qpair failed and we were unable to recover it. 
00:33:29.092 [2024-10-08 18:43:57.504098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.092 [2024-10-08 18:43:57.504142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.092 qpair failed and we were unable to recover it. 00:33:29.092 [2024-10-08 18:43:57.504361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.092 [2024-10-08 18:43:57.504410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.092 qpair failed and we were unable to recover it. 00:33:29.092 [2024-10-08 18:43:57.504589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.092 [2024-10-08 18:43:57.504614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.092 qpair failed and we were unable to recover it. 00:33:29.092 [2024-10-08 18:43:57.504798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.092 [2024-10-08 18:43:57.504848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.092 qpair failed and we were unable to recover it. 00:33:29.092 [2024-10-08 18:43:57.505000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.092 [2024-10-08 18:43:57.505056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.092 qpair failed and we were unable to recover it. 00:33:29.093 [2024-10-08 18:43:57.505239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.093 [2024-10-08 18:43:57.505288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.093 qpair failed and we were unable to recover it. 00:33:29.093 [2024-10-08 18:43:57.505471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.093 [2024-10-08 18:43:57.505500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.093 qpair failed and we were unable to recover it. 00:33:29.093 [2024-10-08 18:43:57.505646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.093 [2024-10-08 18:43:57.505678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.093 qpair failed and we were unable to recover it. 00:33:29.093 [2024-10-08 18:43:57.505906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.093 [2024-10-08 18:43:57.505955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.093 qpair failed and we were unable to recover it. 00:33:29.093 [2024-10-08 18:43:57.506188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.093 [2024-10-08 18:43:57.506240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.093 qpair failed and we were unable to recover it. 
00:33:29.093 [2024-10-08 18:43:57.506444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.093 [2024-10-08 18:43:57.506496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.093 qpair failed and we were unable to recover it. 00:33:29.093 [2024-10-08 18:43:57.506750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.093 [2024-10-08 18:43:57.506799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.093 qpair failed and we were unable to recover it. 00:33:29.093 [2024-10-08 18:43:57.507035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.093 [2024-10-08 18:43:57.507079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.093 qpair failed and we were unable to recover it. 00:33:29.093 [2024-10-08 18:43:57.507280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.093 [2024-10-08 18:43:57.507322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.093 qpair failed and we were unable to recover it. 00:33:29.093 [2024-10-08 18:43:57.507497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.093 [2024-10-08 18:43:57.507521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.093 qpair failed and we were unable to recover it. 00:33:29.093 [2024-10-08 18:43:57.507673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.093 [2024-10-08 18:43:57.507715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.093 qpair failed and we were unable to recover it. 00:33:29.093 [2024-10-08 18:43:57.507923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.093 [2024-10-08 18:43:57.507980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.093 qpair failed and we were unable to recover it. 00:33:29.093 [2024-10-08 18:43:57.508163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.093 [2024-10-08 18:43:57.508211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.093 qpair failed and we were unable to recover it. 00:33:29.093 [2024-10-08 18:43:57.508420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.093 [2024-10-08 18:43:57.508445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.093 qpair failed and we were unable to recover it. 00:33:29.093 [2024-10-08 18:43:57.508601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.093 [2024-10-08 18:43:57.508626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.093 qpair failed and we were unable to recover it. 
00:33:29.093 [2024-10-08 18:43:57.508854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.093 [2024-10-08 18:43:57.508907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.093 qpair failed and we were unable to recover it. 00:33:29.093 [2024-10-08 18:43:57.509141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.093 [2024-10-08 18:43:57.509189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.093 qpair failed and we were unable to recover it. 00:33:29.093 [2024-10-08 18:43:57.509376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.093 [2024-10-08 18:43:57.509426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.093 qpair failed and we were unable to recover it. 00:33:29.093 [2024-10-08 18:43:57.509613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.093 [2024-10-08 18:43:57.509658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.093 qpair failed and we were unable to recover it. 00:33:29.093 [2024-10-08 18:43:57.509830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.093 [2024-10-08 18:43:57.509882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.093 qpair failed and we were unable to recover it. 00:33:29.093 [2024-10-08 18:43:57.510068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.093 [2024-10-08 18:43:57.510119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.093 qpair failed and we were unable to recover it. 00:33:29.093 [2024-10-08 18:43:57.510374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.093 [2024-10-08 18:43:57.510426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.093 qpair failed and we were unable to recover it. 00:33:29.093 [2024-10-08 18:43:57.510576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.093 [2024-10-08 18:43:57.510606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.093 qpair failed and we were unable to recover it. 00:33:29.093 [2024-10-08 18:43:57.510800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.093 [2024-10-08 18:43:57.510850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.093 qpair failed and we were unable to recover it. 00:33:29.093 [2024-10-08 18:43:57.511110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.093 [2024-10-08 18:43:57.511160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.093 qpair failed and we were unable to recover it. 
00:33:29.093 [2024-10-08 18:43:57.511419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.093 [2024-10-08 18:43:57.511466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.093 qpair failed and we were unable to recover it. 00:33:29.093 [2024-10-08 18:43:57.511706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.093 [2024-10-08 18:43:57.511733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.093 qpair failed and we were unable to recover it. 00:33:29.093 [2024-10-08 18:43:57.511880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.093 [2024-10-08 18:43:57.511933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.093 qpair failed and we were unable to recover it. 00:33:29.093 [2024-10-08 18:43:57.512194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.093 [2024-10-08 18:43:57.512245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.093 qpair failed and we were unable to recover it. 00:33:29.093 [2024-10-08 18:43:57.512419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.093 [2024-10-08 18:43:57.512469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.093 qpair failed and we were unable to recover it. 00:33:29.093 [2024-10-08 18:43:57.512645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.093 [2024-10-08 18:43:57.512698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.093 qpair failed and we were unable to recover it. 00:33:29.093 [2024-10-08 18:43:57.512922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.093 [2024-10-08 18:43:57.512947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.093 qpair failed and we were unable to recover it. 00:33:29.093 [2024-10-08 18:43:57.513199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.093 [2024-10-08 18:43:57.513249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.093 qpair failed and we were unable to recover it. 00:33:29.093 [2024-10-08 18:43:57.513438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.093 [2024-10-08 18:43:57.513487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.093 qpair failed and we were unable to recover it. 00:33:29.093 [2024-10-08 18:43:57.513682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.093 [2024-10-08 18:43:57.513724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.093 qpair failed and we were unable to recover it. 
00:33:29.093 [2024-10-08 18:43:57.513895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.093 [2024-10-08 18:43:57.513942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.093 qpair failed and we were unable to recover it. 00:33:29.093 [2024-10-08 18:43:57.514172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.093 [2024-10-08 18:43:57.514220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.093 qpair failed and we were unable to recover it. 00:33:29.093 [2024-10-08 18:43:57.514487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.093 [2024-10-08 18:43:57.514538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.093 qpair failed and we were unable to recover it. 00:33:29.093 [2024-10-08 18:43:57.514755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.093 [2024-10-08 18:43:57.514781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.093 qpair failed and we were unable to recover it. 00:33:29.093 [2024-10-08 18:43:57.514986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.093 [2024-10-08 18:43:57.515048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.094 qpair failed and we were unable to recover it. 00:33:29.094 [2024-10-08 18:43:57.515276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.094 [2024-10-08 18:43:57.515325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.094 qpair failed and we were unable to recover it. 00:33:29.094 [2024-10-08 18:43:57.515489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.094 [2024-10-08 18:43:57.515518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.094 qpair failed and we were unable to recover it. 00:33:29.094 [2024-10-08 18:43:57.515777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.094 [2024-10-08 18:43:57.515827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.094 qpair failed and we were unable to recover it. 00:33:29.094 [2024-10-08 18:43:57.516052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.094 [2024-10-08 18:43:57.516105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.094 qpair failed and we were unable to recover it. 00:33:29.094 [2024-10-08 18:43:57.516334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.094 [2024-10-08 18:43:57.516385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.094 qpair failed and we were unable to recover it. 
00:33:29.094 [2024-10-08 18:43:57.516622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.094 [2024-10-08 18:43:57.516667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.094 qpair failed and we were unable to recover it. 00:33:29.094 [2024-10-08 18:43:57.516920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.094 [2024-10-08 18:43:57.516984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.094 qpair failed and we were unable to recover it. 00:33:29.094 [2024-10-08 18:43:57.517243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.094 [2024-10-08 18:43:57.517292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.094 qpair failed and we were unable to recover it. 00:33:29.094 [2024-10-08 18:43:57.517474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.094 [2024-10-08 18:43:57.517524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.094 qpair failed and we were unable to recover it. 00:33:29.094 [2024-10-08 18:43:57.517739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.094 [2024-10-08 18:43:57.517764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.094 qpair failed and we were unable to recover it. 00:33:29.094 [2024-10-08 18:43:57.517966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.094 [2024-10-08 18:43:57.518029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.094 qpair failed and we were unable to recover it. 00:33:29.094 [2024-10-08 18:43:57.518230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.094 [2024-10-08 18:43:57.518281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.094 qpair failed and we were unable to recover it. 00:33:29.094 [2024-10-08 18:43:57.518474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.094 [2024-10-08 18:43:57.518499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.094 qpair failed and we were unable to recover it. 00:33:29.094 [2024-10-08 18:43:57.518698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.094 [2024-10-08 18:43:57.518745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.094 qpair failed and we were unable to recover it. 00:33:29.094 [2024-10-08 18:43:57.518982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.094 [2024-10-08 18:43:57.519038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.094 qpair failed and we were unable to recover it. 
00:33:29.094 [2024-10-08 18:43:57.519280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.094 [2024-10-08 18:43:57.519333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.094 qpair failed and we were unable to recover it. 00:33:29.094 [2024-10-08 18:43:57.519480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.094 [2024-10-08 18:43:57.519510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.094 qpair failed and we were unable to recover it. 00:33:29.094 [2024-10-08 18:43:57.519769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.094 [2024-10-08 18:43:57.519822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.094 qpair failed and we were unable to recover it. 00:33:29.094 [2024-10-08 18:43:57.520006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.094 [2024-10-08 18:43:57.520059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.094 qpair failed and we were unable to recover it. 00:33:29.094 [2024-10-08 18:43:57.520209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.094 [2024-10-08 18:43:57.520259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.094 qpair failed and we were unable to recover it. 00:33:29.094 [2024-10-08 18:43:57.520362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.094 [2024-10-08 18:43:57.520387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.094 qpair failed and we were unable to recover it. 00:33:29.094 [2024-10-08 18:43:57.520551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.094 [2024-10-08 18:43:57.520577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.094 qpair failed and we were unable to recover it. 00:33:29.094 [2024-10-08 18:43:57.520787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.094 [2024-10-08 18:43:57.520838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.094 qpair failed and we were unable to recover it. 00:33:29.094 [2024-10-08 18:43:57.521052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.094 [2024-10-08 18:43:57.521100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.094 qpair failed and we were unable to recover it. 00:33:29.094 [2024-10-08 18:43:57.521302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.094 [2024-10-08 18:43:57.521353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.094 qpair failed and we were unable to recover it. 
00:33:29.094 [2024-10-08 18:43:57.521601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.094 [2024-10-08 18:43:57.521625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.094 qpair failed and we were unable to recover it. 00:33:29.094 [2024-10-08 18:43:57.521835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.094 [2024-10-08 18:43:57.521884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.094 qpair failed and we were unable to recover it. 00:33:29.094 [2024-10-08 18:43:57.522110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.094 [2024-10-08 18:43:57.522161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.094 qpair failed and we were unable to recover it. 00:33:29.094 [2024-10-08 18:43:57.522350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.094 [2024-10-08 18:43:57.522392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.094 qpair failed and we were unable to recover it. 00:33:29.094 [2024-10-08 18:43:57.522585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.094 [2024-10-08 18:43:57.522610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.094 qpair failed and we were unable to recover it. 00:33:29.094 [2024-10-08 18:43:57.522792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.094 [2024-10-08 18:43:57.522843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.094 qpair failed and we were unable to recover it. 00:33:29.094 [2024-10-08 18:43:57.523033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.094 [2024-10-08 18:43:57.523084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.094 qpair failed and we were unable to recover it. 00:33:29.094 [2024-10-08 18:43:57.523316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.094 [2024-10-08 18:43:57.523358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.094 qpair failed and we were unable to recover it. 00:33:29.094 [2024-10-08 18:43:57.523558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.094 [2024-10-08 18:43:57.523582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.094 qpair failed and we were unable to recover it. 00:33:29.094 [2024-10-08 18:43:57.523715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.094 [2024-10-08 18:43:57.523740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.094 qpair failed and we were unable to recover it. 
00:33:29.094 [2024-10-08 18:43:57.523871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.094 [2024-10-08 18:43:57.523923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.094 qpair failed and we were unable to recover it. 00:33:29.094 [2024-10-08 18:43:57.524062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.094 [2024-10-08 18:43:57.524112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.094 qpair failed and we were unable to recover it. 00:33:29.094 [2024-10-08 18:43:57.524265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.094 [2024-10-08 18:43:57.524317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.094 qpair failed and we were unable to recover it. 00:33:29.094 [2024-10-08 18:43:57.524563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.094 [2024-10-08 18:43:57.524589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.095 qpair failed and we were unable to recover it. 00:33:29.095 [2024-10-08 18:43:57.524733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.095 [2024-10-08 18:43:57.524786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.095 qpair failed and we were unable to recover it. 00:33:29.095 [2024-10-08 18:43:57.524930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.095 [2024-10-08 18:43:57.524981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.095 qpair failed and we were unable to recover it. 00:33:29.095 [2024-10-08 18:43:57.525109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.095 [2024-10-08 18:43:57.525135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.095 qpair failed and we were unable to recover it. 00:33:29.095 [2024-10-08 18:43:57.525266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.095 [2024-10-08 18:43:57.525294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.095 qpair failed and we were unable to recover it. 00:33:29.095 [2024-10-08 18:43:57.525530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.095 [2024-10-08 18:43:57.525556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.095 qpair failed and we were unable to recover it. 00:33:29.095 [2024-10-08 18:43:57.525755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.095 [2024-10-08 18:43:57.525810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.095 qpair failed and we were unable to recover it. 
00:33:29.095 [2024-10-08 18:43:57.525991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.095 [2024-10-08 18:43:57.526042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.095 qpair failed and we were unable to recover it. 00:33:29.095 [2024-10-08 18:43:57.526208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.095 [2024-10-08 18:43:57.526235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.095 qpair failed and we were unable to recover it. 00:33:29.095 [2024-10-08 18:43:57.526407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.095 [2024-10-08 18:43:57.526434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.095 qpair failed and we were unable to recover it. 00:33:29.095 [2024-10-08 18:43:57.526585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.095 [2024-10-08 18:43:57.526611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.095 qpair failed and we were unable to recover it. 00:33:29.095 [2024-10-08 18:43:57.526739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.095 [2024-10-08 18:43:57.526830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.095 qpair failed and we were unable to recover it. 00:33:29.095 [2024-10-08 18:43:57.527026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.095 [2024-10-08 18:43:57.527085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.095 qpair failed and we were unable to recover it. 00:33:29.095 [2024-10-08 18:43:57.527293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.095 [2024-10-08 18:43:57.527344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.095 qpair failed and we were unable to recover it. 00:33:29.095 [2024-10-08 18:43:57.527570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.095 [2024-10-08 18:43:57.527596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.095 qpair failed and we were unable to recover it. 00:33:29.095 [2024-10-08 18:43:57.527765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.095 [2024-10-08 18:43:57.527817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.095 qpair failed and we were unable to recover it. 00:33:29.095 [2024-10-08 18:43:57.528063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.095 [2024-10-08 18:43:57.528110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.095 qpair failed and we were unable to recover it. 
00:33:29.095 [2024-10-08 18:43:57.528290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.095 [2024-10-08 18:43:57.528338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.095 qpair failed and we were unable to recover it. 00:33:29.095 [2024-10-08 18:43:57.528465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.095 [2024-10-08 18:43:57.528491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.095 qpair failed and we were unable to recover it. 00:33:29.095 [2024-10-08 18:43:57.528735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.095 [2024-10-08 18:43:57.528762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.095 qpair failed and we were unable to recover it. 00:33:29.095 [2024-10-08 18:43:57.528900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.095 [2024-10-08 18:43:57.528959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.095 qpair failed and we were unable to recover it. 00:33:29.095 [2024-10-08 18:43:57.529139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.095 [2024-10-08 18:43:57.529185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.095 qpair failed and we were unable to recover it. 00:33:29.095 [2024-10-08 18:43:57.529327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.095 [2024-10-08 18:43:57.529372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.095 qpair failed and we were unable to recover it. 00:33:29.095 [2024-10-08 18:43:57.529476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.095 [2024-10-08 18:43:57.529502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.095 qpair failed and we were unable to recover it. 00:33:29.095 [2024-10-08 18:43:57.529625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.095 [2024-10-08 18:43:57.529656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.095 qpair failed and we were unable to recover it. 00:33:29.095 [2024-10-08 18:43:57.529820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.095 [2024-10-08 18:43:57.529847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.095 qpair failed and we were unable to recover it. 00:33:29.095 [2024-10-08 18:43:57.530027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.095 [2024-10-08 18:43:57.530074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.095 qpair failed and we were unable to recover it. 
00:33:29.095 [2024-10-08 18:43:57.530236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.095 [2024-10-08 18:43:57.530283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.095 qpair failed and we were unable to recover it. 00:33:29.095 [2024-10-08 18:43:57.530478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.095 [2024-10-08 18:43:57.530504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.095 qpair failed and we were unable to recover it. 00:33:29.095 [2024-10-08 18:43:57.530655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.095 [2024-10-08 18:43:57.530701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.095 qpair failed and we were unable to recover it. 00:33:29.095 [2024-10-08 18:43:57.530900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.095 [2024-10-08 18:43:57.530931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.095 qpair failed and we were unable to recover it. 00:33:29.095 [2024-10-08 18:43:57.531122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.095 [2024-10-08 18:43:57.531169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.095 qpair failed and we were unable to recover it. 00:33:29.095 [2024-10-08 18:43:57.531452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.095 [2024-10-08 18:43:57.531478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.096 qpair failed and we were unable to recover it. 00:33:29.096 [2024-10-08 18:43:57.531727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.096 [2024-10-08 18:43:57.531763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.096 qpair failed and we were unable to recover it. 00:33:29.096 [2024-10-08 18:43:57.531912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.096 [2024-10-08 18:43:57.531962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.096 qpair failed and we were unable to recover it. 00:33:29.096 [2024-10-08 18:43:57.532190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.096 [2024-10-08 18:43:57.532236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.096 qpair failed and we were unable to recover it. 00:33:29.096 [2024-10-08 18:43:57.532411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.096 [2024-10-08 18:43:57.532457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.096 qpair failed and we were unable to recover it. 
00:33:29.096 [2024-10-08 18:43:57.532616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.096 [2024-10-08 18:43:57.532643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.096 qpair failed and we were unable to recover it. 00:33:29.096 [2024-10-08 18:43:57.532816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.096 [2024-10-08 18:43:57.532844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.096 qpair failed and we were unable to recover it. 00:33:29.096 [2024-10-08 18:43:57.532976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.096 [2024-10-08 18:43:57.533043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.096 qpair failed and we were unable to recover it. 00:33:29.096 [2024-10-08 18:43:57.533256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.096 [2024-10-08 18:43:57.533302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.096 qpair failed and we were unable to recover it. 00:33:29.096 [2024-10-08 18:43:57.533461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.096 [2024-10-08 18:43:57.533488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.096 qpair failed and we were unable to recover it. 00:33:29.096 [2024-10-08 18:43:57.533617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.096 [2024-10-08 18:43:57.533643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.096 qpair failed and we were unable to recover it. 00:33:29.096 [2024-10-08 18:43:57.533788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.096 [2024-10-08 18:43:57.533835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.096 qpair failed and we were unable to recover it. 00:33:29.096 [2024-10-08 18:43:57.533992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.096 [2024-10-08 18:43:57.534026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.096 qpair failed and we were unable to recover it. 00:33:29.096 [2024-10-08 18:43:57.534188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.096 [2024-10-08 18:43:57.534214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.096 qpair failed and we were unable to recover it. 00:33:29.096 [2024-10-08 18:43:57.534405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.096 [2024-10-08 18:43:57.534431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.096 qpair failed and we were unable to recover it. 
00:33:29.096 [2024-10-08 18:43:57.534583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.096 [2024-10-08 18:43:57.534609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.096 qpair failed and we were unable to recover it. 00:33:29.096 [2024-10-08 18:43:57.534854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.096 [2024-10-08 18:43:57.534898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.096 qpair failed and we were unable to recover it. 00:33:29.096 [2024-10-08 18:43:57.535052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.096 [2024-10-08 18:43:57.535098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.096 qpair failed and we were unable to recover it. 00:33:29.096 [2024-10-08 18:43:57.535280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.096 [2024-10-08 18:43:57.535341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.096 qpair failed and we were unable to recover it. 00:33:29.096 [2024-10-08 18:43:57.535589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.096 [2024-10-08 18:43:57.535616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.096 qpair failed and we were unable to recover it. 00:33:29.096 [2024-10-08 18:43:57.535748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.096 [2024-10-08 18:43:57.535798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.096 qpair failed and we were unable to recover it. 00:33:29.096 [2024-10-08 18:43:57.535974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.096 [2024-10-08 18:43:57.536036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.096 qpair failed and we were unable to recover it. 00:33:29.096 [2024-10-08 18:43:57.536252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.096 [2024-10-08 18:43:57.536302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.096 qpair failed and we were unable to recover it. 00:33:29.096 [2024-10-08 18:43:57.536523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.096 [2024-10-08 18:43:57.536549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.096 qpair failed and we were unable to recover it. 00:33:29.096 [2024-10-08 18:43:57.536783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.096 [2024-10-08 18:43:57.536830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.096 qpair failed and we were unable to recover it. 
00:33:29.096 [2024-10-08 18:43:57.537025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.096 [2024-10-08 18:43:57.537072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.096 qpair failed and we were unable to recover it. 00:33:29.096 [2024-10-08 18:43:57.537207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.096 [2024-10-08 18:43:57.537258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.096 qpair failed and we were unable to recover it. 00:33:29.096 [2024-10-08 18:43:57.537449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.096 [2024-10-08 18:43:57.537498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.096 qpair failed and we were unable to recover it. 00:33:29.096 [2024-10-08 18:43:57.537663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.096 [2024-10-08 18:43:57.537730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.096 qpair failed and we were unable to recover it. 00:33:29.096 [2024-10-08 18:43:57.537964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.096 [2024-10-08 18:43:57.538013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.096 qpair failed and we were unable to recover it. 00:33:29.096 [2024-10-08 18:43:57.538242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.096 [2024-10-08 18:43:57.538288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.096 qpair failed and we were unable to recover it. 00:33:29.096 [2024-10-08 18:43:57.538456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.096 [2024-10-08 18:43:57.538483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.096 qpair failed and we were unable to recover it. 00:33:29.096 [2024-10-08 18:43:57.538653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.096 [2024-10-08 18:43:57.538680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.096 qpair failed and we were unable to recover it. 00:33:29.096 [2024-10-08 18:43:57.538846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.096 [2024-10-08 18:43:57.538873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.096 qpair failed and we were unable to recover it. 00:33:29.096 [2024-10-08 18:43:57.539076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.096 [2024-10-08 18:43:57.539127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.096 qpair failed and we were unable to recover it. 
00:33:29.096 [2024-10-08 18:43:57.539369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.096 [2024-10-08 18:43:57.539421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.096 qpair failed and we were unable to recover it. 00:33:29.096 [2024-10-08 18:43:57.539571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.096 [2024-10-08 18:43:57.539598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.096 qpair failed and we were unable to recover it. 00:33:29.096 [2024-10-08 18:43:57.539797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.096 [2024-10-08 18:43:57.539824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.096 qpair failed and we were unable to recover it. 00:33:29.096 [2024-10-08 18:43:57.539954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.096 [2024-10-08 18:43:57.540007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.096 qpair failed and we were unable to recover it. 00:33:29.096 [2024-10-08 18:43:57.540250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.097 [2024-10-08 18:43:57.540296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.097 qpair failed and we were unable to recover it. 00:33:29.097 [2024-10-08 18:43:57.540502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.097 [2024-10-08 18:43:57.540563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.097 qpair failed and we were unable to recover it. 00:33:29.097 [2024-10-08 18:43:57.540779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.097 [2024-10-08 18:43:57.540830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.097 qpair failed and we were unable to recover it. 00:33:29.097 [2024-10-08 18:43:57.541016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.097 [2024-10-08 18:43:57.541066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.097 qpair failed and we were unable to recover it. 00:33:29.097 [2024-10-08 18:43:57.541258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.097 [2024-10-08 18:43:57.541305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.097 qpair failed and we were unable to recover it. 00:33:29.097 [2024-10-08 18:43:57.541476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.097 [2024-10-08 18:43:57.541503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.097 qpair failed and we were unable to recover it. 
00:33:29.097 [2024-10-08 18:43:57.541636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.097 [2024-10-08 18:43:57.541695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.097 qpair failed and we were unable to recover it. 00:33:29.097 [2024-10-08 18:43:57.541832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.097 [2024-10-08 18:43:57.541879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.097 qpair failed and we were unable to recover it. 00:33:29.097 [2024-10-08 18:43:57.542033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.097 [2024-10-08 18:43:57.542079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.097 qpair failed and we were unable to recover it. 00:33:29.097 [2024-10-08 18:43:57.542246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.097 [2024-10-08 18:43:57.542298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.097 qpair failed and we were unable to recover it. 00:33:29.097 [2024-10-08 18:43:57.542465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.097 [2024-10-08 18:43:57.542492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.097 qpair failed and we were unable to recover it. 00:33:29.097 [2024-10-08 18:43:57.542733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.097 [2024-10-08 18:43:57.542782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.097 qpair failed and we were unable to recover it. 00:33:29.097 [2024-10-08 18:43:57.542897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.097 [2024-10-08 18:43:57.542945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.097 qpair failed and we were unable to recover it. 00:33:29.097 [2024-10-08 18:43:57.543105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.097 [2024-10-08 18:43:57.543132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.097 qpair failed and we were unable to recover it. 00:33:29.097 [2024-10-08 18:43:57.543351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.097 [2024-10-08 18:43:57.543377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.097 qpair failed and we were unable to recover it. 00:33:29.097 [2024-10-08 18:43:57.543559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.097 [2024-10-08 18:43:57.543585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.097 qpair failed and we were unable to recover it. 
00:33:29.097 [2024-10-08 18:43:57.543761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.097 [2024-10-08 18:43:57.543810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.097 qpair failed and we were unable to recover it. 00:33:29.097 [2024-10-08 18:43:57.544012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.097 [2024-10-08 18:43:57.544059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.097 qpair failed and we were unable to recover it. 00:33:29.097 [2024-10-08 18:43:57.544211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.097 [2024-10-08 18:43:57.544262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.097 qpair failed and we were unable to recover it. 00:33:29.097 [2024-10-08 18:43:57.544445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.097 [2024-10-08 18:43:57.544496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.097 qpair failed and we were unable to recover it. 00:33:29.097 [2024-10-08 18:43:57.544727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.097 [2024-10-08 18:43:57.544754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.097 qpair failed and we were unable to recover it. 00:33:29.097 [2024-10-08 18:43:57.544941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.097 [2024-10-08 18:43:57.544989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.097 qpair failed and we were unable to recover it. 00:33:29.097 [2024-10-08 18:43:57.545181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.097 [2024-10-08 18:43:57.545228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.097 qpair failed and we were unable to recover it. 00:33:29.097 [2024-10-08 18:43:57.545442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.097 [2024-10-08 18:43:57.545489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.097 qpair failed and we were unable to recover it. 00:33:29.097 [2024-10-08 18:43:57.545627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.097 [2024-10-08 18:43:57.545659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.097 qpair failed and we were unable to recover it. 00:33:29.097 [2024-10-08 18:43:57.545789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.097 [2024-10-08 18:43:57.545815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.097 qpair failed and we were unable to recover it. 
00:33:29.097 [2024-10-08 18:43:57.546032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.097 [2024-10-08 18:43:57.546100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.097 qpair failed and we were unable to recover it. 00:33:29.097 [2024-10-08 18:43:57.546231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.097 [2024-10-08 18:43:57.546278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.097 qpair failed and we were unable to recover it. 00:33:29.097 [2024-10-08 18:43:57.546395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.097 [2024-10-08 18:43:57.546472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.097 qpair failed and we were unable to recover it. 00:33:29.097 [2024-10-08 18:43:57.546661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.097 [2024-10-08 18:43:57.546688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.097 qpair failed and we were unable to recover it. 00:33:29.097 [2024-10-08 18:43:57.546829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.097 [2024-10-08 18:43:57.546856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.097 qpair failed and we were unable to recover it. 00:33:29.097 [2024-10-08 18:43:57.546980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.097 [2024-10-08 18:43:57.547030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.097 qpair failed and we were unable to recover it. 00:33:29.097 [2024-10-08 18:43:57.547199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.097 [2024-10-08 18:43:57.547226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.097 qpair failed and we were unable to recover it. 00:33:29.097 [2024-10-08 18:43:57.547397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.097 [2024-10-08 18:43:57.547452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.097 qpair failed and we were unable to recover it. 00:33:29.097 [2024-10-08 18:43:57.547703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.097 [2024-10-08 18:43:57.547730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.097 qpair failed and we were unable to recover it. 00:33:29.097 [2024-10-08 18:43:57.547980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.097 [2024-10-08 18:43:57.548030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.097 qpair failed and we were unable to recover it. 
00:33:29.097 [2024-10-08 18:43:57.548217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.097 [2024-10-08 18:43:57.548280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.097 qpair failed and we were unable to recover it. 00:33:29.097 [2024-10-08 18:43:57.548507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.097 [2024-10-08 18:43:57.548554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.097 qpair failed and we were unable to recover it. 00:33:29.097 [2024-10-08 18:43:57.548772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.097 [2024-10-08 18:43:57.548799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.097 qpair failed and we were unable to recover it. 00:33:29.098 [2024-10-08 18:43:57.548968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.098 [2024-10-08 18:43:57.549023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.098 qpair failed and we were unable to recover it. 00:33:29.098 [2024-10-08 18:43:57.549138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.098 [2024-10-08 18:43:57.549189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.098 qpair failed and we were unable to recover it. 00:33:29.098 [2024-10-08 18:43:57.549405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.098 [2024-10-08 18:43:57.549455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.098 qpair failed and we were unable to recover it. 00:33:29.098 [2024-10-08 18:43:57.549606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.098 [2024-10-08 18:43:57.549632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.098 qpair failed and we were unable to recover it. 00:33:29.098 [2024-10-08 18:43:57.549825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.098 [2024-10-08 18:43:57.549889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.098 qpair failed and we were unable to recover it. 00:33:29.098 [2024-10-08 18:43:57.550076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.098 [2024-10-08 18:43:57.550129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.098 qpair failed and we were unable to recover it. 00:33:29.098 [2024-10-08 18:43:57.550293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.098 [2024-10-08 18:43:57.550339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.098 qpair failed and we were unable to recover it. 
00:33:29.098 [2024-10-08 18:43:57.550490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.098 [2024-10-08 18:43:57.550517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.098 qpair failed and we were unable to recover it. 00:33:29.098 [2024-10-08 18:43:57.550707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.098 [2024-10-08 18:43:57.550756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.098 qpair failed and we were unable to recover it. 00:33:29.098 [2024-10-08 18:43:57.550950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.098 [2024-10-08 18:43:57.550996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.098 qpair failed and we were unable to recover it. 00:33:29.098 [2024-10-08 18:43:57.551194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.098 [2024-10-08 18:43:57.551241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.098 qpair failed and we were unable to recover it. 00:33:29.098 [2024-10-08 18:43:57.551405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.098 [2024-10-08 18:43:57.551431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.098 qpair failed and we were unable to recover it. 00:33:29.098 [2024-10-08 18:43:57.551591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.098 [2024-10-08 18:43:57.551617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.098 qpair failed and we were unable to recover it. 00:33:29.098 [2024-10-08 18:43:57.551779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.098 [2024-10-08 18:43:57.551827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.098 qpair failed and we were unable to recover it. 00:33:29.098 [2024-10-08 18:43:57.552067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.098 [2024-10-08 18:43:57.552114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.098 qpair failed and we were unable to recover it. 00:33:29.098 [2024-10-08 18:43:57.552323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.098 [2024-10-08 18:43:57.552370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.098 qpair failed and we were unable to recover it. 00:33:29.098 [2024-10-08 18:43:57.552556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.098 [2024-10-08 18:43:57.552582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.098 qpair failed and we were unable to recover it. 
00:33:29.098 [2024-10-08 18:43:57.552742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.098 [2024-10-08 18:43:57.552797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.098 qpair failed and we were unable to recover it. 00:33:29.098 [2024-10-08 18:43:57.553012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.098 [2024-10-08 18:43:57.553064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.098 qpair failed and we were unable to recover it. 00:33:29.098 [2024-10-08 18:43:57.553227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.098 [2024-10-08 18:43:57.553274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.098 qpair failed and we were unable to recover it. 00:33:29.098 [2024-10-08 18:43:57.553508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.098 [2024-10-08 18:43:57.553535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.098 qpair failed and we were unable to recover it. 00:33:29.098 [2024-10-08 18:43:57.553676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.098 [2024-10-08 18:43:57.553703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.098 qpair failed and we were unable to recover it. 00:33:29.098 [2024-10-08 18:43:57.553853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.098 [2024-10-08 18:43:57.553899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.098 qpair failed and we were unable to recover it. 00:33:29.098 [2024-10-08 18:43:57.554139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.098 [2024-10-08 18:43:57.554191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.098 qpair failed and we were unable to recover it. 00:33:29.098 [2024-10-08 18:43:57.554363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.098 [2024-10-08 18:43:57.554416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.098 qpair failed and we were unable to recover it. 00:33:29.098 [2024-10-08 18:43:57.554586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.098 [2024-10-08 18:43:57.554612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.098 qpair failed and we were unable to recover it. 00:33:29.098 [2024-10-08 18:43:57.554748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.098 [2024-10-08 18:43:57.554795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.098 qpair failed and we were unable to recover it. 
00:33:29.098 [2024-10-08 18:43:57.554971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.098 [2024-10-08 18:43:57.554998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.098 qpair failed and we were unable to recover it. 00:33:29.098 [2024-10-08 18:43:57.555204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.098 [2024-10-08 18:43:57.555248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.098 qpair failed and we were unable to recover it. 00:33:29.098 [2024-10-08 18:43:57.555410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.098 [2024-10-08 18:43:57.555456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.098 qpair failed and we were unable to recover it. 00:33:29.098 [2024-10-08 18:43:57.555604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.098 [2024-10-08 18:43:57.555630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.098 qpair failed and we were unable to recover it. 00:33:29.098 [2024-10-08 18:43:57.555761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.098 [2024-10-08 18:43:57.555810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.098 qpair failed and we were unable to recover it. 00:33:29.098 [2024-10-08 18:43:57.556037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.098 [2024-10-08 18:43:57.556063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.098 qpair failed and we were unable to recover it. 00:33:29.098 [2024-10-08 18:43:57.556314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.098 [2024-10-08 18:43:57.556370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.098 qpair failed and we were unable to recover it. 00:33:29.098 [2024-10-08 18:43:57.556533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.098 [2024-10-08 18:43:57.556559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.098 qpair failed and we were unable to recover it. 00:33:29.098 [2024-10-08 18:43:57.556711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.098 [2024-10-08 18:43:57.556791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.098 qpair failed and we were unable to recover it. 00:33:29.098 [2024-10-08 18:43:57.556999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.098 [2024-10-08 18:43:57.557078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.098 qpair failed and we were unable to recover it. 
00:33:29.098 [2024-10-08 18:43:57.557280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.098 [2024-10-08 18:43:57.557327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.098 qpair failed and we were unable to recover it. 00:33:29.098 [2024-10-08 18:43:57.557465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.098 [2024-10-08 18:43:57.557492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.098 qpair failed and we were unable to recover it. 00:33:29.098 [2024-10-08 18:43:57.557619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.099 [2024-10-08 18:43:57.557645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.099 qpair failed and we were unable to recover it. 00:33:29.099 [2024-10-08 18:43:57.557856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.099 [2024-10-08 18:43:57.557907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.099 qpair failed and we were unable to recover it. 00:33:29.099 [2024-10-08 18:43:57.558034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.099 [2024-10-08 18:43:57.558082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.099 qpair failed and we were unable to recover it. 00:33:29.099 [2024-10-08 18:43:57.558339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.099 [2024-10-08 18:43:57.558389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.099 qpair failed and we were unable to recover it. 00:33:29.099 [2024-10-08 18:43:57.558559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.099 [2024-10-08 18:43:57.558585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.099 qpair failed and we were unable to recover it. 00:33:29.099 [2024-10-08 18:43:57.558759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.099 [2024-10-08 18:43:57.558806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.099 qpair failed and we were unable to recover it. 00:33:29.099 [2024-10-08 18:43:57.558966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.099 [2024-10-08 18:43:57.559013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.099 qpair failed and we were unable to recover it. 00:33:29.099 [2024-10-08 18:43:57.559196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.099 [2024-10-08 18:43:57.559243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.099 qpair failed and we were unable to recover it. 
00:33:29.099 [2024-10-08 18:43:57.559406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.099 [2024-10-08 18:43:57.559453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.099 qpair failed and we were unable to recover it. 00:33:29.099 [2024-10-08 18:43:57.559679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.099 [2024-10-08 18:43:57.559705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.099 qpair failed and we were unable to recover it. 00:33:29.099 [2024-10-08 18:43:57.559896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.099 [2024-10-08 18:43:57.559952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.099 qpair failed and we were unable to recover it. 00:33:29.099 [2024-10-08 18:43:57.560188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.099 [2024-10-08 18:43:57.560235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.099 qpair failed and we were unable to recover it. 00:33:29.099 [2024-10-08 18:43:57.560490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.099 [2024-10-08 18:43:57.560548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.099 qpair failed and we were unable to recover it. 00:33:29.099 [2024-10-08 18:43:57.560752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.099 [2024-10-08 18:43:57.560805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.099 qpair failed and we were unable to recover it. 00:33:29.099 [2024-10-08 18:43:57.560983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.099 [2024-10-08 18:43:57.561032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.099 qpair failed and we were unable to recover it. 00:33:29.099 [2024-10-08 18:43:57.561174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.099 [2024-10-08 18:43:57.561222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.099 qpair failed and we were unable to recover it. 00:33:29.099 [2024-10-08 18:43:57.561408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.099 [2024-10-08 18:43:57.561455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.099 qpair failed and we were unable to recover it. 00:33:29.099 [2024-10-08 18:43:57.561616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.099 [2024-10-08 18:43:57.561642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.099 qpair failed and we were unable to recover it. 
00:33:29.099 [2024-10-08 18:43:57.561887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.099 [2024-10-08 18:43:57.561937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.099 qpair failed and we were unable to recover it. 00:33:29.099 [2024-10-08 18:43:57.562067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.099 [2024-10-08 18:43:57.562112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.099 qpair failed and we were unable to recover it. 00:33:29.099 [2024-10-08 18:43:57.562251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.099 [2024-10-08 18:43:57.562297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.099 qpair failed and we were unable to recover it. 00:33:29.099 [2024-10-08 18:43:57.562464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.099 [2024-10-08 18:43:57.562491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.099 qpair failed and we were unable to recover it. 00:33:29.099 [2024-10-08 18:43:57.562648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.099 [2024-10-08 18:43:57.562680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.099 qpair failed and we were unable to recover it. 00:33:29.099 [2024-10-08 18:43:57.562871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.099 [2024-10-08 18:43:57.562921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.099 qpair failed and we were unable to recover it. 00:33:29.099 [2024-10-08 18:43:57.563056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.099 [2024-10-08 18:43:57.563134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.099 qpair failed and we were unable to recover it. 00:33:29.099 [2024-10-08 18:43:57.563339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.099 [2024-10-08 18:43:57.563389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.099 qpair failed and we were unable to recover it. 00:33:29.099 [2024-10-08 18:43:57.563572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.099 [2024-10-08 18:43:57.563598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.099 qpair failed and we were unable to recover it. 00:33:29.099 [2024-10-08 18:43:57.563746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.099 [2024-10-08 18:43:57.563772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.099 qpair failed and we were unable to recover it. 
00:33:29.099 [2024-10-08 18:43:57.563908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.099 [2024-10-08 18:43:57.563945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.099 qpair failed and we were unable to recover it. 00:33:29.099 [2024-10-08 18:43:57.564130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.099 [2024-10-08 18:43:57.564188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.099 qpair failed and we were unable to recover it. 00:33:29.099 [2024-10-08 18:43:57.564375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.099 [2024-10-08 18:43:57.564419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.099 qpair failed and we were unable to recover it. 00:33:29.099 [2024-10-08 18:43:57.564616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.099 [2024-10-08 18:43:57.564643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.099 qpair failed and we were unable to recover it. 00:33:29.099 [2024-10-08 18:43:57.564890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.099 [2024-10-08 18:43:57.564933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.099 qpair failed and we were unable to recover it. 00:33:29.099 [2024-10-08 18:43:57.565152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.099 [2024-10-08 18:43:57.565198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.099 qpair failed and we were unable to recover it. 00:33:29.099 [2024-10-08 18:43:57.565333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.099 [2024-10-08 18:43:57.565383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.099 qpair failed and we were unable to recover it. 00:33:29.099 [2024-10-08 18:43:57.565536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.099 [2024-10-08 18:43:57.565571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.099 qpair failed and we were unable to recover it. 00:33:29.099 [2024-10-08 18:43:57.565774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.099 [2024-10-08 18:43:57.565826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.099 qpair failed and we were unable to recover it. 00:33:29.099 [2024-10-08 18:43:57.566102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.099 [2024-10-08 18:43:57.566153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.099 qpair failed and we were unable to recover it. 
00:33:29.099 [2024-10-08 18:43:57.566337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.099 [2024-10-08 18:43:57.566384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.099 qpair failed and we were unable to recover it. 00:33:29.099 [2024-10-08 18:43:57.566583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.100 [2024-10-08 18:43:57.566609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.100 qpair failed and we were unable to recover it. 00:33:29.100 [2024-10-08 18:43:57.566772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.100 [2024-10-08 18:43:57.566830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.100 qpair failed and we were unable to recover it. 00:33:29.100 [2024-10-08 18:43:57.566988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.100 [2024-10-08 18:43:57.567037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.100 qpair failed and we were unable to recover it. 00:33:29.100 [2024-10-08 18:43:57.567239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.100 [2024-10-08 18:43:57.567284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.100 qpair failed and we were unable to recover it. 00:33:29.100 [2024-10-08 18:43:57.567463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.100 [2024-10-08 18:43:57.567489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.100 qpair failed and we were unable to recover it. 00:33:29.100 [2024-10-08 18:43:57.567637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.100 [2024-10-08 18:43:57.567669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.100 qpair failed and we were unable to recover it. 00:33:29.100 [2024-10-08 18:43:57.567891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.100 [2024-10-08 18:43:57.567943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.100 qpair failed and we were unable to recover it. 00:33:29.100 [2024-10-08 18:43:57.568142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.100 [2024-10-08 18:43:57.568189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.100 qpair failed and we were unable to recover it. 00:33:29.100 [2024-10-08 18:43:57.568431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.100 [2024-10-08 18:43:57.568478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.100 qpair failed and we were unable to recover it. 
00:33:29.100 [2024-10-08 18:43:57.568644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.100 [2024-10-08 18:43:57.568676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.100 qpair failed and we were unable to recover it. 00:33:29.100 [2024-10-08 18:43:57.568853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.100 [2024-10-08 18:43:57.568907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.100 qpair failed and we were unable to recover it. 00:33:29.100 [2024-10-08 18:43:57.569123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.100 [2024-10-08 18:43:57.569182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.100 qpair failed and we were unable to recover it. 00:33:29.100 [2024-10-08 18:43:57.569405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.100 [2024-10-08 18:43:57.569451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.100 qpair failed and we were unable to recover it. 00:33:29.100 [2024-10-08 18:43:57.569702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.100 [2024-10-08 18:43:57.569738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.100 qpair failed and we were unable to recover it. 00:33:29.100 [2024-10-08 18:43:57.569882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.100 [2024-10-08 18:43:57.569909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.100 qpair failed and we were unable to recover it. 00:33:29.100 [2024-10-08 18:43:57.570049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.100 [2024-10-08 18:43:57.570097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.100 qpair failed and we were unable to recover it. 00:33:29.100 [2024-10-08 18:43:57.570231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.100 [2024-10-08 18:43:57.570278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.100 qpair failed and we were unable to recover it. 00:33:29.100 [2024-10-08 18:43:57.570419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.100 [2024-10-08 18:43:57.570470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.100 qpair failed and we were unable to recover it. 00:33:29.100 [2024-10-08 18:43:57.570691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.100 [2024-10-08 18:43:57.570718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.100 qpair failed and we were unable to recover it. 
00:33:29.100 [2024-10-08 18:43:57.570847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.100 [2024-10-08 18:43:57.570873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.100 qpair failed and we were unable to recover it. 00:33:29.100 [2024-10-08 18:43:57.571016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.100 [2024-10-08 18:43:57.571043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.100 qpair failed and we were unable to recover it. 00:33:29.100 [2024-10-08 18:43:57.571139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.100 [2024-10-08 18:43:57.571165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.100 qpair failed and we were unable to recover it. 00:33:29.100 [2024-10-08 18:43:57.571334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.100 [2024-10-08 18:43:57.571361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.100 qpair failed and we were unable to recover it. 00:33:29.100 [2024-10-08 18:43:57.571460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.100 [2024-10-08 18:43:57.571486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.100 qpair failed and we were unable to recover it. 00:33:29.100 [2024-10-08 18:43:57.571623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.100 [2024-10-08 18:43:57.571655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.100 qpair failed and we were unable to recover it. 00:33:29.100 [2024-10-08 18:43:57.571776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.100 [2024-10-08 18:43:57.571828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.100 qpair failed and we were unable to recover it. 00:33:29.100 [2024-10-08 18:43:57.571955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.100 [2024-10-08 18:43:57.571981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.100 qpair failed and we were unable to recover it. 00:33:29.100 [2024-10-08 18:43:57.572137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.100 [2024-10-08 18:43:57.572164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.100 qpair failed and we were unable to recover it. 00:33:29.100 [2024-10-08 18:43:57.572288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.100 [2024-10-08 18:43:57.572314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.100 qpair failed and we were unable to recover it. 
00:33:29.100 [2024-10-08 18:43:57.572466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.100 [2024-10-08 18:43:57.572492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.100 qpair failed and we were unable to recover it. 00:33:29.100 [2024-10-08 18:43:57.572615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.100 [2024-10-08 18:43:57.572642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.100 qpair failed and we were unable to recover it. 00:33:29.100 [2024-10-08 18:43:57.572852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.100 [2024-10-08 18:43:57.572879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.100 qpair failed and we were unable to recover it. 00:33:29.100 [2024-10-08 18:43:57.573070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.100 [2024-10-08 18:43:57.573096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.100 qpair failed and we were unable to recover it. 00:33:29.100 [2024-10-08 18:43:57.573256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.101 [2024-10-08 18:43:57.573283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.101 qpair failed and we were unable to recover it. 00:33:29.101 [2024-10-08 18:43:57.573418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.101 [2024-10-08 18:43:57.573445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.101 qpair failed and we were unable to recover it. 00:33:29.101 [2024-10-08 18:43:57.573677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.101 [2024-10-08 18:43:57.573704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.101 qpair failed and we were unable to recover it. 00:33:29.101 [2024-10-08 18:43:57.573876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.101 [2024-10-08 18:43:57.573924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.101 qpair failed and we were unable to recover it. 00:33:29.101 [2024-10-08 18:43:57.574075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.101 [2024-10-08 18:43:57.574126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.101 qpair failed and we were unable to recover it. 00:33:29.101 [2024-10-08 18:43:57.574345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.101 [2024-10-08 18:43:57.574395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.101 qpair failed and we were unable to recover it. 
00:33:29.101 [2024-10-08 18:43:57.574562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.101 [2024-10-08 18:43:57.574589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.101 qpair failed and we were unable to recover it. 00:33:29.101 [2024-10-08 18:43:57.574761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.101 [2024-10-08 18:43:57.574809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.101 qpair failed and we were unable to recover it. 00:33:29.101 [2024-10-08 18:43:57.574972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.101 [2024-10-08 18:43:57.575018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.101 qpair failed and we were unable to recover it. 00:33:29.101 [2024-10-08 18:43:57.575221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.101 [2024-10-08 18:43:57.575269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.101 qpair failed and we were unable to recover it. 00:33:29.101 [2024-10-08 18:43:57.575429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.101 [2024-10-08 18:43:57.575456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.101 qpair failed and we were unable to recover it. 00:33:29.101 [2024-10-08 18:43:57.575667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.101 [2024-10-08 18:43:57.575694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.101 qpair failed and we were unable to recover it. 00:33:29.101 [2024-10-08 18:43:57.575819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.380 [2024-10-08 18:43:57.575845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.380 qpair failed and we were unable to recover it. 00:33:29.380 [2024-10-08 18:43:57.576003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.380 [2024-10-08 18:43:57.576054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.380 qpair failed and we were unable to recover it. 00:33:29.380 [2024-10-08 18:43:57.576166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.380 [2024-10-08 18:43:57.576214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.380 qpair failed and we were unable to recover it. 00:33:29.380 [2024-10-08 18:43:57.576371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.380 [2024-10-08 18:43:57.576419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.380 qpair failed and we were unable to recover it. 
00:33:29.380 [2024-10-08 18:43:57.576586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.380 [2024-10-08 18:43:57.576612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.380 qpair failed and we were unable to recover it. 00:33:29.380 [2024-10-08 18:43:57.576870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.380 [2024-10-08 18:43:57.576918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.380 qpair failed and we were unable to recover it. 00:33:29.380 [2024-10-08 18:43:57.577074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.380 [2024-10-08 18:43:57.577137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.380 qpair failed and we were unable to recover it. 00:33:29.380 [2024-10-08 18:43:57.577331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.380 [2024-10-08 18:43:57.577382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.380 qpair failed and we were unable to recover it. 00:33:29.380 [2024-10-08 18:43:57.577512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.380 [2024-10-08 18:43:57.577539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.380 qpair failed and we were unable to recover it. 00:33:29.380 [2024-10-08 18:43:57.577661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.380 [2024-10-08 18:43:57.577689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.380 qpair failed and we were unable to recover it. 00:33:29.380 [2024-10-08 18:43:57.577841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.380 [2024-10-08 18:43:57.577894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.380 qpair failed and we were unable to recover it. 00:33:29.380 [2024-10-08 18:43:57.578025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.380 [2024-10-08 18:43:57.578052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.380 qpair failed and we were unable to recover it. 00:33:29.380 [2024-10-08 18:43:57.578199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.380 [2024-10-08 18:43:57.578226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.380 qpair failed and we were unable to recover it. 00:33:29.380 [2024-10-08 18:43:57.578351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.380 [2024-10-08 18:43:57.578379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.380 qpair failed and we were unable to recover it. 
00:33:29.380 [2024-10-08 18:43:57.578532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.380 [2024-10-08 18:43:57.578559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.380 qpair failed and we were unable to recover it. 00:33:29.380 [2024-10-08 18:43:57.578794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.380 [2024-10-08 18:43:57.578822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.380 qpair failed and we were unable to recover it. 00:33:29.380 [2024-10-08 18:43:57.578973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.381 [2024-10-08 18:43:57.579000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.381 qpair failed and we were unable to recover it. 00:33:29.381 [2024-10-08 18:43:57.579117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.381 [2024-10-08 18:43:57.579143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.381 qpair failed and we were unable to recover it. 00:33:29.381 [2024-10-08 18:43:57.579247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.381 [2024-10-08 18:43:57.579274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.381 qpair failed and we were unable to recover it. 00:33:29.381 [2024-10-08 18:43:57.579438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.381 [2024-10-08 18:43:57.579464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.381 qpair failed and we were unable to recover it. 00:33:29.381 [2024-10-08 18:43:57.579648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.381 [2024-10-08 18:43:57.579681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.381 qpair failed and we were unable to recover it. 00:33:29.381 [2024-10-08 18:43:57.579814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.381 [2024-10-08 18:43:57.579866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.381 qpair failed and we were unable to recover it. 00:33:29.381 [2024-10-08 18:43:57.580046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.381 [2024-10-08 18:43:57.580091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.381 qpair failed and we were unable to recover it. 00:33:29.381 [2024-10-08 18:43:57.580317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.381 [2024-10-08 18:43:57.580364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.381 qpair failed and we were unable to recover it. 
00:33:29.381 [2024-10-08 18:43:57.580598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.381 [2024-10-08 18:43:57.580625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.381 qpair failed and we were unable to recover it. 00:33:29.381 [2024-10-08 18:43:57.580877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.381 [2024-10-08 18:43:57.580926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.381 qpair failed and we were unable to recover it. 00:33:29.381 [2024-10-08 18:43:57.581083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.381 [2024-10-08 18:43:57.581148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.381 qpair failed and we were unable to recover it. 00:33:29.381 [2024-10-08 18:43:57.581284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.381 [2024-10-08 18:43:57.581331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.381 qpair failed and we were unable to recover it. 00:33:29.381 [2024-10-08 18:43:57.581462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.381 [2024-10-08 18:43:57.581489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.381 qpair failed and we were unable to recover it. 00:33:29.381 [2024-10-08 18:43:57.581695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.381 [2024-10-08 18:43:57.581740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.381 qpair failed and we were unable to recover it. 00:33:29.381 [2024-10-08 18:43:57.581881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.381 [2024-10-08 18:43:57.581917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.381 qpair failed and we were unable to recover it. 00:33:29.381 [2024-10-08 18:43:57.582099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.381 [2024-10-08 18:43:57.582147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.381 qpair failed and we were unable to recover it. 00:33:29.381 [2024-10-08 18:43:57.582389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.381 [2024-10-08 18:43:57.582443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.381 qpair failed and we were unable to recover it. 00:33:29.381 [2024-10-08 18:43:57.582604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.381 [2024-10-08 18:43:57.582631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.381 qpair failed and we were unable to recover it. 
00:33:29.381 [2024-10-08 18:43:57.582828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.381 [2024-10-08 18:43:57.582876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.381 qpair failed and we were unable to recover it. 00:33:29.381 [2024-10-08 18:43:57.583017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.381 [2024-10-08 18:43:57.583064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.381 qpair failed and we were unable to recover it. 00:33:29.381 [2024-10-08 18:43:57.583236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.381 [2024-10-08 18:43:57.583283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.381 qpair failed and we were unable to recover it. 00:33:29.381 [2024-10-08 18:43:57.583484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.381 [2024-10-08 18:43:57.583515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.381 qpair failed and we were unable to recover it. 00:33:29.381 [2024-10-08 18:43:57.583641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.381 [2024-10-08 18:43:57.583675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.381 qpair failed and we were unable to recover it. 00:33:29.381 [2024-10-08 18:43:57.583777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.381 [2024-10-08 18:43:57.583828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.381 qpair failed and we were unable to recover it. 00:33:29.381 [2024-10-08 18:43:57.583993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.381 [2024-10-08 18:43:57.584065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.381 qpair failed and we were unable to recover it. 00:33:29.381 [2024-10-08 18:43:57.584275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.381 [2024-10-08 18:43:57.584311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.381 qpair failed and we were unable to recover it. 00:33:29.381 [2024-10-08 18:43:57.584529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.381 [2024-10-08 18:43:57.584555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.381 qpair failed and we were unable to recover it. 00:33:29.381 [2024-10-08 18:43:57.584733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.381 [2024-10-08 18:43:57.584782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.381 qpair failed and we were unable to recover it. 
00:33:29.381 [2024-10-08 18:43:57.584977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.381 [2024-10-08 18:43:57.585024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.381 qpair failed and we were unable to recover it. 00:33:29.381 [2024-10-08 18:43:57.585265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.381 [2024-10-08 18:43:57.585314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.381 qpair failed and we were unable to recover it. 00:33:29.381 [2024-10-08 18:43:57.585503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.381 [2024-10-08 18:43:57.585530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.381 qpair failed and we were unable to recover it. 00:33:29.381 [2024-10-08 18:43:57.585729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.381 [2024-10-08 18:43:57.585779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.381 qpair failed and we were unable to recover it. 00:33:29.381 [2024-10-08 18:43:57.585949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.381 [2024-10-08 18:43:57.585996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.381 qpair failed and we were unable to recover it. 00:33:29.381 [2024-10-08 18:43:57.586180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.381 [2024-10-08 18:43:57.586229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.381 qpair failed and we were unable to recover it. 00:33:29.381 [2024-10-08 18:43:57.586364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.381 [2024-10-08 18:43:57.586392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.381 qpair failed and we were unable to recover it. 00:33:29.381 [2024-10-08 18:43:57.586616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.381 [2024-10-08 18:43:57.586643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.381 qpair failed and we were unable to recover it. 00:33:29.381 [2024-10-08 18:43:57.586906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.381 [2024-10-08 18:43:57.586949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.381 qpair failed and we were unable to recover it. 00:33:29.381 [2024-10-08 18:43:57.587144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.381 [2024-10-08 18:43:57.587198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.381 qpair failed and we were unable to recover it. 
00:33:29.381 [2024-10-08 18:43:57.587340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.381 [2024-10-08 18:43:57.587386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.381 qpair failed and we were unable to recover it. 00:33:29.381 [2024-10-08 18:43:57.587560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.381 [2024-10-08 18:43:57.587586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.381 qpair failed and we were unable to recover it. 00:33:29.381 [2024-10-08 18:43:57.587764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.382 [2024-10-08 18:43:57.587812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.382 qpair failed and we were unable to recover it. 00:33:29.382 [2024-10-08 18:43:57.588041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.382 [2024-10-08 18:43:57.588084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.382 qpair failed and we were unable to recover it. 00:33:29.382 [2024-10-08 18:43:57.588261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.382 [2024-10-08 18:43:57.588310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.382 qpair failed and we were unable to recover it. 00:33:29.382 [2024-10-08 18:43:57.588544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.382 [2024-10-08 18:43:57.588570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.382 qpair failed and we were unable to recover it. 00:33:29.382 [2024-10-08 18:43:57.588730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.382 [2024-10-08 18:43:57.588779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.382 qpair failed and we were unable to recover it. 00:33:29.382 [2024-10-08 18:43:57.588979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.382 [2024-10-08 18:43:57.589025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.382 qpair failed and we were unable to recover it. 00:33:29.382 [2024-10-08 18:43:57.589267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.382 [2024-10-08 18:43:57.589314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.382 qpair failed and we were unable to recover it. 00:33:29.382 [2024-10-08 18:43:57.589467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.382 [2024-10-08 18:43:57.589504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.382 qpair failed and we were unable to recover it. 
00:33:29.382 [2024-10-08 18:43:57.589667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.382 [2024-10-08 18:43:57.589714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.382 qpair failed and we were unable to recover it. 00:33:29.382 [2024-10-08 18:43:57.589945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.382 [2024-10-08 18:43:57.589989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.382 qpair failed and we were unable to recover it. 00:33:29.382 [2024-10-08 18:43:57.590218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.382 [2024-10-08 18:43:57.590265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.382 qpair failed and we were unable to recover it. 00:33:29.382 [2024-10-08 18:43:57.590417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.382 [2024-10-08 18:43:57.590463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.382 qpair failed and we were unable to recover it. 00:33:29.382 [2024-10-08 18:43:57.590643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.382 [2024-10-08 18:43:57.590676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.382 qpair failed and we were unable to recover it. 00:33:29.382 [2024-10-08 18:43:57.590817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.382 [2024-10-08 18:43:57.590868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.382 qpair failed and we were unable to recover it. 00:33:29.382 [2024-10-08 18:43:57.591111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.382 [2024-10-08 18:43:57.591168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.382 qpair failed and we were unable to recover it. 00:33:29.382 [2024-10-08 18:43:57.591402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.382 [2024-10-08 18:43:57.591451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.382 qpair failed and we were unable to recover it. 00:33:29.382 [2024-10-08 18:43:57.591658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.382 [2024-10-08 18:43:57.591684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.382 qpair failed and we were unable to recover it. 00:33:29.382 [2024-10-08 18:43:57.591808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.382 [2024-10-08 18:43:57.591834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.382 qpair failed and we were unable to recover it. 
00:33:29.382 [2024-10-08 18:43:57.592036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.382 [2024-10-08 18:43:57.592081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.382 qpair failed and we were unable to recover it. 00:33:29.382 [2024-10-08 18:43:57.592268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.382 [2024-10-08 18:43:57.592314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.382 qpair failed and we were unable to recover it. 00:33:29.382 [2024-10-08 18:43:57.592494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.382 [2024-10-08 18:43:57.592539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.382 qpair failed and we were unable to recover it. 00:33:29.382 [2024-10-08 18:43:57.592695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.382 [2024-10-08 18:43:57.592729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.382 qpair failed and we were unable to recover it. 00:33:29.382 [2024-10-08 18:43:57.592868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.382 [2024-10-08 18:43:57.592918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.382 qpair failed and we were unable to recover it. 00:33:29.382 [2024-10-08 18:43:57.593143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.382 [2024-10-08 18:43:57.593193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.382 qpair failed and we were unable to recover it. 00:33:29.382 [2024-10-08 18:43:57.593370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.382 [2024-10-08 18:43:57.593422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.382 qpair failed and we were unable to recover it. 00:33:29.382 [2024-10-08 18:43:57.593624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.382 [2024-10-08 18:43:57.593656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.382 qpair failed and we were unable to recover it. 00:33:29.382 [2024-10-08 18:43:57.593883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.382 [2024-10-08 18:43:57.593938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.382 qpair failed and we were unable to recover it. 00:33:29.382 [2024-10-08 18:43:57.594078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.382 [2024-10-08 18:43:57.594125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.382 qpair failed and we were unable to recover it. 
00:33:29.382 [2024-10-08 18:43:57.594294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.382 [2024-10-08 18:43:57.594341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.382 qpair failed and we were unable to recover it. 00:33:29.382 [2024-10-08 18:43:57.594500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.382 [2024-10-08 18:43:57.594527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.382 qpair failed and we were unable to recover it. 00:33:29.382 [2024-10-08 18:43:57.594719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.382 [2024-10-08 18:43:57.594772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.382 qpair failed and we were unable to recover it. 00:33:29.382 [2024-10-08 18:43:57.594996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.382 [2024-10-08 18:43:57.595032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.382 qpair failed and we were unable to recover it. 00:33:29.382 [2024-10-08 18:43:57.595214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.382 [2024-10-08 18:43:57.595263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.382 qpair failed and we were unable to recover it. 00:33:29.382 [2024-10-08 18:43:57.595403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.382 [2024-10-08 18:43:57.595429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.382 qpair failed and we were unable to recover it. 00:33:29.382 [2024-10-08 18:43:57.595616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.382 [2024-10-08 18:43:57.595642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.382 qpair failed and we were unable to recover it. 00:33:29.382 [2024-10-08 18:43:57.595953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.382 [2024-10-08 18:43:57.596011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.382 qpair failed and we were unable to recover it. 00:33:29.382 [2024-10-08 18:43:57.596200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.382 [2024-10-08 18:43:57.596251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.382 qpair failed and we were unable to recover it. 00:33:29.382 [2024-10-08 18:43:57.596446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.382 [2024-10-08 18:43:57.596499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.382 qpair failed and we were unable to recover it. 
00:33:29.382 [2024-10-08 18:43:57.596687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.382 [2024-10-08 18:43:57.596731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.382 qpair failed and we were unable to recover it. 00:33:29.382 [2024-10-08 18:43:57.596950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.382 [2024-10-08 18:43:57.596998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.382 qpair failed and we were unable to recover it. 00:33:29.383 [2024-10-08 18:43:57.597137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.383 [2024-10-08 18:43:57.597184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.383 qpair failed and we were unable to recover it. 00:33:29.383 [2024-10-08 18:43:57.597316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.383 [2024-10-08 18:43:57.597365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.383 qpair failed and we were unable to recover it. 00:33:29.383 [2024-10-08 18:43:57.597533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.383 [2024-10-08 18:43:57.597559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.383 qpair failed and we were unable to recover it. 00:33:29.383 [2024-10-08 18:43:57.597715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.383 [2024-10-08 18:43:57.597771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.383 qpair failed and we were unable to recover it. 00:33:29.383 [2024-10-08 18:43:57.597921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.383 [2024-10-08 18:43:57.597985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.383 qpair failed and we were unable to recover it. 00:33:29.383 [2024-10-08 18:43:57.598161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.383 [2024-10-08 18:43:57.598187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.383 qpair failed and we were unable to recover it. 00:33:29.383 [2024-10-08 18:43:57.598347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.383 [2024-10-08 18:43:57.598412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.383 qpair failed and we were unable to recover it. 00:33:29.383 [2024-10-08 18:43:57.598622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.383 [2024-10-08 18:43:57.598648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.383 qpair failed and we were unable to recover it. 
00:33:29.383 [2024-10-08 18:43:57.598874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.383 [2024-10-08 18:43:57.598926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.383 qpair failed and we were unable to recover it. 00:33:29.383 [2024-10-08 18:43:57.599129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.383 [2024-10-08 18:43:57.599181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.383 qpair failed and we were unable to recover it. 00:33:29.383 [2024-10-08 18:43:57.599443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.383 [2024-10-08 18:43:57.599491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.383 qpair failed and we were unable to recover it. 00:33:29.383 [2024-10-08 18:43:57.599615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.383 [2024-10-08 18:43:57.599641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.383 qpair failed and we were unable to recover it. 00:33:29.383 [2024-10-08 18:43:57.599783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.383 [2024-10-08 18:43:57.599810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.383 qpair failed and we were unable to recover it. 00:33:29.383 [2024-10-08 18:43:57.599952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.383 [2024-10-08 18:43:57.599998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.383 qpair failed and we were unable to recover it. 00:33:29.383 [2024-10-08 18:43:57.600164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.383 [2024-10-08 18:43:57.600217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.383 qpair failed and we were unable to recover it. 00:33:29.383 [2024-10-08 18:43:57.600363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.383 [2024-10-08 18:43:57.600414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.383 qpair failed and we were unable to recover it. 00:33:29.383 [2024-10-08 18:43:57.600506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.383 [2024-10-08 18:43:57.600543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.383 qpair failed and we were unable to recover it. 00:33:29.383 [2024-10-08 18:43:57.600703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.383 [2024-10-08 18:43:57.600730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.383 qpair failed and we were unable to recover it. 
00:33:29.383 [2024-10-08 18:43:57.600825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.383 [2024-10-08 18:43:57.600851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.383 qpair failed and we were unable to recover it. 00:33:29.383 [2024-10-08 18:43:57.600975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.383 [2024-10-08 18:43:57.601001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.383 qpair failed and we were unable to recover it. 00:33:29.383 [2024-10-08 18:43:57.601185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.383 [2024-10-08 18:43:57.601212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.383 qpair failed and we were unable to recover it. 00:33:29.383 [2024-10-08 18:43:57.601438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.383 [2024-10-08 18:43:57.601468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.383 qpair failed and we were unable to recover it. 00:33:29.383 [2024-10-08 18:43:57.601682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.383 [2024-10-08 18:43:57.601709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.383 qpair failed and we were unable to recover it. 00:33:29.383 [2024-10-08 18:43:57.601863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.383 [2024-10-08 18:43:57.601914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.383 qpair failed and we were unable to recover it. 00:33:29.383 [2024-10-08 18:43:57.602120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.383 [2024-10-08 18:43:57.602172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.383 qpair failed and we were unable to recover it. 00:33:29.383 [2024-10-08 18:43:57.602328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.383 [2024-10-08 18:43:57.602377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.383 qpair failed and we were unable to recover it. 00:33:29.383 [2024-10-08 18:43:57.602510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.383 [2024-10-08 18:43:57.602536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.383 qpair failed and we were unable to recover it. 00:33:29.383 [2024-10-08 18:43:57.602723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.383 [2024-10-08 18:43:57.602773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.383 qpair failed and we were unable to recover it. 
00:33:29.383 [2024-10-08 18:43:57.602983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.383 [2024-10-08 18:43:57.603034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.383 qpair failed and we were unable to recover it. 00:33:29.383 [2024-10-08 18:43:57.603190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.383 [2024-10-08 18:43:57.603239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.383 qpair failed and we were unable to recover it. 00:33:29.383 [2024-10-08 18:43:57.603433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.383 [2024-10-08 18:43:57.603459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.383 qpair failed and we were unable to recover it. 00:33:29.383 [2024-10-08 18:43:57.603588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.383 [2024-10-08 18:43:57.603625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.383 qpair failed and we were unable to recover it. 00:33:29.383 [2024-10-08 18:43:57.603798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.383 [2024-10-08 18:43:57.603825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.383 qpair failed and we were unable to recover it. 00:33:29.383 [2024-10-08 18:43:57.604016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.383 [2024-10-08 18:43:57.604043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.383 qpair failed and we were unable to recover it. 00:33:29.383 [2024-10-08 18:43:57.604211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.383 [2024-10-08 18:43:57.604261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.383 qpair failed and we were unable to recover it. 00:33:29.383 [2024-10-08 18:43:57.604494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.383 [2024-10-08 18:43:57.604520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.383 qpair failed and we were unable to recover it. 00:33:29.383 [2024-10-08 18:43:57.604683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.383 [2024-10-08 18:43:57.604710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.383 qpair failed and we were unable to recover it. 00:33:29.383 [2024-10-08 18:43:57.604936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.383 [2024-10-08 18:43:57.604987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.383 qpair failed and we were unable to recover it. 
00:33:29.383 [2024-10-08 18:43:57.605153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.383 [2024-10-08 18:43:57.605203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.383 qpair failed and we were unable to recover it. 00:33:29.384 [2024-10-08 18:43:57.605393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.384 [2024-10-08 18:43:57.605443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.384 qpair failed and we were unable to recover it. 00:33:29.384 [2024-10-08 18:43:57.605663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.384 [2024-10-08 18:43:57.605690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.384 qpair failed and we were unable to recover it. 00:33:29.384 [2024-10-08 18:43:57.605880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.384 [2024-10-08 18:43:57.605929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.384 qpair failed and we were unable to recover it. 00:33:29.384 [2024-10-08 18:43:57.606185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.384 [2024-10-08 18:43:57.606234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.384 qpair failed and we were unable to recover it. 00:33:29.384 [2024-10-08 18:43:57.606431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.384 [2024-10-08 18:43:57.606481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.384 qpair failed and we were unable to recover it. 00:33:29.384 [2024-10-08 18:43:57.606654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.384 [2024-10-08 18:43:57.606680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.384 qpair failed and we were unable to recover it. 00:33:29.384 [2024-10-08 18:43:57.606826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.384 [2024-10-08 18:43:57.606853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.384 qpair failed and we were unable to recover it. 00:33:29.384 [2024-10-08 18:43:57.607036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.384 [2024-10-08 18:43:57.607086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.384 qpair failed and we were unable to recover it. 00:33:29.384 [2024-10-08 18:43:57.607333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.384 [2024-10-08 18:43:57.607382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.384 qpair failed and we were unable to recover it. 
00:33:29.384 [2024-10-08 18:43:57.607613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.384 [2024-10-08 18:43:57.607639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.384 qpair failed and we were unable to recover it. 00:33:29.384 [2024-10-08 18:43:57.607843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.384 [2024-10-08 18:43:57.607869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.384 qpair failed and we were unable to recover it. 00:33:29.384 [2024-10-08 18:43:57.608038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.384 [2024-10-08 18:43:57.608081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.384 qpair failed and we were unable to recover it. 00:33:29.384 [2024-10-08 18:43:57.608300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.384 [2024-10-08 18:43:57.608353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.384 qpair failed and we were unable to recover it. 00:33:29.384 [2024-10-08 18:43:57.608533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.384 [2024-10-08 18:43:57.608558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.384 qpair failed and we were unable to recover it. 00:33:29.384 [2024-10-08 18:43:57.608715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.384 [2024-10-08 18:43:57.608742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.384 qpair failed and we were unable to recover it. 00:33:29.384 [2024-10-08 18:43:57.608979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.384 [2024-10-08 18:43:57.609037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.384 qpair failed and we were unable to recover it. 00:33:29.384 [2024-10-08 18:43:57.609268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.384 [2024-10-08 18:43:57.609317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.384 qpair failed and we were unable to recover it. 00:33:29.384 [2024-10-08 18:43:57.609480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.384 [2024-10-08 18:43:57.609506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.384 qpair failed and we were unable to recover it. 00:33:29.384 [2024-10-08 18:43:57.609646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.384 [2024-10-08 18:43:57.609710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.384 qpair failed and we were unable to recover it. 
00:33:29.384 [2024-10-08 18:43:57.609844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.384 [2024-10-08 18:43:57.609888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.384 qpair failed and we were unable to recover it. 00:33:29.384 [2024-10-08 18:43:57.610104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.384 [2024-10-08 18:43:57.610149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.384 qpair failed and we were unable to recover it. 00:33:29.384 [2024-10-08 18:43:57.610313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.384 [2024-10-08 18:43:57.610363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.384 qpair failed and we were unable to recover it. 00:33:29.384 [2024-10-08 18:43:57.610518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.384 [2024-10-08 18:43:57.610557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.384 qpair failed and we were unable to recover it. 00:33:29.384 [2024-10-08 18:43:57.610787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.384 [2024-10-08 18:43:57.610831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.384 qpair failed and we were unable to recover it. 00:33:29.384 [2024-10-08 18:43:57.611060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.384 [2024-10-08 18:43:57.611112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.384 qpair failed and we were unable to recover it. 00:33:29.384 [2024-10-08 18:43:57.611259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.384 [2024-10-08 18:43:57.611310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.384 qpair failed and we were unable to recover it. 00:33:29.384 [2024-10-08 18:43:57.611466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.384 [2024-10-08 18:43:57.611492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.384 qpair failed and we were unable to recover it. 00:33:29.384 [2024-10-08 18:43:57.611685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.384 [2024-10-08 18:43:57.611712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.384 qpair failed and we were unable to recover it. 00:33:29.384 [2024-10-08 18:43:57.611896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.384 [2024-10-08 18:43:57.611944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.384 qpair failed and we were unable to recover it. 
00:33:29.384 [2024-10-08 18:43:57.612156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.384 [2024-10-08 18:43:57.612206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.384 qpair failed and we were unable to recover it. 00:33:29.384 [2024-10-08 18:43:57.612454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.384 [2024-10-08 18:43:57.612503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.384 qpair failed and we were unable to recover it. 00:33:29.384 [2024-10-08 18:43:57.612742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.384 [2024-10-08 18:43:57.612788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.384 qpair failed and we were unable to recover it. 00:33:29.384 [2024-10-08 18:43:57.613040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.384 [2024-10-08 18:43:57.613089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.384 qpair failed and we were unable to recover it. 00:33:29.384 [2024-10-08 18:43:57.613233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.384 [2024-10-08 18:43:57.613283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.384 qpair failed and we were unable to recover it. 00:33:29.384 [2024-10-08 18:43:57.613447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.384 [2024-10-08 18:43:57.613473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.384 qpair failed and we were unable to recover it. 00:33:29.385 [2024-10-08 18:43:57.613701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.385 [2024-10-08 18:43:57.613729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.385 qpair failed and we were unable to recover it. 00:33:29.385 [2024-10-08 18:43:57.613918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.385 [2024-10-08 18:43:57.613961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.385 qpair failed and we were unable to recover it. 00:33:29.385 [2024-10-08 18:43:57.614164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.385 [2024-10-08 18:43:57.614214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.385 qpair failed and we were unable to recover it. 00:33:29.385 [2024-10-08 18:43:57.614346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.385 [2024-10-08 18:43:57.614397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.385 qpair failed and we were unable to recover it. 
00:33:29.385 [2024-10-08 18:43:57.614578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.385 [2024-10-08 18:43:57.614608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.385 qpair failed and we were unable to recover it. 00:33:29.385 [2024-10-08 18:43:57.614812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.385 [2024-10-08 18:43:57.614856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.385 qpair failed and we were unable to recover it. 00:33:29.385 [2024-10-08 18:43:57.615042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.385 [2024-10-08 18:43:57.615093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.385 qpair failed and we were unable to recover it. 00:33:29.385 [2024-10-08 18:43:57.615275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.385 [2024-10-08 18:43:57.615318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.385 qpair failed and we were unable to recover it. 00:33:29.385 [2024-10-08 18:43:57.615417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.385 [2024-10-08 18:43:57.615443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.385 qpair failed and we were unable to recover it. 00:33:29.385 [2024-10-08 18:43:57.615666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.385 [2024-10-08 18:43:57.615693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.385 qpair failed and we were unable to recover it. 00:33:29.385 [2024-10-08 18:43:57.615813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.385 [2024-10-08 18:43:57.615857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.385 qpair failed and we were unable to recover it. 00:33:29.385 [2024-10-08 18:43:57.615984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.385 [2024-10-08 18:43:57.616009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.385 qpair failed and we were unable to recover it. 00:33:29.385 [2024-10-08 18:43:57.616174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.385 [2024-10-08 18:43:57.616241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.385 qpair failed and we were unable to recover it. 00:33:29.385 [2024-10-08 18:43:57.616447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.385 [2024-10-08 18:43:57.616495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.385 qpair failed and we were unable to recover it. 
00:33:29.385 [2024-10-08 18:43:57.616705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.385 [2024-10-08 18:43:57.616733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.385 qpair failed and we were unable to recover it. 00:33:29.385 [2024-10-08 18:43:57.616939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.385 [2024-10-08 18:43:57.616965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.385 qpair failed and we were unable to recover it. 00:33:29.385 [2024-10-08 18:43:57.617134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.385 [2024-10-08 18:43:57.617175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.385 qpair failed and we were unable to recover it. 00:33:29.385 [2024-10-08 18:43:57.617383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.385 [2024-10-08 18:43:57.617408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.385 qpair failed and we were unable to recover it. 00:33:29.385 [2024-10-08 18:43:57.617616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.385 [2024-10-08 18:43:57.617670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.385 qpair failed and we were unable to recover it. 00:33:29.385 [2024-10-08 18:43:57.617846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.385 [2024-10-08 18:43:57.617889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.385 qpair failed and we were unable to recover it. 00:33:29.385 [2024-10-08 18:43:57.618064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.385 [2024-10-08 18:43:57.618116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.385 qpair failed and we were unable to recover it. 00:33:29.385 [2024-10-08 18:43:57.618297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.385 [2024-10-08 18:43:57.618347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.385 qpair failed and we were unable to recover it. 00:33:29.385 [2024-10-08 18:43:57.618589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.385 [2024-10-08 18:43:57.618614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.385 qpair failed and we were unable to recover it. 00:33:29.385 [2024-10-08 18:43:57.618815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.385 [2024-10-08 18:43:57.618858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.385 qpair failed and we were unable to recover it. 
00:33:29.385 [2024-10-08 18:43:57.619050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.385 [2024-10-08 18:43:57.619102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.385 qpair failed and we were unable to recover it. 00:33:29.385 [2024-10-08 18:43:57.619356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.385 [2024-10-08 18:43:57.619407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.385 qpair failed and we were unable to recover it. 00:33:29.385 [2024-10-08 18:43:57.619601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.385 [2024-10-08 18:43:57.619641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.385 qpair failed and we were unable to recover it. 00:33:29.385 [2024-10-08 18:43:57.619875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.385 [2024-10-08 18:43:57.619923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.385 qpair failed and we were unable to recover it. 00:33:29.385 [2024-10-08 18:43:57.620176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.385 [2024-10-08 18:43:57.620227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.385 qpair failed and we were unable to recover it. 00:33:29.385 [2024-10-08 18:43:57.620432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.385 [2024-10-08 18:43:57.620487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.385 qpair failed and we were unable to recover it. 00:33:29.385 [2024-10-08 18:43:57.620682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.385 [2024-10-08 18:43:57.620729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.385 qpair failed and we were unable to recover it. 00:33:29.385 [2024-10-08 18:43:57.620987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.385 [2024-10-08 18:43:57.621045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.385 qpair failed and we were unable to recover it. 00:33:29.385 [2024-10-08 18:43:57.621190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.385 [2024-10-08 18:43:57.621238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.385 qpair failed and we were unable to recover it. 00:33:29.385 [2024-10-08 18:43:57.621391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.385 [2024-10-08 18:43:57.621438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.385 qpair failed and we were unable to recover it. 
00:33:29.385 [2024-10-08 18:43:57.621637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.385 [2024-10-08 18:43:57.621688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.385 qpair failed and we were unable to recover it. 00:33:29.385 [2024-10-08 18:43:57.621839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.385 [2024-10-08 18:43:57.621883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.385 qpair failed and we were unable to recover it. 00:33:29.385 [2024-10-08 18:43:57.622029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.385 [2024-10-08 18:43:57.622096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.385 qpair failed and we were unable to recover it. 00:33:29.385 [2024-10-08 18:43:57.622334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.385 [2024-10-08 18:43:57.622359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.385 qpair failed and we were unable to recover it. 00:33:29.385 [2024-10-08 18:43:57.622561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.385 [2024-10-08 18:43:57.622586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.385 qpair failed and we were unable to recover it. 00:33:29.385 [2024-10-08 18:43:57.622711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.385 [2024-10-08 18:43:57.622738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.386 qpair failed and we were unable to recover it. 00:33:29.386 [2024-10-08 18:43:57.622898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.386 [2024-10-08 18:43:57.622964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.386 qpair failed and we were unable to recover it. 00:33:29.386 [2024-10-08 18:43:57.623095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.386 [2024-10-08 18:43:57.623146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.386 qpair failed and we were unable to recover it. 00:33:29.386 [2024-10-08 18:43:57.623312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.386 [2024-10-08 18:43:57.623373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.386 qpair failed and we were unable to recover it. 00:33:29.386 [2024-10-08 18:43:57.623515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.386 [2024-10-08 18:43:57.623540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.386 qpair failed and we were unable to recover it. 
00:33:29.386 [2024-10-08 18:43:57.623710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.386 [2024-10-08 18:43:57.623752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.386 qpair failed and we were unable to recover it. 00:33:29.386 [2024-10-08 18:43:57.623908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.386 [2024-10-08 18:43:57.623947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.386 qpair failed and we were unable to recover it. 00:33:29.386 [2024-10-08 18:43:57.624046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.386 [2024-10-08 18:43:57.624086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.386 qpair failed and we were unable to recover it. 00:33:29.386 [2024-10-08 18:43:57.624218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.386 [2024-10-08 18:43:57.624244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.386 qpair failed and we were unable to recover it. 00:33:29.386 [2024-10-08 18:43:57.624384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.386 [2024-10-08 18:43:57.624425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.386 qpair failed and we were unable to recover it. 00:33:29.386 [2024-10-08 18:43:57.624670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.386 [2024-10-08 18:43:57.624697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.386 qpair failed and we were unable to recover it. 00:33:29.386 [2024-10-08 18:43:57.624984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.386 [2024-10-08 18:43:57.625042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.386 qpair failed and we were unable to recover it. 00:33:29.386 [2024-10-08 18:43:57.625182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.386 [2024-10-08 18:43:57.625232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.386 qpair failed and we were unable to recover it. 00:33:29.386 [2024-10-08 18:43:57.625443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.386 [2024-10-08 18:43:57.625491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.386 qpair failed and we were unable to recover it. 00:33:29.386 [2024-10-08 18:43:57.625715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.386 [2024-10-08 18:43:57.625742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.386 qpair failed and we were unable to recover it. 
00:33:29.386 [2024-10-08 18:43:57.625962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.386 [2024-10-08 18:43:57.626007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.386 qpair failed and we were unable to recover it. 00:33:29.386 [2024-10-08 18:43:57.626174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.386 [2024-10-08 18:43:57.626224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.386 qpair failed and we were unable to recover it. 00:33:29.386 [2024-10-08 18:43:57.626487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.386 [2024-10-08 18:43:57.626512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.386 qpair failed and we were unable to recover it. 00:33:29.386 [2024-10-08 18:43:57.626748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.386 [2024-10-08 18:43:57.626797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.386 qpair failed and we were unable to recover it. 00:33:29.386 [2024-10-08 18:43:57.627019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.386 [2024-10-08 18:43:57.627069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.386 qpair failed and we were unable to recover it. 00:33:29.386 [2024-10-08 18:43:57.627189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.386 [2024-10-08 18:43:57.627255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.386 qpair failed and we were unable to recover it. 00:33:29.386 [2024-10-08 18:43:57.627380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.386 [2024-10-08 18:43:57.627406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.386 qpair failed and we were unable to recover it. 00:33:29.386 [2024-10-08 18:43:57.627567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.386 [2024-10-08 18:43:57.627593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.386 qpair failed and we were unable to recover it. 00:33:29.386 [2024-10-08 18:43:57.627821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.386 [2024-10-08 18:43:57.627872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.386 qpair failed and we were unable to recover it. 00:33:29.386 [2024-10-08 18:43:57.628112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.386 [2024-10-08 18:43:57.628163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.386 qpair failed and we were unable to recover it. 
00:33:29.386 [2024-10-08 18:43:57.628303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.386 [2024-10-08 18:43:57.628354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.386 qpair failed and we were unable to recover it. 00:33:29.386 [2024-10-08 18:43:57.628506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.386 [2024-10-08 18:43:57.628531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.386 qpair failed and we were unable to recover it. 00:33:29.386 [2024-10-08 18:43:57.628723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.386 [2024-10-08 18:43:57.628788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.386 qpair failed and we were unable to recover it. 00:33:29.386 [2024-10-08 18:43:57.629005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.386 [2024-10-08 18:43:57.629034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.386 qpair failed and we were unable to recover it. 00:33:29.386 [2024-10-08 18:43:57.629227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.386 [2024-10-08 18:43:57.629278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.386 qpair failed and we were unable to recover it. 00:33:29.386 [2024-10-08 18:43:57.629415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.386 [2024-10-08 18:43:57.629440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.386 qpair failed and we were unable to recover it. 00:33:29.386 [2024-10-08 18:43:57.629654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.386 [2024-10-08 18:43:57.629680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.386 qpair failed and we were unable to recover it. 00:33:29.386 [2024-10-08 18:43:57.629910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.386 [2024-10-08 18:43:57.629961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.386 qpair failed and we were unable to recover it. 00:33:29.386 [2024-10-08 18:43:57.630139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.386 [2024-10-08 18:43:57.630191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.386 qpair failed and we were unable to recover it. 00:33:29.386 [2024-10-08 18:43:57.630333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.386 [2024-10-08 18:43:57.630386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.386 qpair failed and we were unable to recover it. 
00:33:29.386 [2024-10-08 18:43:57.630600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.386 [2024-10-08 18:43:57.630625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.386 qpair failed and we were unable to recover it. 00:33:29.386 [2024-10-08 18:43:57.630895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.386 [2024-10-08 18:43:57.630944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.386 qpair failed and we were unable to recover it. 00:33:29.386 [2024-10-08 18:43:57.631088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.386 [2024-10-08 18:43:57.631140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.386 qpair failed and we were unable to recover it. 00:33:29.386 [2024-10-08 18:43:57.631337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.386 [2024-10-08 18:43:57.631389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.386 qpair failed and we were unable to recover it. 00:33:29.386 [2024-10-08 18:43:57.631552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.386 [2024-10-08 18:43:57.631577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.386 qpair failed and we were unable to recover it. 00:33:29.386 [2024-10-08 18:43:57.631771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.386 [2024-10-08 18:43:57.631823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.386 qpair failed and we were unable to recover it. 00:33:29.386 [2024-10-08 18:43:57.632080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.387 [2024-10-08 18:43:57.632132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.387 qpair failed and we were unable to recover it. 00:33:29.387 [2024-10-08 18:43:57.632287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.387 [2024-10-08 18:43:57.632339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.387 qpair failed and we were unable to recover it. 00:33:29.387 [2024-10-08 18:43:57.632495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.387 [2024-10-08 18:43:57.632520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.387 qpair failed and we were unable to recover it. 00:33:29.387 [2024-10-08 18:43:57.632678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.387 [2024-10-08 18:43:57.632706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.387 qpair failed and we were unable to recover it. 
00:33:29.387 [2024-10-08 18:43:57.632971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.387 [2024-10-08 18:43:57.633024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.387 qpair failed and we were unable to recover it. 00:33:29.387 [2024-10-08 18:43:57.633167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.387 [2024-10-08 18:43:57.633217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.387 qpair failed and we were unable to recover it. 00:33:29.387 [2024-10-08 18:43:57.633401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.387 [2024-10-08 18:43:57.633426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.387 qpair failed and we were unable to recover it. 00:33:29.387 [2024-10-08 18:43:57.633673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.387 [2024-10-08 18:43:57.633699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.387 qpair failed and we were unable to recover it. 00:33:29.387 [2024-10-08 18:43:57.633931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.387 [2024-10-08 18:43:57.633983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.387 qpair failed and we were unable to recover it. 00:33:29.387 [2024-10-08 18:43:57.634179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.387 [2024-10-08 18:43:57.634227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.387 qpair failed and we were unable to recover it. 00:33:29.387 [2024-10-08 18:43:57.634471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.387 [2024-10-08 18:43:57.634521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.387 qpair failed and we were unable to recover it. 00:33:29.387 [2024-10-08 18:43:57.634705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.387 [2024-10-08 18:43:57.634731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.387 qpair failed and we were unable to recover it. 00:33:29.387 [2024-10-08 18:43:57.634948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.387 [2024-10-08 18:43:57.635000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.387 qpair failed and we were unable to recover it. 00:33:29.387 [2024-10-08 18:43:57.635203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.387 [2024-10-08 18:43:57.635251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.387 qpair failed and we were unable to recover it. 
00:33:29.387 [2024-10-08 18:43:57.635505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.387 [2024-10-08 18:43:57.635554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.387 qpair failed and we were unable to recover it. 00:33:29.387 [2024-10-08 18:43:57.635763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.387 [2024-10-08 18:43:57.635822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.387 qpair failed and we were unable to recover it. 00:33:29.387 [2024-10-08 18:43:57.636017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.387 [2024-10-08 18:43:57.636065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.387 qpair failed and we were unable to recover it. 00:33:29.387 [2024-10-08 18:43:57.636257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.387 [2024-10-08 18:43:57.636308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.387 qpair failed and we were unable to recover it. 00:33:29.387 [2024-10-08 18:43:57.636557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.387 [2024-10-08 18:43:57.636582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.387 qpair failed and we were unable to recover it. 00:33:29.387 [2024-10-08 18:43:57.636774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.387 [2024-10-08 18:43:57.636827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.387 qpair failed and we were unable to recover it. 00:33:29.387 [2024-10-08 18:43:57.637022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.387 [2024-10-08 18:43:57.637075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.387 qpair failed and we were unable to recover it. 00:33:29.387 [2024-10-08 18:43:57.637348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.387 [2024-10-08 18:43:57.637399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.387 qpair failed and we were unable to recover it. 00:33:29.387 [2024-10-08 18:43:57.637606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.387 [2024-10-08 18:43:57.637631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.387 qpair failed and we were unable to recover it. 00:33:29.387 [2024-10-08 18:43:57.637782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.387 [2024-10-08 18:43:57.637835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.387 qpair failed and we were unable to recover it. 
00:33:29.387 [2024-10-08 18:43:57.638065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.387 [2024-10-08 18:43:57.638116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.387 qpair failed and we were unable to recover it. 00:33:29.387 [2024-10-08 18:43:57.638269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.387 [2024-10-08 18:43:57.638321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.387 qpair failed and we were unable to recover it. 00:33:29.387 [2024-10-08 18:43:57.638544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.387 [2024-10-08 18:43:57.638568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.387 qpair failed and we were unable to recover it. 00:33:29.387 [2024-10-08 18:43:57.638745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.387 [2024-10-08 18:43:57.638803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.387 qpair failed and we were unable to recover it. 00:33:29.387 [2024-10-08 18:43:57.638959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.387 [2024-10-08 18:43:57.639009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.387 qpair failed and we were unable to recover it. 00:33:29.387 [2024-10-08 18:43:57.639219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.387 [2024-10-08 18:43:57.639269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.387 qpair failed and we were unable to recover it. 00:33:29.387 [2024-10-08 18:43:57.639456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.387 [2024-10-08 18:43:57.639482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.387 qpair failed and we were unable to recover it. 00:33:29.387 [2024-10-08 18:43:57.639726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.387 [2024-10-08 18:43:57.639753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.387 qpair failed and we were unable to recover it. 00:33:29.387 [2024-10-08 18:43:57.639930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.387 [2024-10-08 18:43:57.639978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.387 qpair failed and we were unable to recover it. 00:33:29.387 [2024-10-08 18:43:57.640165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.387 [2024-10-08 18:43:57.640217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.387 qpair failed and we were unable to recover it. 
00:33:29.387 [2024-10-08 18:43:57.640405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.387 [2024-10-08 18:43:57.640458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.387 qpair failed and we were unable to recover it. 00:33:29.387 [2024-10-08 18:43:57.640633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.387 [2024-10-08 18:43:57.640678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.387 qpair failed and we were unable to recover it. 00:33:29.387 [2024-10-08 18:43:57.640840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.387 [2024-10-08 18:43:57.640866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.387 qpair failed and we were unable to recover it. 00:33:29.387 [2024-10-08 18:43:57.640991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.387 [2024-10-08 18:43:57.641042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.387 qpair failed and we were unable to recover it. 00:33:29.387 [2024-10-08 18:43:57.641314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.387 [2024-10-08 18:43:57.641359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.387 qpair failed and we were unable to recover it. 00:33:29.387 [2024-10-08 18:43:57.641527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.387 [2024-10-08 18:43:57.641551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.387 qpair failed and we were unable to recover it. 00:33:29.387 [2024-10-08 18:43:57.641760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.388 [2024-10-08 18:43:57.641812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.388 qpair failed and we were unable to recover it. 00:33:29.388 [2024-10-08 18:43:57.641988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.388 [2024-10-08 18:43:57.642038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.388 qpair failed and we were unable to recover it. 00:33:29.388 [2024-10-08 18:43:57.642277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.388 [2024-10-08 18:43:57.642327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.388 qpair failed and we were unable to recover it. 00:33:29.388 [2024-10-08 18:43:57.642493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.388 [2024-10-08 18:43:57.642518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.388 qpair failed and we were unable to recover it. 
00:33:29.388 [2024-10-08 18:43:57.642655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.388 [2024-10-08 18:43:57.642681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.388 qpair failed and we were unable to recover it. 00:33:29.388 [2024-10-08 18:43:57.642853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.388 [2024-10-08 18:43:57.642909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.388 qpair failed and we were unable to recover it. 00:33:29.388 [2024-10-08 18:43:57.643126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.388 [2024-10-08 18:43:57.643150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.388 qpair failed and we were unable to recover it. 00:33:29.388 [2024-10-08 18:43:57.643334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.388 [2024-10-08 18:43:57.643366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.388 qpair failed and we were unable to recover it. 00:33:29.388 [2024-10-08 18:43:57.643518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.388 [2024-10-08 18:43:57.643542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.388 qpair failed and we were unable to recover it. 00:33:29.388 [2024-10-08 18:43:57.643763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.388 [2024-10-08 18:43:57.643812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.388 qpair failed and we were unable to recover it. 00:33:29.388 [2024-10-08 18:43:57.644077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.388 [2024-10-08 18:43:57.644101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.388 qpair failed and we were unable to recover it. 00:33:29.388 [2024-10-08 18:43:57.644377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.388 [2024-10-08 18:43:57.644427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.388 qpair failed and we were unable to recover it. 00:33:29.388 [2024-10-08 18:43:57.644664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.388 [2024-10-08 18:43:57.644690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.388 qpair failed and we were unable to recover it. 00:33:29.388 [2024-10-08 18:43:57.644902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.388 [2024-10-08 18:43:57.644962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.388 qpair failed and we were unable to recover it. 
00:33:29.388 [2024-10-08 18:43:57.645162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.388 [2024-10-08 18:43:57.645210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.388 qpair failed and we were unable to recover it. 00:33:29.388 [2024-10-08 18:43:57.645449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.388 [2024-10-08 18:43:57.645499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.388 qpair failed and we were unable to recover it. 00:33:29.388 [2024-10-08 18:43:57.645756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.388 [2024-10-08 18:43:57.645808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.388 qpair failed and we were unable to recover it. 00:33:29.388 [2024-10-08 18:43:57.645940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.388 [2024-10-08 18:43:57.646008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.388 qpair failed and we were unable to recover it. 00:33:29.388 [2024-10-08 18:43:57.646191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.388 [2024-10-08 18:43:57.646240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.388 qpair failed and we were unable to recover it. 00:33:29.388 [2024-10-08 18:43:57.646413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.388 [2024-10-08 18:43:57.646438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.388 qpair failed and we were unable to recover it. 00:33:29.388 [2024-10-08 18:43:57.646678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.388 [2024-10-08 18:43:57.646704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.388 qpair failed and we were unable to recover it. 00:33:29.388 [2024-10-08 18:43:57.646892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.388 [2024-10-08 18:43:57.646944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.388 qpair failed and we were unable to recover it. 00:33:29.388 [2024-10-08 18:43:57.647119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.388 [2024-10-08 18:43:57.647169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.388 qpair failed and we were unable to recover it. 00:33:29.388 [2024-10-08 18:43:57.647361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.388 [2024-10-08 18:43:57.647406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.388 qpair failed and we were unable to recover it. 
00:33:29.388 [2024-10-08 18:43:57.647527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.388 [2024-10-08 18:43:57.647552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.388 qpair failed and we were unable to recover it. 00:33:29.388 [2024-10-08 18:43:57.647697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.388 [2024-10-08 18:43:57.647767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.388 qpair failed and we were unable to recover it. 00:33:29.388 [2024-10-08 18:43:57.647977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.388 [2024-10-08 18:43:57.648028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.388 qpair failed and we were unable to recover it. 00:33:29.388 [2024-10-08 18:43:57.648331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.388 [2024-10-08 18:43:57.648376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.388 qpair failed and we were unable to recover it. 00:33:29.388 [2024-10-08 18:43:57.648475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.388 [2024-10-08 18:43:57.648500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.388 qpair failed and we were unable to recover it. 00:33:29.388 [2024-10-08 18:43:57.648646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.388 [2024-10-08 18:43:57.648719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.388 qpair failed and we were unable to recover it. 00:33:29.388 [2024-10-08 18:43:57.648984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.388 [2024-10-08 18:43:57.649036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.388 qpair failed and we were unable to recover it. 00:33:29.388 [2024-10-08 18:43:57.649184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.388 [2024-10-08 18:43:57.649227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.388 qpair failed and we were unable to recover it. 00:33:29.388 [2024-10-08 18:43:57.649450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.388 [2024-10-08 18:43:57.649474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.388 qpair failed and we were unable to recover it. 00:33:29.388 [2024-10-08 18:43:57.649613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.388 [2024-10-08 18:43:57.649638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.388 qpair failed and we were unable to recover it. 
00:33:29.388 [2024-10-08 18:43:57.649872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.388 [2024-10-08 18:43:57.649920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.388 qpair failed and we were unable to recover it. 00:33:29.388 [2024-10-08 18:43:57.650157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.388 [2024-10-08 18:43:57.650207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.388 qpair failed and we were unable to recover it. 00:33:29.388 [2024-10-08 18:43:57.650390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.388 [2024-10-08 18:43:57.650440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.388 qpair failed and we were unable to recover it. 00:33:29.388 [2024-10-08 18:43:57.650589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.388 [2024-10-08 18:43:57.650613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.388 qpair failed and we were unable to recover it. 00:33:29.388 [2024-10-08 18:43:57.650856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.388 [2024-10-08 18:43:57.650906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.388 qpair failed and we were unable to recover it. 00:33:29.388 [2024-10-08 18:43:57.651148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.388 [2024-10-08 18:43:57.651197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.388 qpair failed and we were unable to recover it. 00:33:29.389 [2024-10-08 18:43:57.651406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.389 [2024-10-08 18:43:57.651458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.389 qpair failed and we were unable to recover it. 00:33:29.389 [2024-10-08 18:43:57.651707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.389 [2024-10-08 18:43:57.651733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.389 qpair failed and we were unable to recover it. 00:33:29.389 [2024-10-08 18:43:57.651867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.389 [2024-10-08 18:43:57.651915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.389 qpair failed and we were unable to recover it. 00:33:29.389 [2024-10-08 18:43:57.652064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.389 [2024-10-08 18:43:57.652119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.389 qpair failed and we were unable to recover it. 
00:33:29.389 [2024-10-08 18:43:57.652280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.389 [2024-10-08 18:43:57.652333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.389 qpair failed and we were unable to recover it. 00:33:29.389 [2024-10-08 18:43:57.652511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.389 [2024-10-08 18:43:57.652536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.389 qpair failed and we were unable to recover it. 00:33:29.389 [2024-10-08 18:43:57.652677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.389 [2024-10-08 18:43:57.652704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.389 qpair failed and we were unable to recover it. 00:33:29.389 [2024-10-08 18:43:57.652863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.389 [2024-10-08 18:43:57.652913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.389 qpair failed and we were unable to recover it. 00:33:29.389 [2024-10-08 18:43:57.653083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.389 [2024-10-08 18:43:57.653128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.389 qpair failed and we were unable to recover it. 00:33:29.389 [2024-10-08 18:43:57.653291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.389 [2024-10-08 18:43:57.653342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.389 qpair failed and we were unable to recover it. 00:33:29.389 [2024-10-08 18:43:57.653561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.389 [2024-10-08 18:43:57.653586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.389 qpair failed and we were unable to recover it. 00:33:29.389 [2024-10-08 18:43:57.653799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.389 [2024-10-08 18:43:57.653849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.389 qpair failed and we were unable to recover it. 00:33:29.389 [2024-10-08 18:43:57.654048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.389 [2024-10-08 18:43:57.654093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.389 qpair failed and we were unable to recover it. 00:33:29.389 [2024-10-08 18:43:57.654336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.389 [2024-10-08 18:43:57.654386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.389 qpair failed and we were unable to recover it. 
00:33:29.389 [2024-10-08 18:43:57.654528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.389 [2024-10-08 18:43:57.654557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.389 qpair failed and we were unable to recover it. 00:33:29.389 [2024-10-08 18:43:57.654790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.389 [2024-10-08 18:43:57.654844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.389 qpair failed and we were unable to recover it. 00:33:29.389 [2024-10-08 18:43:57.654952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.389 [2024-10-08 18:43:57.655022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.389 qpair failed and we were unable to recover it. 00:33:29.389 [2024-10-08 18:43:57.655242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.389 [2024-10-08 18:43:57.655292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.389 qpair failed and we were unable to recover it. 00:33:29.389 [2024-10-08 18:43:57.655559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.389 [2024-10-08 18:43:57.655584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.389 qpair failed and we were unable to recover it. 00:33:29.389 [2024-10-08 18:43:57.655788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.389 [2024-10-08 18:43:57.655834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.389 qpair failed and we were unable to recover it. 00:33:29.389 [2024-10-08 18:43:57.656083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.389 [2024-10-08 18:43:57.656135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.389 qpair failed and we were unable to recover it. 00:33:29.389 [2024-10-08 18:43:57.656317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.389 [2024-10-08 18:43:57.656367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.389 qpair failed and we were unable to recover it. 00:33:29.389 [2024-10-08 18:43:57.656616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.389 [2024-10-08 18:43:57.656660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.389 qpair failed and we were unable to recover it. 00:33:29.389 [2024-10-08 18:43:57.656872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.389 [2024-10-08 18:43:57.656924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.389 qpair failed and we were unable to recover it. 
00:33:29.389 [2024-10-08 18:43:57.657049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.389 [2024-10-08 18:43:57.657102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.389 qpair failed and we were unable to recover it. 00:33:29.389 [2024-10-08 18:43:57.657251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.389 [2024-10-08 18:43:57.657302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.389 qpair failed and we were unable to recover it. 00:33:29.389 [2024-10-08 18:43:57.657462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.389 [2024-10-08 18:43:57.657486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.389 qpair failed and we were unable to recover it. 00:33:29.389 [2024-10-08 18:43:57.657711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.389 [2024-10-08 18:43:57.657737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.389 qpair failed and we were unable to recover it. 00:33:29.389 [2024-10-08 18:43:57.657897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.389 [2024-10-08 18:43:57.657946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.389 qpair failed and we were unable to recover it. 00:33:29.389 [2024-10-08 18:43:57.658144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.389 [2024-10-08 18:43:57.658196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.389 qpair failed and we were unable to recover it. 00:33:29.389 [2024-10-08 18:43:57.658422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.389 [2024-10-08 18:43:57.658447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.389 qpair failed and we were unable to recover it. 00:33:29.389 [2024-10-08 18:43:57.658705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.389 [2024-10-08 18:43:57.658731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.389 qpair failed and we were unable to recover it. 00:33:29.389 [2024-10-08 18:43:57.658912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.389 [2024-10-08 18:43:57.658962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.389 qpair failed and we were unable to recover it. 00:33:29.389 [2024-10-08 18:43:57.659154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.389 [2024-10-08 18:43:57.659205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.389 qpair failed and we were unable to recover it. 
00:33:29.389 [2024-10-08 18:43:57.659473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.389 [2024-10-08 18:43:57.659524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.389 qpair failed and we were unable to recover it. 00:33:29.389 [2024-10-08 18:43:57.659675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.389 [2024-10-08 18:43:57.659700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.389 qpair failed and we were unable to recover it. 00:33:29.390 [2024-10-08 18:43:57.659965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.390 [2024-10-08 18:43:57.660013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.390 qpair failed and we were unable to recover it. 00:33:29.390 [2024-10-08 18:43:57.660158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.390 [2024-10-08 18:43:57.660210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.390 qpair failed and we were unable to recover it. 00:33:29.390 [2024-10-08 18:43:57.660412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.390 [2024-10-08 18:43:57.660462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.390 qpair failed and we were unable to recover it. 00:33:29.390 [2024-10-08 18:43:57.660661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.390 [2024-10-08 18:43:57.660695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.390 qpair failed and we were unable to recover it. 00:33:29.390 [2024-10-08 18:43:57.660881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.390 [2024-10-08 18:43:57.660932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.390 qpair failed and we were unable to recover it. 00:33:29.390 [2024-10-08 18:43:57.661089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.390 [2024-10-08 18:43:57.661141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.390 qpair failed and we were unable to recover it. 00:33:29.390 [2024-10-08 18:43:57.661326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.390 [2024-10-08 18:43:57.661377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.390 qpair failed and we were unable to recover it. 00:33:29.390 [2024-10-08 18:43:57.661600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.390 [2024-10-08 18:43:57.661624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.390 qpair failed and we were unable to recover it. 
00:33:29.390 [2024-10-08 18:43:57.661877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.390 [2024-10-08 18:43:57.661929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.390 qpair failed and we were unable to recover it. 00:33:29.390 [2024-10-08 18:43:57.662158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.390 [2024-10-08 18:43:57.662208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.390 qpair failed and we were unable to recover it. 00:33:29.390 [2024-10-08 18:43:57.662446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.390 [2024-10-08 18:43:57.662496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.390 qpair failed and we were unable to recover it. 00:33:29.390 [2024-10-08 18:43:57.662678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.390 [2024-10-08 18:43:57.662749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.390 qpair failed and we were unable to recover it. 00:33:29.390 [2024-10-08 18:43:57.663017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.390 [2024-10-08 18:43:57.663068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.390 qpair failed and we were unable to recover it. 00:33:29.390 [2024-10-08 18:43:57.663273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.390 [2024-10-08 18:43:57.663321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.390 qpair failed and we were unable to recover it. 00:33:29.390 [2024-10-08 18:43:57.663501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.390 [2024-10-08 18:43:57.663525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.390 qpair failed and we were unable to recover it. 00:33:29.390 [2024-10-08 18:43:57.663751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.390 [2024-10-08 18:43:57.663778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.390 qpair failed and we were unable to recover it. 00:33:29.390 [2024-10-08 18:43:57.664023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.390 [2024-10-08 18:43:57.664075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.390 qpair failed and we were unable to recover it. 00:33:29.390 [2024-10-08 18:43:57.664305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.390 [2024-10-08 18:43:57.664354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.390 qpair failed and we were unable to recover it. 
00:33:29.390 [2024-10-08 18:43:57.664560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.390 [2024-10-08 18:43:57.664588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.390 qpair failed and we were unable to recover it. 00:33:29.390 [2024-10-08 18:43:57.664728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.390 [2024-10-08 18:43:57.664755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.390 qpair failed and we were unable to recover it. 00:33:29.390 [2024-10-08 18:43:57.664940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.390 [2024-10-08 18:43:57.664994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.390 qpair failed and we were unable to recover it. 00:33:29.390 [2024-10-08 18:43:57.665252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.390 [2024-10-08 18:43:57.665304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.390 qpair failed and we were unable to recover it. 00:33:29.390 [2024-10-08 18:43:57.665507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.390 [2024-10-08 18:43:57.665531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.390 qpair failed and we were unable to recover it. 00:33:29.390 [2024-10-08 18:43:57.665626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.390 [2024-10-08 18:43:57.665680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.390 qpair failed and we were unable to recover it. 00:33:29.390 [2024-10-08 18:43:57.665835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.390 [2024-10-08 18:43:57.665885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.390 qpair failed and we were unable to recover it. 00:33:29.390 [2024-10-08 18:43:57.666026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.390 [2024-10-08 18:43:57.666067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.390 qpair failed and we were unable to recover it. 00:33:29.390 [2024-10-08 18:43:57.666267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.390 [2024-10-08 18:43:57.666318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.390 qpair failed and we were unable to recover it. 00:33:29.390 [2024-10-08 18:43:57.666497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.390 [2024-10-08 18:43:57.666521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.390 qpair failed and we were unable to recover it. 
00:33:29.390 [2024-10-08 18:43:57.666751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.390 [2024-10-08 18:43:57.666804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.390 qpair failed and we were unable to recover it. 00:33:29.390 [2024-10-08 18:43:57.666960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.390 [2024-10-08 18:43:57.667015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.390 qpair failed and we were unable to recover it. 00:33:29.390 [2024-10-08 18:43:57.667211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.390 [2024-10-08 18:43:57.667235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.390 qpair failed and we were unable to recover it. 00:33:29.390 [2024-10-08 18:43:57.667463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.390 [2024-10-08 18:43:57.667488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.390 qpair failed and we were unable to recover it. 00:33:29.390 [2024-10-08 18:43:57.667691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.390 [2024-10-08 18:43:57.667716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.390 qpair failed and we were unable to recover it. 00:33:29.390 [2024-10-08 18:43:57.667974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.390 [2024-10-08 18:43:57.668021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.390 qpair failed and we were unable to recover it. 00:33:29.390 [2024-10-08 18:43:57.668229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.390 [2024-10-08 18:43:57.668280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.390 qpair failed and we were unable to recover it. 00:33:29.390 [2024-10-08 18:43:57.668488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.390 [2024-10-08 18:43:57.668513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.390 qpair failed and we were unable to recover it. 00:33:29.390 [2024-10-08 18:43:57.668749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.390 [2024-10-08 18:43:57.668775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.390 qpair failed and we were unable to recover it. 00:33:29.390 [2024-10-08 18:43:57.668909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.390 [2024-10-08 18:43:57.668966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.390 qpair failed and we were unable to recover it. 
00:33:29.390 [2024-10-08 18:43:57.669105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.390 [2024-10-08 18:43:57.669161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.390 qpair failed and we were unable to recover it. 00:33:29.390 [2024-10-08 18:43:57.669378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.390 [2024-10-08 18:43:57.669403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.390 qpair failed and we were unable to recover it. 00:33:29.390 [2024-10-08 18:43:57.669682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.391 [2024-10-08 18:43:57.669709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.391 qpair failed and we were unable to recover it. 00:33:29.391 [2024-10-08 18:43:57.669858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.391 [2024-10-08 18:43:57.669911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.391 qpair failed and we were unable to recover it. 00:33:29.391 [2024-10-08 18:43:57.670053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.391 [2024-10-08 18:43:57.670104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.391 qpair failed and we were unable to recover it. 00:33:29.391 [2024-10-08 18:43:57.670289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.391 [2024-10-08 18:43:57.670340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.391 qpair failed and we were unable to recover it. 00:33:29.391 [2024-10-08 18:43:57.670587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.391 [2024-10-08 18:43:57.670613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.391 qpair failed and we were unable to recover it. 00:33:29.391 [2024-10-08 18:43:57.670789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.391 [2024-10-08 18:43:57.670840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.391 qpair failed and we were unable to recover it. 00:33:29.391 [2024-10-08 18:43:57.670999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.391 [2024-10-08 18:43:57.671048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.391 qpair failed and we were unable to recover it. 00:33:29.391 [2024-10-08 18:43:57.671218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.391 [2024-10-08 18:43:57.671267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.391 qpair failed and we were unable to recover it. 
00:33:29.391 [2024-10-08 18:43:57.671446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.391 [2024-10-08 18:43:57.671471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.391 qpair failed and we were unable to recover it. 00:33:29.391 [2024-10-08 18:43:57.671692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.391 [2024-10-08 18:43:57.671719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.391 qpair failed and we were unable to recover it. 00:33:29.391 [2024-10-08 18:43:57.671952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.391 [2024-10-08 18:43:57.672000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.391 qpair failed and we were unable to recover it. 00:33:29.391 [2024-10-08 18:43:57.672178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.391 [2024-10-08 18:43:57.672229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.391 qpair failed and we were unable to recover it. 00:33:29.391 [2024-10-08 18:43:57.672413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.391 [2024-10-08 18:43:57.672464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.391 qpair failed and we were unable to recover it. 00:33:29.391 [2024-10-08 18:43:57.672666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.391 [2024-10-08 18:43:57.672694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.391 qpair failed and we were unable to recover it. 00:33:29.391 [2024-10-08 18:43:57.672836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.391 [2024-10-08 18:43:57.672889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.391 qpair failed and we were unable to recover it. 00:33:29.391 [2024-10-08 18:43:57.673127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.391 [2024-10-08 18:43:57.673177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.391 qpair failed and we were unable to recover it. 00:33:29.391 [2024-10-08 18:43:57.673417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.391 [2024-10-08 18:43:57.673465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.391 qpair failed and we were unable to recover it. 00:33:29.391 [2024-10-08 18:43:57.673630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.391 [2024-10-08 18:43:57.673681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.391 qpair failed and we were unable to recover it. 
00:33:29.391 [2024-10-08 18:43:57.673858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.391 [2024-10-08 18:43:57.673889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.391 qpair failed and we were unable to recover it. 00:33:29.391 [2024-10-08 18:43:57.674083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.391 [2024-10-08 18:43:57.674133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.391 qpair failed and we were unable to recover it. 00:33:29.391 [2024-10-08 18:43:57.674292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.391 [2024-10-08 18:43:57.674344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.391 qpair failed and we were unable to recover it. 00:33:29.391 [2024-10-08 18:43:57.674509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.391 [2024-10-08 18:43:57.674534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.391 qpair failed and we were unable to recover it. 00:33:29.391 [2024-10-08 18:43:57.674693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.391 [2024-10-08 18:43:57.674719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.391 qpair failed and we were unable to recover it. 00:33:29.391 [2024-10-08 18:43:57.674975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.391 [2024-10-08 18:43:57.675029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.391 qpair failed and we were unable to recover it. 00:33:29.391 [2024-10-08 18:43:57.675243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.391 [2024-10-08 18:43:57.675292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.391 qpair failed and we were unable to recover it. 00:33:29.391 [2024-10-08 18:43:57.675440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.391 [2024-10-08 18:43:57.675465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.391 qpair failed and we were unable to recover it. 00:33:29.391 [2024-10-08 18:43:57.675661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.391 [2024-10-08 18:43:57.675687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.391 qpair failed and we were unable to recover it. 00:33:29.391 [2024-10-08 18:43:57.675901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.391 [2024-10-08 18:43:57.675955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.391 qpair failed and we were unable to recover it. 
00:33:29.391 [2024-10-08 18:43:57.676178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.391 [2024-10-08 18:43:57.676229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420
00:33:29.391 qpair failed and we were unable to recover it.
00:33:29.397 [... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 18:43:57.676 through 18:43:57.724 ...]
00:33:29.397 [2024-10-08 18:43:57.725111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.397 [2024-10-08 18:43:57.725161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.397 qpair failed and we were unable to recover it. 00:33:29.397 [2024-10-08 18:43:57.725325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.397 [2024-10-08 18:43:57.725349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.397 qpair failed and we were unable to recover it. 00:33:29.397 [2024-10-08 18:43:57.725483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.397 [2024-10-08 18:43:57.725524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.397 qpair failed and we were unable to recover it. 00:33:29.397 [2024-10-08 18:43:57.725671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.397 [2024-10-08 18:43:57.725712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.397 qpair failed and we were unable to recover it. 00:33:29.397 [2024-10-08 18:43:57.725820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.397 [2024-10-08 18:43:57.725846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.397 qpair failed and we were unable to recover it. 00:33:29.397 [2024-10-08 18:43:57.726002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.397 [2024-10-08 18:43:57.726028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.397 qpair failed and we were unable to recover it. 00:33:29.397 [2024-10-08 18:43:57.726166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.397 [2024-10-08 18:43:57.726207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.397 qpair failed and we were unable to recover it. 00:33:29.397 [2024-10-08 18:43:57.726342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.397 [2024-10-08 18:43:57.726383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.397 qpair failed and we were unable to recover it. 00:33:29.397 [2024-10-08 18:43:57.726545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.397 [2024-10-08 18:43:57.726585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.397 qpair failed and we were unable to recover it. 00:33:29.397 [2024-10-08 18:43:57.726724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.397 [2024-10-08 18:43:57.726750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.397 qpair failed and we were unable to recover it. 
00:33:29.397 [2024-10-08 18:43:57.726919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.397 [2024-10-08 18:43:57.726959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.397 qpair failed and we were unable to recover it. 00:33:29.397 [2024-10-08 18:43:57.727117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.397 [2024-10-08 18:43:57.727142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.397 qpair failed and we were unable to recover it. 00:33:29.397 [2024-10-08 18:43:57.727245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.397 [2024-10-08 18:43:57.727271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.397 qpair failed and we were unable to recover it. 00:33:29.397 [2024-10-08 18:43:57.727401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.397 [2024-10-08 18:43:57.727426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.397 qpair failed and we were unable to recover it. 00:33:29.397 [2024-10-08 18:43:57.727589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.397 [2024-10-08 18:43:57.727615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.397 qpair failed and we were unable to recover it. 00:33:29.397 [2024-10-08 18:43:57.727766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.397 [2024-10-08 18:43:57.727793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.397 qpair failed and we were unable to recover it. 00:33:29.397 [2024-10-08 18:43:57.727914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.397 [2024-10-08 18:43:57.727941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.397 qpair failed and we were unable to recover it. 00:33:29.397 [2024-10-08 18:43:57.728080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.397 [2024-10-08 18:43:57.728120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.397 qpair failed and we were unable to recover it. 00:33:29.397 [2024-10-08 18:43:57.728247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.397 [2024-10-08 18:43:57.728273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.397 qpair failed and we were unable to recover it. 00:33:29.397 [2024-10-08 18:43:57.728426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.397 [2024-10-08 18:43:57.728452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.397 qpair failed and we were unable to recover it. 
00:33:29.397 [2024-10-08 18:43:57.728585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.397 [2024-10-08 18:43:57.728629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.397 qpair failed and we were unable to recover it. 00:33:29.397 [2024-10-08 18:43:57.728767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.397 [2024-10-08 18:43:57.728794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.397 qpair failed and we were unable to recover it. 00:33:29.397 [2024-10-08 18:43:57.728880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.397 [2024-10-08 18:43:57.728906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.397 qpair failed and we were unable to recover it. 00:33:29.397 [2024-10-08 18:43:57.729057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.397 [2024-10-08 18:43:57.729083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.397 qpair failed and we were unable to recover it. 00:33:29.397 [2024-10-08 18:43:57.729254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.397 [2024-10-08 18:43:57.729279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.397 qpair failed and we were unable to recover it. 00:33:29.397 [2024-10-08 18:43:57.729449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.397 [2024-10-08 18:43:57.729474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.397 qpair failed and we were unable to recover it. 00:33:29.397 [2024-10-08 18:43:57.729644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.397 [2024-10-08 18:43:57.729675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.397 qpair failed and we were unable to recover it. 00:33:29.397 [2024-10-08 18:43:57.729799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.397 [2024-10-08 18:43:57.729858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.397 qpair failed and we were unable to recover it. 00:33:29.397 [2024-10-08 18:43:57.730038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.397 [2024-10-08 18:43:57.730087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.397 qpair failed and we were unable to recover it. 00:33:29.397 [2024-10-08 18:43:57.730241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.397 [2024-10-08 18:43:57.730290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.397 qpair failed and we were unable to recover it. 
00:33:29.397 [2024-10-08 18:43:57.730421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.397 [2024-10-08 18:43:57.730460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.397 qpair failed and we were unable to recover it. 00:33:29.397 [2024-10-08 18:43:57.730596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.397 [2024-10-08 18:43:57.730621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.397 qpair failed and we were unable to recover it. 00:33:29.397 [2024-10-08 18:43:57.730728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.397 [2024-10-08 18:43:57.730755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.397 qpair failed and we were unable to recover it. 00:33:29.397 [2024-10-08 18:43:57.730904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.398 [2024-10-08 18:43:57.730962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.398 qpair failed and we were unable to recover it. 00:33:29.398 [2024-10-08 18:43:57.731115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.398 [2024-10-08 18:43:57.731140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.398 qpair failed and we were unable to recover it. 00:33:29.398 [2024-10-08 18:43:57.731275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.398 [2024-10-08 18:43:57.731315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.398 qpair failed and we were unable to recover it. 00:33:29.398 [2024-10-08 18:43:57.731498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.398 [2024-10-08 18:43:57.731523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.398 qpair failed and we were unable to recover it. 00:33:29.398 [2024-10-08 18:43:57.731687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.398 [2024-10-08 18:43:57.731713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.398 qpair failed and we were unable to recover it. 00:33:29.398 [2024-10-08 18:43:57.731833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.398 [2024-10-08 18:43:57.731875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.398 qpair failed and we were unable to recover it. 00:33:29.398 [2024-10-08 18:43:57.731994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.398 [2024-10-08 18:43:57.732019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.398 qpair failed and we were unable to recover it. 
00:33:29.398 [2024-10-08 18:43:57.732144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.398 [2024-10-08 18:43:57.732169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.398 qpair failed and we were unable to recover it. 00:33:29.398 [2024-10-08 18:43:57.732308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.398 [2024-10-08 18:43:57.732334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.398 qpair failed and we were unable to recover it. 00:33:29.398 [2024-10-08 18:43:57.732509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.398 [2024-10-08 18:43:57.732534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.398 qpair failed and we were unable to recover it. 00:33:29.398 [2024-10-08 18:43:57.732665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.398 [2024-10-08 18:43:57.732706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.398 qpair failed and we were unable to recover it. 00:33:29.398 [2024-10-08 18:43:57.732854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.398 [2024-10-08 18:43:57.732904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.398 qpair failed and we were unable to recover it. 00:33:29.398 [2024-10-08 18:43:57.733026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.398 [2024-10-08 18:43:57.733084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.398 qpair failed and we were unable to recover it. 00:33:29.398 [2024-10-08 18:43:57.733198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.398 [2024-10-08 18:43:57.733224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.398 qpair failed and we were unable to recover it. 00:33:29.398 [2024-10-08 18:43:57.733398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.398 [2024-10-08 18:43:57.733424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.398 qpair failed and we were unable to recover it. 00:33:29.398 [2024-10-08 18:43:57.733545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.398 [2024-10-08 18:43:57.733570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.398 qpair failed and we were unable to recover it. 00:33:29.398 [2024-10-08 18:43:57.733700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.398 [2024-10-08 18:43:57.733727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.398 qpair failed and we were unable to recover it. 
00:33:29.398 [2024-10-08 18:43:57.733828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.398 [2024-10-08 18:43:57.733854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.398 qpair failed and we were unable to recover it. 00:33:29.398 [2024-10-08 18:43:57.733974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.398 [2024-10-08 18:43:57.733999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.398 qpair failed and we were unable to recover it. 00:33:29.398 [2024-10-08 18:43:57.734179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.398 [2024-10-08 18:43:57.734204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.398 qpair failed and we were unable to recover it. 00:33:29.398 [2024-10-08 18:43:57.734366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.398 [2024-10-08 18:43:57.734406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.398 qpair failed and we were unable to recover it. 00:33:29.398 [2024-10-08 18:43:57.734533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.398 [2024-10-08 18:43:57.734573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.398 qpair failed and we were unable to recover it. 00:33:29.398 [2024-10-08 18:43:57.734721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.398 [2024-10-08 18:43:57.734775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.398 qpair failed and we were unable to recover it. 00:33:29.398 [2024-10-08 18:43:57.734949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.398 [2024-10-08 18:43:57.734974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.398 qpair failed and we were unable to recover it. 00:33:29.398 [2024-10-08 18:43:57.735114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.398 [2024-10-08 18:43:57.735171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.398 qpair failed and we were unable to recover it. 00:33:29.398 [2024-10-08 18:43:57.735340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.398 [2024-10-08 18:43:57.735365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.398 qpair failed and we were unable to recover it. 00:33:29.398 [2024-10-08 18:43:57.735537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.398 [2024-10-08 18:43:57.735562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.398 qpair failed and we were unable to recover it. 
00:33:29.398 [2024-10-08 18:43:57.735750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.398 [2024-10-08 18:43:57.735806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.398 qpair failed and we were unable to recover it. 00:33:29.398 [2024-10-08 18:43:57.735967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.398 [2024-10-08 18:43:57.736018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.398 qpair failed and we were unable to recover it. 00:33:29.398 [2024-10-08 18:43:57.736194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.398 [2024-10-08 18:43:57.736244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.398 qpair failed and we were unable to recover it. 00:33:29.398 [2024-10-08 18:43:57.736387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.398 [2024-10-08 18:43:57.736412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.398 qpair failed and we were unable to recover it. 00:33:29.398 [2024-10-08 18:43:57.736574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.398 [2024-10-08 18:43:57.736614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.398 qpair failed and we were unable to recover it. 00:33:29.398 [2024-10-08 18:43:57.736769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.398 [2024-10-08 18:43:57.736822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.398 qpair failed and we were unable to recover it. 00:33:29.398 [2024-10-08 18:43:57.736986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.398 [2024-10-08 18:43:57.737011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.398 qpair failed and we were unable to recover it. 00:33:29.398 [2024-10-08 18:43:57.737175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.398 [2024-10-08 18:43:57.737200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.398 qpair failed and we were unable to recover it. 00:33:29.398 [2024-10-08 18:43:57.737336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.398 [2024-10-08 18:43:57.737376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.398 qpair failed and we were unable to recover it. 00:33:29.398 [2024-10-08 18:43:57.737560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.398 [2024-10-08 18:43:57.737585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.398 qpair failed and we were unable to recover it. 
00:33:29.398 [2024-10-08 18:43:57.737688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.398 [2024-10-08 18:43:57.737714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.398 qpair failed and we were unable to recover it. 00:33:29.398 [2024-10-08 18:43:57.737903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.398 [2024-10-08 18:43:57.737967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.398 qpair failed and we were unable to recover it. 00:33:29.399 [2024-10-08 18:43:57.738113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.399 [2024-10-08 18:43:57.738164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.399 qpair failed and we were unable to recover it. 00:33:29.399 [2024-10-08 18:43:57.738305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.399 [2024-10-08 18:43:57.738359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.399 qpair failed and we were unable to recover it. 00:33:29.399 [2024-10-08 18:43:57.738531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.399 [2024-10-08 18:43:57.738557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.399 qpair failed and we were unable to recover it. 00:33:29.399 [2024-10-08 18:43:57.738751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.399 [2024-10-08 18:43:57.738808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.399 qpair failed and we were unable to recover it. 00:33:29.399 [2024-10-08 18:43:57.738953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.399 [2024-10-08 18:43:57.739004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.399 qpair failed and we were unable to recover it. 00:33:29.399 [2024-10-08 18:43:57.739153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.399 [2024-10-08 18:43:57.739203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.399 qpair failed and we were unable to recover it. 00:33:29.399 [2024-10-08 18:43:57.739339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.399 [2024-10-08 18:43:57.739378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.399 qpair failed and we were unable to recover it. 00:33:29.399 [2024-10-08 18:43:57.739513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.399 [2024-10-08 18:43:57.739538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.399 qpair failed and we were unable to recover it. 
00:33:29.399 [2024-10-08 18:43:57.739688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.399 [2024-10-08 18:43:57.739716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.399 qpair failed and we were unable to recover it. 00:33:29.399 [2024-10-08 18:43:57.739822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.399 [2024-10-08 18:43:57.739849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.399 qpair failed and we were unable to recover it. 00:33:29.399 [2024-10-08 18:43:57.739966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.399 [2024-10-08 18:43:57.739992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.399 qpair failed and we were unable to recover it. 00:33:29.399 [2024-10-08 18:43:57.740115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.399 [2024-10-08 18:43:57.740141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.399 qpair failed and we were unable to recover it. 00:33:29.399 [2024-10-08 18:43:57.740264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.399 [2024-10-08 18:43:57.740306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.399 qpair failed and we were unable to recover it. 00:33:29.399 [2024-10-08 18:43:57.740483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.399 [2024-10-08 18:43:57.740509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.399 qpair failed and we were unable to recover it. 00:33:29.399 [2024-10-08 18:43:57.740675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.399 [2024-10-08 18:43:57.740702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.399 qpair failed and we were unable to recover it. 00:33:29.399 [2024-10-08 18:43:57.740823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.399 [2024-10-08 18:43:57.740879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.399 qpair failed and we were unable to recover it. 00:33:29.399 [2024-10-08 18:43:57.741025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.399 [2024-10-08 18:43:57.741074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.399 qpair failed and we were unable to recover it. 00:33:29.399 [2024-10-08 18:43:57.741233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.399 [2024-10-08 18:43:57.741258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.399 qpair failed and we were unable to recover it. 
00:33:29.399 [2024-10-08 18:43:57.741430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.399 [2024-10-08 18:43:57.741455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.399 qpair failed and we were unable to recover it. 00:33:29.399 [2024-10-08 18:43:57.741627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.399 [2024-10-08 18:43:57.741656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.399 qpair failed and we were unable to recover it. 00:33:29.399 [2024-10-08 18:43:57.741777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.399 [2024-10-08 18:43:57.741818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.399 qpair failed and we were unable to recover it. 00:33:29.399 [2024-10-08 18:43:57.741988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.399 [2024-10-08 18:43:57.742038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.399 qpair failed and we were unable to recover it. 00:33:29.399 [2024-10-08 18:43:57.742170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.399 [2024-10-08 18:43:57.742220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.399 qpair failed and we were unable to recover it. 00:33:29.399 [2024-10-08 18:43:57.742354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.399 [2024-10-08 18:43:57.742395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.399 qpair failed and we were unable to recover it. 00:33:29.399 [2024-10-08 18:43:57.742531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.399 [2024-10-08 18:43:57.742558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.399 qpair failed and we were unable to recover it. 00:33:29.399 [2024-10-08 18:43:57.742732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.399 [2024-10-08 18:43:57.742789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.399 qpair failed and we were unable to recover it. 00:33:29.399 [2024-10-08 18:43:57.742912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.399 [2024-10-08 18:43:57.742953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.399 qpair failed and we were unable to recover it. 00:33:29.399 [2024-10-08 18:43:57.743050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.399 [2024-10-08 18:43:57.743075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.399 qpair failed and we were unable to recover it. 
00:33:29.399 [2024-10-08 18:43:57.743249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.399 [2024-10-08 18:43:57.743293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.399 qpair failed and we were unable to recover it. 00:33:29.399 [2024-10-08 18:43:57.743467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.399 [2024-10-08 18:43:57.743492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.399 qpair failed and we were unable to recover it. 00:33:29.399 [2024-10-08 18:43:57.743664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.399 [2024-10-08 18:43:57.743691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.399 qpair failed and we were unable to recover it. 00:33:29.399 [2024-10-08 18:43:57.743862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.399 [2024-10-08 18:43:57.743914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.399 qpair failed and we were unable to recover it. 00:33:29.399 [2024-10-08 18:43:57.744090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.399 [2024-10-08 18:43:57.744144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.399 qpair failed and we were unable to recover it. 00:33:29.399 [2024-10-08 18:43:57.744279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.399 [2024-10-08 18:43:57.744305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.399 qpair failed and we were unable to recover it. 00:33:29.399 [2024-10-08 18:43:57.744441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.399 [2024-10-08 18:43:57.744468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.399 qpair failed and we were unable to recover it. 00:33:29.399 [2024-10-08 18:43:57.744579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.399 [2024-10-08 18:43:57.744605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.399 qpair failed and we were unable to recover it. 00:33:29.399 [2024-10-08 18:43:57.744781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.399 [2024-10-08 18:43:57.744849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.399 qpair failed and we were unable to recover it. 00:33:29.399 [2024-10-08 18:43:57.745002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.399 [2024-10-08 18:43:57.745042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.399 qpair failed and we were unable to recover it. 
00:33:29.399 [2024-10-08 18:43:57.745185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.399 [2024-10-08 18:43:57.745224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.400 qpair failed and we were unable to recover it. 00:33:29.400 [2024-10-08 18:43:57.745385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.400 [2024-10-08 18:43:57.745424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.400 qpair failed and we were unable to recover it. 00:33:29.400 [2024-10-08 18:43:57.745558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.400 [2024-10-08 18:43:57.745598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.400 qpair failed and we were unable to recover it. 00:33:29.400 [2024-10-08 18:43:57.745788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.400 [2024-10-08 18:43:57.745814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.400 qpair failed and we were unable to recover it. 00:33:29.400 [2024-10-08 18:43:57.745941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.400 [2024-10-08 18:43:57.745982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.400 qpair failed and we were unable to recover it. 00:33:29.400 [2024-10-08 18:43:57.746109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.400 [2024-10-08 18:43:57.746149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.400 qpair failed and we were unable to recover it. 00:33:29.400 [2024-10-08 18:43:57.746288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.400 [2024-10-08 18:43:57.746314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.400 qpair failed and we were unable to recover it. 00:33:29.400 [2024-10-08 18:43:57.746485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.400 [2024-10-08 18:43:57.746525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.400 qpair failed and we were unable to recover it. 00:33:29.400 [2024-10-08 18:43:57.746665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.400 [2024-10-08 18:43:57.746691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.400 qpair failed and we were unable to recover it. 00:33:29.400 [2024-10-08 18:43:57.746823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.400 [2024-10-08 18:43:57.746878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.400 qpair failed and we were unable to recover it. 
00:33:29.400 [2024-10-08 18:43:57.747070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.400 [2024-10-08 18:43:57.747121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.400 qpair failed and we were unable to recover it. 00:33:29.400 [2024-10-08 18:43:57.747268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.400 [2024-10-08 18:43:57.747292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.400 qpair failed and we were unable to recover it. 00:33:29.400 [2024-10-08 18:43:57.747432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.400 [2024-10-08 18:43:57.747472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.400 qpair failed and we were unable to recover it. 00:33:29.400 [2024-10-08 18:43:57.747594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.400 [2024-10-08 18:43:57.747619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.400 qpair failed and we were unable to recover it. 00:33:29.400 [2024-10-08 18:43:57.747797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.400 [2024-10-08 18:43:57.747854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.400 qpair failed and we were unable to recover it. 00:33:29.400 [2024-10-08 18:43:57.747987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.400 [2024-10-08 18:43:57.748041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.400 qpair failed and we were unable to recover it. 00:33:29.400 [2024-10-08 18:43:57.748219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.400 [2024-10-08 18:43:57.748267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.400 qpair failed and we were unable to recover it. 00:33:29.400 [2024-10-08 18:43:57.748419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.400 [2024-10-08 18:43:57.748444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.400 qpair failed and we were unable to recover it. 00:33:29.400 [2024-10-08 18:43:57.748622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.400 [2024-10-08 18:43:57.748647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.400 qpair failed and we were unable to recover it. 00:33:29.400 [2024-10-08 18:43:57.748845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.400 [2024-10-08 18:43:57.748871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.400 qpair failed and we were unable to recover it. 
00:33:29.400 [2024-10-08 18:43:57.749031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.400 [2024-10-08 18:43:57.749056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420
00:33:29.400 qpair failed and we were unable to recover it.
[... the same three-line failure (posix.c:1055 connect() errno = 111, followed by nvme_tcp.c:2399 sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it.") repeats continuously from 18:43:57.749 through 18:43:57.781 ...]
00:33:29.404 [2024-10-08 18:43:57.781695] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9205f0 (9): Bad file descriptor
00:33:29.404 [2024-10-08 18:43:57.781919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.404 [2024-10-08 18:43:57.781972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420
00:33:29.404 qpair failed and we were unable to recover it.
[... the same failure repeats for tqpair=0x7f18d4000b90 through 18:43:57.782, then again for tqpair=0x7f18d0000b90 from 18:43:57.783 through 18:43:57.789; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:33:29.405 [2024-10-08 18:43:57.789491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.405 [2024-10-08 18:43:57.789517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.405 qpair failed and we were unable to recover it. 00:33:29.405 [2024-10-08 18:43:57.789667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.405 [2024-10-08 18:43:57.789694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.405 qpair failed and we were unable to recover it. 00:33:29.405 [2024-10-08 18:43:57.789871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.405 [2024-10-08 18:43:57.789925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.405 qpair failed and we were unable to recover it. 00:33:29.405 [2024-10-08 18:43:57.790154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.405 [2024-10-08 18:43:57.790203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.405 qpair failed and we were unable to recover it. 00:33:29.405 [2024-10-08 18:43:57.790404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.405 [2024-10-08 18:43:57.790430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.405 qpair failed and we were unable to recover it. 00:33:29.405 [2024-10-08 18:43:57.790657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.406 [2024-10-08 18:43:57.790684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.406 qpair failed and we were unable to recover it. 00:33:29.406 [2024-10-08 18:43:57.790856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.406 [2024-10-08 18:43:57.790902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.406 qpair failed and we were unable to recover it. 00:33:29.406 [2024-10-08 18:43:57.791064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.406 [2024-10-08 18:43:57.791111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.406 qpair failed and we were unable to recover it. 00:33:29.406 [2024-10-08 18:43:57.791290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.406 [2024-10-08 18:43:57.791326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.406 qpair failed and we were unable to recover it. 00:33:29.406 [2024-10-08 18:43:57.791585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.406 [2024-10-08 18:43:57.791611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.406 qpair failed and we were unable to recover it. 
00:33:29.406 [2024-10-08 18:43:57.791779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.406 [2024-10-08 18:43:57.791806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.406 qpair failed and we were unable to recover it. 00:33:29.406 [2024-10-08 18:43:57.791980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.406 [2024-10-08 18:43:57.792027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.406 qpair failed and we were unable to recover it. 00:33:29.406 [2024-10-08 18:43:57.792222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.406 [2024-10-08 18:43:57.792269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.406 qpair failed and we were unable to recover it. 00:33:29.406 [2024-10-08 18:43:57.792395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.406 [2024-10-08 18:43:57.792444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.406 qpair failed and we were unable to recover it. 00:33:29.406 [2024-10-08 18:43:57.792679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.406 [2024-10-08 18:43:57.792706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.406 qpair failed and we were unable to recover it. 00:33:29.406 [2024-10-08 18:43:57.792871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.406 [2024-10-08 18:43:57.792930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.406 qpair failed and we were unable to recover it. 00:33:29.406 [2024-10-08 18:43:57.793127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.406 [2024-10-08 18:43:57.793175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.406 qpair failed and we were unable to recover it. 00:33:29.406 [2024-10-08 18:43:57.793415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.406 [2024-10-08 18:43:57.793481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.406 qpair failed and we were unable to recover it. 00:33:29.406 [2024-10-08 18:43:57.793635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.406 [2024-10-08 18:43:57.793669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.406 qpair failed and we were unable to recover it. 00:33:29.406 [2024-10-08 18:43:57.793822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.406 [2024-10-08 18:43:57.793848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.406 qpair failed and we were unable to recover it. 
00:33:29.406 [2024-10-08 18:43:57.794039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.406 [2024-10-08 18:43:57.794085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.406 qpair failed and we were unable to recover it. 00:33:29.406 [2024-10-08 18:43:57.794272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.406 [2024-10-08 18:43:57.794321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.406 qpair failed and we were unable to recover it. 00:33:29.406 [2024-10-08 18:43:57.794501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.406 [2024-10-08 18:43:57.794527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.406 qpair failed and we were unable to recover it. 00:33:29.406 [2024-10-08 18:43:57.794735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.406 [2024-10-08 18:43:57.794786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.406 qpair failed and we were unable to recover it. 00:33:29.406 [2024-10-08 18:43:57.794998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.406 [2024-10-08 18:43:57.795045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.406 qpair failed and we were unable to recover it. 00:33:29.406 [2024-10-08 18:43:57.795144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.406 [2024-10-08 18:43:57.795192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.406 qpair failed and we were unable to recover it. 00:33:29.406 [2024-10-08 18:43:57.795382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.406 [2024-10-08 18:43:57.795428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.406 qpair failed and we were unable to recover it. 00:33:29.406 [2024-10-08 18:43:57.795619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.406 [2024-10-08 18:43:57.795645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.406 qpair failed and we were unable to recover it. 00:33:29.406 [2024-10-08 18:43:57.795787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.406 [2024-10-08 18:43:57.795836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.406 qpair failed and we were unable to recover it. 00:33:29.406 [2024-10-08 18:43:57.796063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.406 [2024-10-08 18:43:57.796114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.406 qpair failed and we were unable to recover it. 
00:33:29.406 [2024-10-08 18:43:57.796309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.406 [2024-10-08 18:43:57.796360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.406 qpair failed and we were unable to recover it. 00:33:29.406 [2024-10-08 18:43:57.796503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.406 [2024-10-08 18:43:57.796530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.406 qpair failed and we were unable to recover it. 00:33:29.406 [2024-10-08 18:43:57.796670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.406 [2024-10-08 18:43:57.796702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.406 qpair failed and we were unable to recover it. 00:33:29.406 [2024-10-08 18:43:57.796841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.406 [2024-10-08 18:43:57.796889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.406 qpair failed and we were unable to recover it. 00:33:29.406 [2024-10-08 18:43:57.797025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.406 [2024-10-08 18:43:57.797072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.406 qpair failed and we were unable to recover it. 00:33:29.406 [2024-10-08 18:43:57.797210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.406 [2024-10-08 18:43:57.797256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.406 qpair failed and we were unable to recover it. 00:33:29.406 [2024-10-08 18:43:57.797376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.406 [2024-10-08 18:43:57.797402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.406 qpair failed and we were unable to recover it. 00:33:29.406 [2024-10-08 18:43:57.797528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.406 [2024-10-08 18:43:57.797554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.406 qpair failed and we were unable to recover it. 00:33:29.406 [2024-10-08 18:43:57.797655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.406 [2024-10-08 18:43:57.797682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.406 qpair failed and we were unable to recover it. 00:33:29.406 [2024-10-08 18:43:57.797796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.406 [2024-10-08 18:43:57.797822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.406 qpair failed and we were unable to recover it. 
00:33:29.406 [2024-10-08 18:43:57.797973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.406 [2024-10-08 18:43:57.797999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.407 qpair failed and we were unable to recover it. 00:33:29.407 [2024-10-08 18:43:57.798134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.407 [2024-10-08 18:43:57.798159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.407 qpair failed and we were unable to recover it. 00:33:29.407 [2024-10-08 18:43:57.798297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.407 [2024-10-08 18:43:57.798323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.407 qpair failed and we were unable to recover it. 00:33:29.407 [2024-10-08 18:43:57.798446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.407 [2024-10-08 18:43:57.798473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.407 qpair failed and we were unable to recover it. 00:33:29.407 [2024-10-08 18:43:57.798592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.407 [2024-10-08 18:43:57.798618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.407 qpair failed and we were unable to recover it. 00:33:29.407 [2024-10-08 18:43:57.798769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.407 [2024-10-08 18:43:57.798796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.407 qpair failed and we were unable to recover it. 00:33:29.407 [2024-10-08 18:43:57.798918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.407 [2024-10-08 18:43:57.798953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.407 qpair failed and we were unable to recover it. 00:33:29.407 [2024-10-08 18:43:57.799049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.407 [2024-10-08 18:43:57.799075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.407 qpair failed and we were unable to recover it. 00:33:29.407 [2024-10-08 18:43:57.799217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.407 [2024-10-08 18:43:57.799243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.407 qpair failed and we were unable to recover it. 00:33:29.407 [2024-10-08 18:43:57.799370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.407 [2024-10-08 18:43:57.799396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.407 qpair failed and we were unable to recover it. 
00:33:29.407 [2024-10-08 18:43:57.799534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.407 [2024-10-08 18:43:57.799560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.407 qpair failed and we were unable to recover it. 00:33:29.407 [2024-10-08 18:43:57.799708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.407 [2024-10-08 18:43:57.799735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.407 qpair failed and we were unable to recover it. 00:33:29.407 [2024-10-08 18:43:57.799856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.407 [2024-10-08 18:43:57.799882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.407 qpair failed and we were unable to recover it. 00:33:29.407 [2024-10-08 18:43:57.800022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.407 [2024-10-08 18:43:57.800048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.407 qpair failed and we were unable to recover it. 00:33:29.407 [2024-10-08 18:43:57.800185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.407 [2024-10-08 18:43:57.800212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.407 qpair failed and we were unable to recover it. 00:33:29.407 [2024-10-08 18:43:57.800339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.407 [2024-10-08 18:43:57.800365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.407 qpair failed and we were unable to recover it. 00:33:29.407 [2024-10-08 18:43:57.800571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.407 [2024-10-08 18:43:57.800598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.407 qpair failed and we were unable to recover it. 00:33:29.407 [2024-10-08 18:43:57.800766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.407 [2024-10-08 18:43:57.800793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.407 qpair failed and we were unable to recover it. 00:33:29.407 [2024-10-08 18:43:57.800900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.407 [2024-10-08 18:43:57.800926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.407 qpair failed and we were unable to recover it. 00:33:29.407 [2024-10-08 18:43:57.801040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.407 [2024-10-08 18:43:57.801066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.407 qpair failed and we were unable to recover it. 
00:33:29.407 [2024-10-08 18:43:57.801169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.407 [2024-10-08 18:43:57.801195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.407 qpair failed and we were unable to recover it. 00:33:29.407 [2024-10-08 18:43:57.801317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.407 [2024-10-08 18:43:57.801343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.407 qpair failed and we were unable to recover it. 00:33:29.407 [2024-10-08 18:43:57.801470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.407 [2024-10-08 18:43:57.801496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.407 qpair failed and we were unable to recover it. 00:33:29.407 [2024-10-08 18:43:57.801646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.407 [2024-10-08 18:43:57.801679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.407 qpair failed and we were unable to recover it. 00:33:29.407 [2024-10-08 18:43:57.801797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.407 [2024-10-08 18:43:57.801823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.407 qpair failed and we were unable to recover it. 00:33:29.407 [2024-10-08 18:43:57.802057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.407 [2024-10-08 18:43:57.802083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.407 qpair failed and we were unable to recover it. 00:33:29.407 [2024-10-08 18:43:57.802263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.407 [2024-10-08 18:43:57.802290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.407 qpair failed and we were unable to recover it. 00:33:29.407 [2024-10-08 18:43:57.802468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.407 [2024-10-08 18:43:57.802494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.407 qpair failed and we were unable to recover it. 00:33:29.407 [2024-10-08 18:43:57.802683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.407 [2024-10-08 18:43:57.802710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.407 qpair failed and we were unable to recover it. 00:33:29.407 [2024-10-08 18:43:57.802867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.407 [2024-10-08 18:43:57.802894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.407 qpair failed and we were unable to recover it. 
00:33:29.407 [2024-10-08 18:43:57.803033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.407 [2024-10-08 18:43:57.803080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.407 qpair failed and we were unable to recover it. 00:33:29.407 [2024-10-08 18:43:57.803239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.407 [2024-10-08 18:43:57.803285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.407 qpair failed and we were unable to recover it. 00:33:29.407 [2024-10-08 18:43:57.803463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.407 [2024-10-08 18:43:57.803510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.407 qpair failed and we were unable to recover it. 00:33:29.407 [2024-10-08 18:43:57.803717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.407 [2024-10-08 18:43:57.803754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.407 qpair failed and we were unable to recover it. 00:33:29.407 [2024-10-08 18:43:57.803967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.407 [2024-10-08 18:43:57.804032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.407 qpair failed and we were unable to recover it. 00:33:29.407 [2024-10-08 18:43:57.804146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.407 [2024-10-08 18:43:57.804203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.407 qpair failed and we were unable to recover it. 00:33:29.407 [2024-10-08 18:43:57.804398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.407 [2024-10-08 18:43:57.804454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.407 qpair failed and we were unable to recover it. 00:33:29.407 [2024-10-08 18:43:57.804556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.407 [2024-10-08 18:43:57.804582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.407 qpair failed and we were unable to recover it. 00:33:29.407 [2024-10-08 18:43:57.804751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.407 [2024-10-08 18:43:57.804801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.407 qpair failed and we were unable to recover it. 00:33:29.407 [2024-10-08 18:43:57.804976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.407 [2024-10-08 18:43:57.805023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.407 qpair failed and we were unable to recover it. 
00:33:29.407 [2024-10-08 18:43:57.805214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.408 [2024-10-08 18:43:57.805250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.408 qpair failed and we were unable to recover it. 00:33:29.408 [2024-10-08 18:43:57.805413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.408 [2024-10-08 18:43:57.805439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.408 qpair failed and we were unable to recover it. 00:33:29.408 [2024-10-08 18:43:57.805620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.408 [2024-10-08 18:43:57.805662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.408 qpair failed and we were unable to recover it. 00:33:29.408 [2024-10-08 18:43:57.805843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.408 [2024-10-08 18:43:57.805892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.408 qpair failed and we were unable to recover it. 00:33:29.408 [2024-10-08 18:43:57.806105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.408 [2024-10-08 18:43:57.806151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.408 qpair failed and we were unable to recover it. 00:33:29.408 [2024-10-08 18:43:57.806281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.408 [2024-10-08 18:43:57.806329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.408 qpair failed and we were unable to recover it. 00:33:29.408 [2024-10-08 18:43:57.806487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.408 [2024-10-08 18:43:57.806520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.408 qpair failed and we were unable to recover it. 00:33:29.408 [2024-10-08 18:43:57.806721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.408 [2024-10-08 18:43:57.806756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.408 qpair failed and we were unable to recover it. 00:33:29.408 [2024-10-08 18:43:57.806936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.408 [2024-10-08 18:43:57.806962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.408 qpair failed and we were unable to recover it. 00:33:29.408 [2024-10-08 18:43:57.807123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.408 [2024-10-08 18:43:57.807179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.408 qpair failed and we were unable to recover it. 
00:33:29.408 [2024-10-08 18:43:57.807345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.408 [2024-10-08 18:43:57.807371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.408 qpair failed and we were unable to recover it. 00:33:29.408 [2024-10-08 18:43:57.807501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.408 [2024-10-08 18:43:57.807527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.408 qpair failed and we were unable to recover it. 00:33:29.408 [2024-10-08 18:43:57.807722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.408 [2024-10-08 18:43:57.807749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.408 qpair failed and we were unable to recover it. 00:33:29.408 [2024-10-08 18:43:57.807870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.408 [2024-10-08 18:43:57.807896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.408 qpair failed and we were unable to recover it. 00:33:29.408 [2024-10-08 18:43:57.808009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.408 [2024-10-08 18:43:57.808035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.408 qpair failed and we were unable to recover it. 00:33:29.408 [2024-10-08 18:43:57.808190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.408 [2024-10-08 18:43:57.808223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.408 qpair failed and we were unable to recover it. 00:33:29.408 [2024-10-08 18:43:57.808347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.408 [2024-10-08 18:43:57.808373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.408 qpair failed and we were unable to recover it. 00:33:29.408 [2024-10-08 18:43:57.808524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.408 [2024-10-08 18:43:57.808550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.408 qpair failed and we were unable to recover it. 00:33:29.408 [2024-10-08 18:43:57.808737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.408 [2024-10-08 18:43:57.808764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.408 qpair failed and we were unable to recover it. 00:33:29.408 [2024-10-08 18:43:57.808850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.408 [2024-10-08 18:43:57.808876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.408 qpair failed and we were unable to recover it. 
00:33:29.408 [2024-10-08 18:43:57.809025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.408 [2024-10-08 18:43:57.809051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.408 qpair failed and we were unable to recover it. 00:33:29.408 [2024-10-08 18:43:57.809170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.408 [2024-10-08 18:43:57.809205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.408 qpair failed and we were unable to recover it. 00:33:29.408 [2024-10-08 18:43:57.809409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.408 [2024-10-08 18:43:57.809436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.408 qpair failed and we were unable to recover it. 00:33:29.408 [2024-10-08 18:43:57.809596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.408 [2024-10-08 18:43:57.809622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.408 qpair failed and we were unable to recover it. 00:33:29.408 [2024-10-08 18:43:57.809738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.408 [2024-10-08 18:43:57.809793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.408 qpair failed and we were unable to recover it. 00:33:29.408 [2024-10-08 18:43:57.809905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.408 [2024-10-08 18:43:57.809969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.408 qpair failed and we were unable to recover it. 00:33:29.408 [2024-10-08 18:43:57.810153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.408 [2024-10-08 18:43:57.810200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.408 qpair failed and we were unable to recover it. 00:33:29.408 [2024-10-08 18:43:57.810322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.408 [2024-10-08 18:43:57.810348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.408 qpair failed and we were unable to recover it. 00:33:29.408 [2024-10-08 18:43:57.810479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.408 [2024-10-08 18:43:57.810505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.408 qpair failed and we were unable to recover it. 00:33:29.408 [2024-10-08 18:43:57.810685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.408 [2024-10-08 18:43:57.810715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.408 qpair failed and we were unable to recover it. 
00:33:29.408 [2024-10-08 18:43:57.810848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.408 [2024-10-08 18:43:57.810896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.408 qpair failed and we were unable to recover it. 00:33:29.408 [2024-10-08 18:43:57.811054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.408 [2024-10-08 18:43:57.811080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.408 qpair failed and we were unable to recover it. 00:33:29.408 [2024-10-08 18:43:57.811245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.408 [2024-10-08 18:43:57.811282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.408 qpair failed and we were unable to recover it. 00:33:29.408 [2024-10-08 18:43:57.811513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.408 [2024-10-08 18:43:57.811539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.408 qpair failed and we were unable to recover it. 00:33:29.408 [2024-10-08 18:43:57.811733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.408 [2024-10-08 18:43:57.811780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.408 qpair failed and we were unable to recover it. 00:33:29.408 [2024-10-08 18:43:57.811960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.408 [2024-10-08 18:43:57.812008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.408 qpair failed and we were unable to recover it. 00:33:29.408 [2024-10-08 18:43:57.812226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.408 [2024-10-08 18:43:57.812287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.408 qpair failed and we were unable to recover it. 00:33:29.408 [2024-10-08 18:43:57.812384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.408 [2024-10-08 18:43:57.812410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.408 qpair failed and we were unable to recover it. 00:33:29.408 [2024-10-08 18:43:57.812559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.408 [2024-10-08 18:43:57.812586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.408 qpair failed and we were unable to recover it. 00:33:29.408 [2024-10-08 18:43:57.812718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.408 [2024-10-08 18:43:57.812768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.408 qpair failed and we were unable to recover it. 
00:33:29.408 [2024-10-08 18:43:57.812866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.409 [2024-10-08 18:43:57.812901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.409 qpair failed and we were unable to recover it. 00:33:29.409 [2024-10-08 18:43:57.813045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.409 [2024-10-08 18:43:57.813089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.409 qpair failed and we were unable to recover it. 00:33:29.409 [2024-10-08 18:43:57.813271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.409 [2024-10-08 18:43:57.813322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.409 qpair failed and we were unable to recover it. 00:33:29.409 [2024-10-08 18:43:57.813500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.409 [2024-10-08 18:43:57.813526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.409 qpair failed and we were unable to recover it. 00:33:29.409 [2024-10-08 18:43:57.813746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.409 [2024-10-08 18:43:57.813783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.409 qpair failed and we were unable to recover it. 00:33:29.409 [2024-10-08 18:43:57.814044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.409 [2024-10-08 18:43:57.814080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.409 qpair failed and we were unable to recover it. 00:33:29.409 [2024-10-08 18:43:57.814257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.409 [2024-10-08 18:43:57.814307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.409 qpair failed and we were unable to recover it. 00:33:29.409 [2024-10-08 18:43:57.814411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.409 [2024-10-08 18:43:57.814437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.409 qpair failed and we were unable to recover it. 00:33:29.409 [2024-10-08 18:43:57.814640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.409 [2024-10-08 18:43:57.814673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.409 qpair failed and we were unable to recover it. 00:33:29.409 [2024-10-08 18:43:57.814837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.409 [2024-10-08 18:43:57.814884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.409 qpair failed and we were unable to recover it. 
00:33:29.409 [2024-10-08 18:43:57.815052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.409 [2024-10-08 18:43:57.815102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.409 qpair failed and we were unable to recover it. 00:33:29.409 [2024-10-08 18:43:57.815313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.409 [2024-10-08 18:43:57.815340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.409 qpair failed and we were unable to recover it. 00:33:29.409 [2024-10-08 18:43:57.815438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.409 [2024-10-08 18:43:57.815464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.409 qpair failed and we were unable to recover it. 00:33:29.409 [2024-10-08 18:43:57.815645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.409 [2024-10-08 18:43:57.815679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.409 qpair failed and we were unable to recover it. 00:33:29.409 [2024-10-08 18:43:57.815831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.409 [2024-10-08 18:43:57.815899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.409 qpair failed and we were unable to recover it. 00:33:29.409 [2024-10-08 18:43:57.816086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.409 [2024-10-08 18:43:57.816132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.409 qpair failed and we were unable to recover it. 00:33:29.409 [2024-10-08 18:43:57.816310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.409 [2024-10-08 18:43:57.816357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.409 qpair failed and we were unable to recover it. 00:33:29.409 [2024-10-08 18:43:57.816586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.409 [2024-10-08 18:43:57.816613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.409 qpair failed and we were unable to recover it. 00:33:29.409 [2024-10-08 18:43:57.816772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.409 [2024-10-08 18:43:57.816824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.409 qpair failed and we were unable to recover it. 00:33:29.409 [2024-10-08 18:43:57.816935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.409 [2024-10-08 18:43:57.816986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.409 qpair failed and we were unable to recover it. 
00:33:29.409 [2024-10-08 18:43:57.817130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.409 [2024-10-08 18:43:57.817184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.409 qpair failed and we were unable to recover it. 00:33:29.409 [2024-10-08 18:43:57.817376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.409 [2024-10-08 18:43:57.817426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.409 qpair failed and we were unable to recover it. 00:33:29.409 [2024-10-08 18:43:57.817516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.409 [2024-10-08 18:43:57.817545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.409 qpair failed and we were unable to recover it. 00:33:29.409 [2024-10-08 18:43:57.817673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.409 [2024-10-08 18:43:57.817700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.409 qpair failed and we were unable to recover it. 00:33:29.409 [2024-10-08 18:43:57.817799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.409 [2024-10-08 18:43:57.817826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.409 qpair failed and we were unable to recover it. 00:33:29.409 [2024-10-08 18:43:57.817924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.409 [2024-10-08 18:43:57.817951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.409 qpair failed and we were unable to recover it. 00:33:29.409 [2024-10-08 18:43:57.818097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.409 [2024-10-08 18:43:57.818122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.409 qpair failed and we were unable to recover it. 00:33:29.409 [2024-10-08 18:43:57.818329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.409 [2024-10-08 18:43:57.818356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.409 qpair failed and we were unable to recover it. 00:33:29.409 [2024-10-08 18:43:57.818461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.409 [2024-10-08 18:43:57.818487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.409 qpair failed and we were unable to recover it. 00:33:29.409 [2024-10-08 18:43:57.818589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.409 [2024-10-08 18:43:57.818616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.409 qpair failed and we were unable to recover it. 
00:33:29.409 [2024-10-08 18:43:57.818796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.409 [2024-10-08 18:43:57.818823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.409 qpair failed and we were unable to recover it. 00:33:29.409 [2024-10-08 18:43:57.818942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.409 [2024-10-08 18:43:57.818991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.409 qpair failed and we were unable to recover it. 00:33:29.409 [2024-10-08 18:43:57.819103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.409 [2024-10-08 18:43:57.819130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.409 qpair failed and we were unable to recover it. 00:33:29.409 [2024-10-08 18:43:57.819368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.409 [2024-10-08 18:43:57.819394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.409 qpair failed and we were unable to recover it. 00:33:29.409 [2024-10-08 18:43:57.819558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.409 [2024-10-08 18:43:57.819584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.409 qpair failed and we were unable to recover it. 00:33:29.409 [2024-10-08 18:43:57.819755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.409 [2024-10-08 18:43:57.819805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.409 qpair failed and we were unable to recover it. 00:33:29.409 [2024-10-08 18:43:57.820032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.409 [2024-10-08 18:43:57.820079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.409 qpair failed and we were unable to recover it. 00:33:29.409 [2024-10-08 18:43:57.820207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.409 [2024-10-08 18:43:57.820262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.409 qpair failed and we were unable to recover it. 00:33:29.409 [2024-10-08 18:43:57.820401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.409 [2024-10-08 18:43:57.820428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.409 qpair failed and we were unable to recover it. 00:33:29.409 [2024-10-08 18:43:57.820559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.409 [2024-10-08 18:43:57.820585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.409 qpair failed and we were unable to recover it. 
00:33:29.409 [2024-10-08 18:43:57.820724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.410 [2024-10-08 18:43:57.820771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.410 qpair failed and we were unable to recover it. 00:33:29.410 [2024-10-08 18:43:57.821048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.410 [2024-10-08 18:43:57.821095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.410 qpair failed and we were unable to recover it. 00:33:29.410 [2024-10-08 18:43:57.821241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.410 [2024-10-08 18:43:57.821294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.410 qpair failed and we were unable to recover it. 00:33:29.410 [2024-10-08 18:43:57.821452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.410 [2024-10-08 18:43:57.821479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.410 qpair failed and we were unable to recover it. 00:33:29.410 [2024-10-08 18:43:57.821686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.410 [2024-10-08 18:43:57.821713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.410 qpair failed and we were unable to recover it. 00:33:29.410 [2024-10-08 18:43:57.821880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.410 [2024-10-08 18:43:57.821928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.410 qpair failed and we were unable to recover it. 00:33:29.410 [2024-10-08 18:43:57.822038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.410 [2024-10-08 18:43:57.822086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.410 qpair failed and we were unable to recover it. 00:33:29.410 [2024-10-08 18:43:57.822210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.410 [2024-10-08 18:43:57.822256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.410 qpair failed and we were unable to recover it. 00:33:29.410 [2024-10-08 18:43:57.822453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.410 [2024-10-08 18:43:57.822479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.410 qpair failed and we were unable to recover it. 00:33:29.410 [2024-10-08 18:43:57.822637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.410 [2024-10-08 18:43:57.822677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.410 qpair failed and we were unable to recover it. 
00:33:29.410 [2024-10-08 18:43:57.822783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.410 [2024-10-08 18:43:57.822835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.410 qpair failed and we were unable to recover it. 00:33:29.410 [2024-10-08 18:43:57.822949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.410 [2024-10-08 18:43:57.823001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.410 qpair failed and we were unable to recover it. 00:33:29.410 [2024-10-08 18:43:57.823277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.410 [2024-10-08 18:43:57.823324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.410 qpair failed and we were unable to recover it. 00:33:29.410 [2024-10-08 18:43:57.823496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.410 [2024-10-08 18:43:57.823522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.410 qpair failed and we were unable to recover it. 00:33:29.410 [2024-10-08 18:43:57.823645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.410 [2024-10-08 18:43:57.823677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.410 qpair failed and we were unable to recover it. 00:33:29.410 [2024-10-08 18:43:57.823875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.410 [2024-10-08 18:43:57.823923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.410 qpair failed and we were unable to recover it. 00:33:29.410 [2024-10-08 18:43:57.824064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.410 [2024-10-08 18:43:57.824114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.410 qpair failed and we were unable to recover it. 00:33:29.410 [2024-10-08 18:43:57.824261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.410 [2024-10-08 18:43:57.824309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.410 qpair failed and we were unable to recover it. 00:33:29.410 [2024-10-08 18:43:57.824429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.410 [2024-10-08 18:43:57.824455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.410 qpair failed and we were unable to recover it. 00:33:29.410 [2024-10-08 18:43:57.824683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.410 [2024-10-08 18:43:57.824710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.410 qpair failed and we were unable to recover it. 
00:33:29.410 [2024-10-08 18:43:57.824947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.410 [2024-10-08 18:43:57.825001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.410 qpair failed and we were unable to recover it. 00:33:29.410 [2024-10-08 18:43:57.825232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.410 [2024-10-08 18:43:57.825284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.410 qpair failed and we were unable to recover it. 00:33:29.410 [2024-10-08 18:43:57.825484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.410 [2024-10-08 18:43:57.825520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.410 qpair failed and we were unable to recover it. 00:33:29.410 [2024-10-08 18:43:57.825721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.410 [2024-10-08 18:43:57.825779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.410 qpair failed and we were unable to recover it. 00:33:29.410 [2024-10-08 18:43:57.825962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.410 [2024-10-08 18:43:57.826015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.410 qpair failed and we were unable to recover it. 00:33:29.410 [2024-10-08 18:43:57.826165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.410 [2024-10-08 18:43:57.826217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.410 qpair failed and we were unable to recover it. 00:33:29.410 [2024-10-08 18:43:57.826371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.410 [2024-10-08 18:43:57.826418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.410 qpair failed and we were unable to recover it. 00:33:29.410 [2024-10-08 18:43:57.826530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.410 [2024-10-08 18:43:57.826557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.410 qpair failed and we were unable to recover it. 00:33:29.410 [2024-10-08 18:43:57.826707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.410 [2024-10-08 18:43:57.826743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.410 qpair failed and we were unable to recover it. 00:33:29.410 [2024-10-08 18:43:57.826933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.410 [2024-10-08 18:43:57.826960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.410 qpair failed and we were unable to recover it. 
00:33:29.410 [2024-10-08 18:43:57.827137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.410 [2024-10-08 18:43:57.827163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.410 qpair failed and we were unable to recover it. 00:33:29.410 [2024-10-08 18:43:57.827304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.410 [2024-10-08 18:43:57.827354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.410 qpair failed and we were unable to recover it. 00:33:29.410 [2024-10-08 18:43:57.827513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.410 [2024-10-08 18:43:57.827539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.410 qpair failed and we were unable to recover it. 00:33:29.410 [2024-10-08 18:43:57.827661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.410 [2024-10-08 18:43:57.827688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.410 qpair failed and we were unable to recover it. 00:33:29.410 [2024-10-08 18:43:57.827822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.410 [2024-10-08 18:43:57.827870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.410 qpair failed and we were unable to recover it. 00:33:29.410 [2024-10-08 18:43:57.827996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.410 [2024-10-08 18:43:57.828043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.410 qpair failed and we were unable to recover it. 00:33:29.410 [2024-10-08 18:43:57.828168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.410 [2024-10-08 18:43:57.828195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.410 qpair failed and we were unable to recover it. 00:33:29.410 [2024-10-08 18:43:57.828351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.410 [2024-10-08 18:43:57.828377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.410 qpair failed and we were unable to recover it. 00:33:29.410 [2024-10-08 18:43:57.828490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.410 [2024-10-08 18:43:57.828516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.410 qpair failed and we were unable to recover it. 00:33:29.411 [2024-10-08 18:43:57.828604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.411 [2024-10-08 18:43:57.828630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.411 qpair failed and we were unable to recover it. 
00:33:29.411 [2024-10-08 18:43:57.828760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.411 [2024-10-08 18:43:57.828787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.411 qpair failed and we were unable to recover it. 00:33:29.411 [2024-10-08 18:43:57.828905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.411 [2024-10-08 18:43:57.828931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.411 qpair failed and we were unable to recover it. 00:33:29.411 [2024-10-08 18:43:57.829082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.411 [2024-10-08 18:43:57.829112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.411 qpair failed and we were unable to recover it. 00:33:29.411 [2024-10-08 18:43:57.829276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.411 [2024-10-08 18:43:57.829302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.411 qpair failed and we were unable to recover it. 00:33:29.411 [2024-10-08 18:43:57.829456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.411 [2024-10-08 18:43:57.829482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.411 qpair failed and we were unable to recover it. 00:33:29.411 [2024-10-08 18:43:57.829655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.411 [2024-10-08 18:43:57.829682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.411 qpair failed and we were unable to recover it. 00:33:29.411 [2024-10-08 18:43:57.829830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.411 [2024-10-08 18:43:57.829878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.411 qpair failed and we were unable to recover it. 00:33:29.411 [2024-10-08 18:43:57.830034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.411 [2024-10-08 18:43:57.830085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.411 qpair failed and we were unable to recover it. 00:33:29.411 [2024-10-08 18:43:57.830247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.411 [2024-10-08 18:43:57.830291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.411 qpair failed and we were unable to recover it. 00:33:29.411 [2024-10-08 18:43:57.830516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.411 [2024-10-08 18:43:57.830543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.411 qpair failed and we were unable to recover it. 
00:33:29.411 [2024-10-08 18:43:57.830671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.411 [2024-10-08 18:43:57.830698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.411 qpair failed and we were unable to recover it. 00:33:29.411 [2024-10-08 18:43:57.830833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.411 [2024-10-08 18:43:57.830880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.411 qpair failed and we were unable to recover it. 00:33:29.411 [2024-10-08 18:43:57.831009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.411 [2024-10-08 18:43:57.831059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.411 qpair failed and we were unable to recover it. 00:33:29.411 [2024-10-08 18:43:57.831303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.411 [2024-10-08 18:43:57.831354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.411 qpair failed and we were unable to recover it. 00:33:29.411 [2024-10-08 18:43:57.831502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.411 [2024-10-08 18:43:57.831533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.411 qpair failed and we were unable to recover it. 00:33:29.411 [2024-10-08 18:43:57.831664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.411 [2024-10-08 18:43:57.831691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.411 qpair failed and we were unable to recover it. 00:33:29.411 [2024-10-08 18:43:57.831884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.411 [2024-10-08 18:43:57.831933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.411 qpair failed and we were unable to recover it. 00:33:29.411 [2024-10-08 18:43:57.832163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.411 [2024-10-08 18:43:57.832210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.411 qpair failed and we were unable to recover it. 00:33:29.411 [2024-10-08 18:43:57.832311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.411 [2024-10-08 18:43:57.832338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.411 qpair failed and we were unable to recover it. 00:33:29.411 [2024-10-08 18:43:57.832495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.411 [2024-10-08 18:43:57.832521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.411 qpair failed and we were unable to recover it. 
00:33:29.411 [2024-10-08 18:43:57.832736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.411 [2024-10-08 18:43:57.832790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.411 qpair failed and we were unable to recover it. 00:33:29.411 [2024-10-08 18:43:57.832968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.411 [2024-10-08 18:43:57.833014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.411 qpair failed and we were unable to recover it. 00:33:29.411 [2024-10-08 18:43:57.833192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.411 [2024-10-08 18:43:57.833228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.411 qpair failed and we were unable to recover it. 00:33:29.411 [2024-10-08 18:43:57.833420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.411 [2024-10-08 18:43:57.833446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.411 qpair failed and we were unable to recover it. 00:33:29.411 [2024-10-08 18:43:57.833608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.411 [2024-10-08 18:43:57.833634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.411 qpair failed and we were unable to recover it. 00:33:29.411 [2024-10-08 18:43:57.833766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.411 [2024-10-08 18:43:57.833814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.411 qpair failed and we were unable to recover it. 00:33:29.411 [2024-10-08 18:43:57.833987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.411 [2024-10-08 18:43:57.834033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.411 qpair failed and we were unable to recover it. 00:33:29.411 [2024-10-08 18:43:57.834264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.411 [2024-10-08 18:43:57.834291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.411 qpair failed and we were unable to recover it. 00:33:29.411 [2024-10-08 18:43:57.834427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.411 [2024-10-08 18:43:57.834452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.411 qpair failed and we were unable to recover it. 00:33:29.411 [2024-10-08 18:43:57.834572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.411 [2024-10-08 18:43:57.834598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.411 qpair failed and we were unable to recover it. 
00:33:29.411 [2024-10-08 18:43:57.834766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.411 [2024-10-08 18:43:57.834814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.411 qpair failed and we were unable to recover it. 00:33:29.411 [2024-10-08 18:43:57.834976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.411 [2024-10-08 18:43:57.835002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.411 qpair failed and we were unable to recover it. 00:33:29.411 [2024-10-08 18:43:57.835148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.411 [2024-10-08 18:43:57.835193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.411 qpair failed and we were unable to recover it. 00:33:29.411 [2024-10-08 18:43:57.835318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.411 [2024-10-08 18:43:57.835345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.411 qpair failed and we were unable to recover it. 00:33:29.411 [2024-10-08 18:43:57.835494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.411 [2024-10-08 18:43:57.835520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.411 qpair failed and we were unable to recover it. 00:33:29.412 [2024-10-08 18:43:57.835690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.412 [2024-10-08 18:43:57.835716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.412 qpair failed and we were unable to recover it. 00:33:29.412 [2024-10-08 18:43:57.835906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.412 [2024-10-08 18:43:57.835933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.412 qpair failed and we were unable to recover it. 00:33:29.412 [2024-10-08 18:43:57.836166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.412 [2024-10-08 18:43:57.836213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.412 qpair failed and we were unable to recover it. 00:33:29.412 [2024-10-08 18:43:57.836385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.412 [2024-10-08 18:43:57.836411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.412 qpair failed and we were unable to recover it. 00:33:29.412 [2024-10-08 18:43:57.836529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.412 [2024-10-08 18:43:57.836556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.412 qpair failed and we were unable to recover it. 
00:33:29.412 [2024-10-08 18:43:57.836752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.412 [2024-10-08 18:43:57.836801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.412 qpair failed and we were unable to recover it. 00:33:29.412 [2024-10-08 18:43:57.836967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.412 [2024-10-08 18:43:57.837014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.412 qpair failed and we were unable to recover it. 00:33:29.412 [2024-10-08 18:43:57.837182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.412 [2024-10-08 18:43:57.837233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.412 qpair failed and we were unable to recover it. 00:33:29.412 [2024-10-08 18:43:57.837404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.412 [2024-10-08 18:43:57.837430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.412 qpair failed and we were unable to recover it. 00:33:29.412 [2024-10-08 18:43:57.837581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.412 [2024-10-08 18:43:57.837607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.412 qpair failed and we were unable to recover it. 00:33:29.412 [2024-10-08 18:43:57.837746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.412 [2024-10-08 18:43:57.837792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.412 qpair failed and we were unable to recover it. 00:33:29.412 [2024-10-08 18:43:57.837946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.412 [2024-10-08 18:43:57.837993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.412 qpair failed and we were unable to recover it. 00:33:29.412 [2024-10-08 18:43:57.838182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.412 [2024-10-08 18:43:57.838228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.412 qpair failed and we were unable to recover it. 00:33:29.412 [2024-10-08 18:43:57.838450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.412 [2024-10-08 18:43:57.838476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.412 qpair failed and we were unable to recover it. 00:33:29.412 [2024-10-08 18:43:57.838635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.412 [2024-10-08 18:43:57.838678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.412 qpair failed and we were unable to recover it. 
00:33:29.412 [2024-10-08 18:43:57.838815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.412 [2024-10-08 18:43:57.838862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.412 qpair failed and we were unable to recover it. 00:33:29.412 [2024-10-08 18:43:57.838997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.412 [2024-10-08 18:43:57.839045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.412 qpair failed and we were unable to recover it. 00:33:29.412 [2024-10-08 18:43:57.839207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.412 [2024-10-08 18:43:57.839253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.412 qpair failed and we were unable to recover it. 00:33:29.412 [2024-10-08 18:43:57.839409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.412 [2024-10-08 18:43:57.839456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.412 qpair failed and we were unable to recover it. 00:33:29.412 [2024-10-08 18:43:57.839705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.412 [2024-10-08 18:43:57.839742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.412 qpair failed and we were unable to recover it. 00:33:29.412 [2024-10-08 18:43:57.839918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.412 [2024-10-08 18:43:57.839962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.412 qpair failed and we were unable to recover it. 00:33:29.412 [2024-10-08 18:43:57.840093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.412 [2024-10-08 18:43:57.840149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.412 qpair failed and we were unable to recover it. 00:33:29.412 [2024-10-08 18:43:57.840368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.412 [2024-10-08 18:43:57.840395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.412 qpair failed and we were unable to recover it. 00:33:29.412 [2024-10-08 18:43:57.840581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.412 [2024-10-08 18:43:57.840607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.412 qpair failed and we were unable to recover it. 00:33:29.412 [2024-10-08 18:43:57.840761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.412 [2024-10-08 18:43:57.840788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.412 qpair failed and we were unable to recover it. 
00:33:29.412 [2024-10-08 18:43:57.840919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.412 [2024-10-08 18:43:57.840965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.412 qpair failed and we were unable to recover it. 00:33:29.412 [2024-10-08 18:43:57.841174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.412 [2024-10-08 18:43:57.841225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.412 qpair failed and we were unable to recover it. 00:33:29.412 [2024-10-08 18:43:57.841459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.412 [2024-10-08 18:43:57.841503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.412 qpair failed and we were unable to recover it. 00:33:29.412 [2024-10-08 18:43:57.841760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.412 [2024-10-08 18:43:57.841807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.412 qpair failed and we were unable to recover it. 00:33:29.412 [2024-10-08 18:43:57.842020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.412 [2024-10-08 18:43:57.842068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.412 qpair failed and we were unable to recover it. 00:33:29.412 [2024-10-08 18:43:57.842312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.412 [2024-10-08 18:43:57.842358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.412 qpair failed and we were unable to recover it. 00:33:29.412 [2024-10-08 18:43:57.842495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.412 [2024-10-08 18:43:57.842522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.412 qpair failed and we were unable to recover it. 00:33:29.412 [2024-10-08 18:43:57.842747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.412 [2024-10-08 18:43:57.842791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.412 qpair failed and we were unable to recover it. 00:33:29.412 [2024-10-08 18:43:57.842964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.412 [2024-10-08 18:43:57.843011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.412 qpair failed and we were unable to recover it. 00:33:29.412 [2024-10-08 18:43:57.843151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.412 [2024-10-08 18:43:57.843179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.412 qpair failed and we were unable to recover it. 
00:33:29.412 [2024-10-08 18:43:57.843276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.412 [2024-10-08 18:43:57.843302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.412 qpair failed and we were unable to recover it. 00:33:29.412 [2024-10-08 18:43:57.843436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.412 [2024-10-08 18:43:57.843462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.412 qpair failed and we were unable to recover it. 00:33:29.412 [2024-10-08 18:43:57.843614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.412 [2024-10-08 18:43:57.843640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.412 qpair failed and we were unable to recover it. 00:33:29.412 [2024-10-08 18:43:57.843824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.412 [2024-10-08 18:43:57.843851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.412 qpair failed and we were unable to recover it. 00:33:29.412 [2024-10-08 18:43:57.843975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.412 [2024-10-08 18:43:57.844001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.413 qpair failed and we were unable to recover it. 00:33:29.413 [2024-10-08 18:43:57.844084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.413 [2024-10-08 18:43:57.844110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.413 qpair failed and we were unable to recover it. 00:33:29.413 [2024-10-08 18:43:57.844264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.413 [2024-10-08 18:43:57.844313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.413 qpair failed and we were unable to recover it. 00:33:29.413 [2024-10-08 18:43:57.844470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.413 [2024-10-08 18:43:57.844496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.413 qpair failed and we were unable to recover it. 00:33:29.413 [2024-10-08 18:43:57.844614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.413 [2024-10-08 18:43:57.844640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.413 qpair failed and we were unable to recover it. 00:33:29.413 [2024-10-08 18:43:57.844746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.413 [2024-10-08 18:43:57.844773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.413 qpair failed and we were unable to recover it. 
00:33:29.413 [2024-10-08 18:43:57.844940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.413 [2024-10-08 18:43:57.844994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.413 qpair failed and we were unable to recover it. 00:33:29.413 [2024-10-08 18:43:57.845218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.413 [2024-10-08 18:43:57.845270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.413 qpair failed and we were unable to recover it. 00:33:29.413 [2024-10-08 18:43:57.845458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.413 [2024-10-08 18:43:57.845488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.413 qpair failed and we were unable to recover it. 00:33:29.413 [2024-10-08 18:43:57.845615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.413 [2024-10-08 18:43:57.845657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.413 qpair failed and we were unable to recover it. 00:33:29.413 [2024-10-08 18:43:57.845800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.413 [2024-10-08 18:43:57.845849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.413 qpair failed and we were unable to recover it. 00:33:29.413 [2024-10-08 18:43:57.846094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.413 [2024-10-08 18:43:57.846129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.413 qpair failed and we were unable to recover it. 00:33:29.413 [2024-10-08 18:43:57.846287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.413 [2024-10-08 18:43:57.846334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.413 qpair failed and we were unable to recover it. 00:33:29.413 [2024-10-08 18:43:57.846471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.413 [2024-10-08 18:43:57.846497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.413 qpair failed and we were unable to recover it. 00:33:29.413 [2024-10-08 18:43:57.846588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.413 [2024-10-08 18:43:57.846614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.413 qpair failed and we were unable to recover it. 00:33:29.413 [2024-10-08 18:43:57.846741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.413 [2024-10-08 18:43:57.846796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.413 qpair failed and we were unable to recover it. 
00:33:29.413 [2024-10-08 18:43:57.846984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.413 [2024-10-08 18:43:57.847010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.413 qpair failed and we were unable to recover it. 00:33:29.413 [2024-10-08 18:43:57.847137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.413 [2024-10-08 18:43:57.847164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.413 qpair failed and we were unable to recover it. 00:33:29.413 [2024-10-08 18:43:57.847288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.413 [2024-10-08 18:43:57.847315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.413 qpair failed and we were unable to recover it. 00:33:29.413 [2024-10-08 18:43:57.847491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.413 [2024-10-08 18:43:57.847525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.413 qpair failed and we were unable to recover it. 00:33:29.413 [2024-10-08 18:43:57.847746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.413 [2024-10-08 18:43:57.847773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.413 qpair failed and we were unable to recover it. 00:33:29.413 [2024-10-08 18:43:57.848016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.413 [2024-10-08 18:43:57.848042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.413 qpair failed and we were unable to recover it. 00:33:29.413 [2024-10-08 18:43:57.848181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.413 [2024-10-08 18:43:57.848229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.413 qpair failed and we were unable to recover it. 00:33:29.413 [2024-10-08 18:43:57.848374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.413 [2024-10-08 18:43:57.848401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.413 qpair failed and we were unable to recover it. 00:33:29.413 [2024-10-08 18:43:57.848519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.413 [2024-10-08 18:43:57.848545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.413 qpair failed and we were unable to recover it. 00:33:29.413 [2024-10-08 18:43:57.848735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.413 [2024-10-08 18:43:57.848762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.413 qpair failed and we were unable to recover it. 
00:33:29.413 [2024-10-08 18:43:57.848910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.413 [2024-10-08 18:43:57.848939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420
00:33:29.413 qpair failed and we were unable to recover it.
00:33:29.413 [... the same three-message sequence (posix.c:1055:posix_sock_create: connect() failed, errno = 111 / nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: sock connection error / qpair failed and we were unable to recover it.) repeats for every connection attempt logged between 18:43:57.848910 and 18:43:57.895448, always against addr=10.0.0.2, port=4420; the reported tqpair is 0x7f18d0000b90 through 18:43:57.865372, 0x7f18d4000b90 from 18:43:57.865616, and 0x912630 from 18:43:57.867940 onward ...]
00:33:29.419 [2024-10-08 18:43:57.895385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.419 [2024-10-08 18:43:57.895448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420
00:33:29.419 qpair failed and we were unable to recover it.
00:33:29.419 [2024-10-08 18:43:57.895723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.419 [2024-10-08 18:43:57.895750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:29.419 qpair failed and we were unable to recover it. 00:33:29.419 [2024-10-08 18:43:57.895851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.419 [2024-10-08 18:43:57.895882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:29.419 qpair failed and we were unable to recover it. 00:33:29.419 [2024-10-08 18:43:57.896020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.419 [2024-10-08 18:43:57.896046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:29.419 qpair failed and we were unable to recover it. 00:33:29.419 [2024-10-08 18:43:57.896162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.419 [2024-10-08 18:43:57.896188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:29.419 qpair failed and we were unable to recover it. 00:33:29.419 [2024-10-08 18:43:57.896309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.419 [2024-10-08 18:43:57.896334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:29.696 qpair failed and we were unable to recover it. 00:33:29.696 [2024-10-08 18:43:57.896469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.696 [2024-10-08 18:43:57.896534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:29.696 qpair failed and we were unable to recover it. 00:33:29.696 [2024-10-08 18:43:57.896763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.696 [2024-10-08 18:43:57.896790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:29.696 qpair failed and we were unable to recover it. 00:33:29.696 [2024-10-08 18:43:57.896882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.696 [2024-10-08 18:43:57.896908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:29.696 qpair failed and we were unable to recover it. 00:33:29.696 [2024-10-08 18:43:57.897126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.696 [2024-10-08 18:43:57.897190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:29.696 qpair failed and we were unable to recover it. 00:33:29.696 [2024-10-08 18:43:57.897381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.696 [2024-10-08 18:43:57.897406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:29.696 qpair failed and we were unable to recover it. 
00:33:29.696 [2024-10-08 18:43:57.897528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.696 [2024-10-08 18:43:57.897556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:29.696 qpair failed and we were unable to recover it. 00:33:29.696 [2024-10-08 18:43:57.897675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.696 [2024-10-08 18:43:57.897702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:29.696 qpair failed and we were unable to recover it. 00:33:29.696 [2024-10-08 18:43:57.897851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.696 [2024-10-08 18:43:57.897878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:29.696 qpair failed and we were unable to recover it. 00:33:29.696 [2024-10-08 18:43:57.897999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.696 [2024-10-08 18:43:57.898026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:29.696 qpair failed and we were unable to recover it. 00:33:29.696 [2024-10-08 18:43:57.898157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.696 [2024-10-08 18:43:57.898185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:29.696 qpair failed and we were unable to recover it. 00:33:29.696 [2024-10-08 18:43:57.898333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.696 [2024-10-08 18:43:57.898374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.696 qpair failed and we were unable to recover it. 00:33:29.696 [2024-10-08 18:43:57.898530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.696 [2024-10-08 18:43:57.898581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.696 qpair failed and we were unable to recover it. 00:33:29.696 [2024-10-08 18:43:57.898761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.696 [2024-10-08 18:43:57.898791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.696 qpair failed and we were unable to recover it. 00:33:29.696 [2024-10-08 18:43:57.898908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.696 [2024-10-08 18:43:57.898935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.696 qpair failed and we were unable to recover it. 00:33:29.696 [2024-10-08 18:43:57.899111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.696 [2024-10-08 18:43:57.899138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.696 qpair failed and we were unable to recover it. 
00:33:29.696 [2024-10-08 18:43:57.899321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.696 [2024-10-08 18:43:57.899373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.696 qpair failed and we were unable to recover it. 00:33:29.696 [2024-10-08 18:43:57.899532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.696 [2024-10-08 18:43:57.899583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.696 qpair failed and we were unable to recover it. 00:33:29.696 [2024-10-08 18:43:57.899719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.696 [2024-10-08 18:43:57.899746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.696 qpair failed and we were unable to recover it. 00:33:29.696 [2024-10-08 18:43:57.899874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.696 [2024-10-08 18:43:57.899901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.696 qpair failed and we were unable to recover it. 00:33:29.696 [2024-10-08 18:43:57.900086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.696 [2024-10-08 18:43:57.900138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.696 qpair failed and we were unable to recover it. 00:33:29.696 [2024-10-08 18:43:57.900375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.696 [2024-10-08 18:43:57.900427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.696 qpair failed and we were unable to recover it. 00:33:29.696 [2024-10-08 18:43:57.900585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.696 [2024-10-08 18:43:57.900611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.696 qpair failed and we were unable to recover it. 00:33:29.696 [2024-10-08 18:43:57.900752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.696 [2024-10-08 18:43:57.900779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.696 qpair failed and we were unable to recover it. 00:33:29.696 [2024-10-08 18:43:57.900932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.696 [2024-10-08 18:43:57.900986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.696 qpair failed and we were unable to recover it. 00:33:29.696 [2024-10-08 18:43:57.901157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.696 [2024-10-08 18:43:57.901207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.696 qpair failed and we were unable to recover it. 
00:33:29.696 [2024-10-08 18:43:57.901399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.696 [2024-10-08 18:43:57.901442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.696 qpair failed and we were unable to recover it. 00:33:29.696 [2024-10-08 18:43:57.901566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.696 [2024-10-08 18:43:57.901597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.696 qpair failed and we were unable to recover it. 00:33:29.696 [2024-10-08 18:43:57.901765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.696 [2024-10-08 18:43:57.901817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.696 qpair failed and we were unable to recover it. 00:33:29.696 [2024-10-08 18:43:57.902065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.696 [2024-10-08 18:43:57.902091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.696 qpair failed and we were unable to recover it. 00:33:29.696 [2024-10-08 18:43:57.902250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.696 [2024-10-08 18:43:57.902277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.696 qpair failed and we were unable to recover it. 00:33:29.696 [2024-10-08 18:43:57.902471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.696 [2024-10-08 18:43:57.902498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.696 qpair failed and we were unable to recover it. 00:33:29.697 [2024-10-08 18:43:57.902744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-10-08 18:43:57.902796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 00:33:29.697 [2024-10-08 18:43:57.902912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-10-08 18:43:57.902973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 00:33:29.697 [2024-10-08 18:43:57.903102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-10-08 18:43:57.903156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 00:33:29.697 [2024-10-08 18:43:57.903293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-10-08 18:43:57.903346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 
00:33:29.697 [2024-10-08 18:43:57.903533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-10-08 18:43:57.903559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 00:33:29.697 [2024-10-08 18:43:57.903710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-10-08 18:43:57.903737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 00:33:29.697 [2024-10-08 18:43:57.903865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-10-08 18:43:57.903892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 00:33:29.697 [2024-10-08 18:43:57.904111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-10-08 18:43:57.904137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 00:33:29.697 [2024-10-08 18:43:57.904288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-10-08 18:43:57.904315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 00:33:29.697 [2024-10-08 18:43:57.904436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-10-08 18:43:57.904462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 00:33:29.697 [2024-10-08 18:43:57.904639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-10-08 18:43:57.904671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 00:33:29.697 [2024-10-08 18:43:57.904803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-10-08 18:43:57.904850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 00:33:29.697 [2024-10-08 18:43:57.905008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-10-08 18:43:57.905034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 00:33:29.697 [2024-10-08 18:43:57.905150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-10-08 18:43:57.905177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 
00:33:29.697 [2024-10-08 18:43:57.905383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-10-08 18:43:57.905410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 00:33:29.697 [2024-10-08 18:43:57.905537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-10-08 18:43:57.905563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 00:33:29.697 [2024-10-08 18:43:57.905706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-10-08 18:43:57.905733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 00:33:29.697 [2024-10-08 18:43:57.905883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-10-08 18:43:57.905944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 00:33:29.697 [2024-10-08 18:43:57.906163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-10-08 18:43:57.906189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 00:33:29.697 [2024-10-08 18:43:57.906376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-10-08 18:43:57.906425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 00:33:29.697 [2024-10-08 18:43:57.906666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-10-08 18:43:57.906715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 00:33:29.697 [2024-10-08 18:43:57.906881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-10-08 18:43:57.906933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 00:33:29.697 [2024-10-08 18:43:57.907107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-10-08 18:43:57.907159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 00:33:29.697 [2024-10-08 18:43:57.907376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-10-08 18:43:57.907427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 
00:33:29.697 [2024-10-08 18:43:57.907605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-10-08 18:43:57.907644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 00:33:29.697 [2024-10-08 18:43:57.907814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-10-08 18:43:57.907865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 00:33:29.697 [2024-10-08 18:43:57.908109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-10-08 18:43:57.908160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 00:33:29.697 [2024-10-08 18:43:57.908350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-10-08 18:43:57.908403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 00:33:29.697 [2024-10-08 18:43:57.908558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-10-08 18:43:57.908584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 00:33:29.697 [2024-10-08 18:43:57.908769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-10-08 18:43:57.908820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 00:33:29.697 [2024-10-08 18:43:57.909004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-10-08 18:43:57.909061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 00:33:29.697 [2024-10-08 18:43:57.909323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-10-08 18:43:57.909370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 00:33:29.697 [2024-10-08 18:43:57.909547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-10-08 18:43:57.909571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 00:33:29.697 [2024-10-08 18:43:57.909791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-10-08 18:43:57.909841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 
00:33:29.697 [2024-10-08 18:43:57.909958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-10-08 18:43:57.910014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 00:33:29.697 [2024-10-08 18:43:57.910162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-10-08 18:43:57.910212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 00:33:29.697 [2024-10-08 18:43:57.910403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-10-08 18:43:57.910453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 00:33:29.697 [2024-10-08 18:43:57.910598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-10-08 18:43:57.910624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 00:33:29.697 [2024-10-08 18:43:57.910811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.697 [2024-10-08 18:43:57.910863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.697 qpair failed and we were unable to recover it. 00:33:29.697 [2024-10-08 18:43:57.911048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.698 [2024-10-08 18:43:57.911102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.698 qpair failed and we were unable to recover it. 00:33:29.698 [2024-10-08 18:43:57.911260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.698 [2024-10-08 18:43:57.911332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.698 qpair failed and we were unable to recover it. 00:33:29.698 [2024-10-08 18:43:57.911534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.698 [2024-10-08 18:43:57.911558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.698 qpair failed and we were unable to recover it. 00:33:29.698 [2024-10-08 18:43:57.911679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.698 [2024-10-08 18:43:57.911706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.698 qpair failed and we were unable to recover it. 00:33:29.698 [2024-10-08 18:43:57.911851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.698 [2024-10-08 18:43:57.911902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.698 qpair failed and we were unable to recover it. 
00:33:29.698 [2024-10-08 18:43:57.912053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.698 [2024-10-08 18:43:57.912106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.698 qpair failed and we were unable to recover it. 00:33:29.698 [2024-10-08 18:43:57.912292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.698 [2024-10-08 18:43:57.912342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.698 qpair failed and we were unable to recover it. 00:33:29.698 [2024-10-08 18:43:57.912525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.698 [2024-10-08 18:43:57.912550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.698 qpair failed and we were unable to recover it. 00:33:29.698 [2024-10-08 18:43:57.912751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.698 [2024-10-08 18:43:57.912803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.698 qpair failed and we were unable to recover it. 00:33:29.698 [2024-10-08 18:43:57.912942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.698 [2024-10-08 18:43:57.912990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.698 qpair failed and we were unable to recover it. 00:33:29.698 [2024-10-08 18:43:57.913163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.698 [2024-10-08 18:43:57.913211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.698 qpair failed and we were unable to recover it. 00:33:29.698 [2024-10-08 18:43:57.913394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.698 [2024-10-08 18:43:57.913419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.698 qpair failed and we were unable to recover it. 00:33:29.698 [2024-10-08 18:43:57.913692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.698 [2024-10-08 18:43:57.913718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.698 qpair failed and we were unable to recover it. 00:33:29.698 [2024-10-08 18:43:57.913847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.698 [2024-10-08 18:43:57.913898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.698 qpair failed and we were unable to recover it. 00:33:29.698 [2024-10-08 18:43:57.914050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.698 [2024-10-08 18:43:57.914103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.698 qpair failed and we were unable to recover it. 
00:33:29.698 [2024-10-08 18:43:57.914254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.698 [2024-10-08 18:43:57.914287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.698 qpair failed and we were unable to recover it. 00:33:29.698 [2024-10-08 18:43:57.914426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.698 [2024-10-08 18:43:57.914451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.698 qpair failed and we were unable to recover it. 00:33:29.698 [2024-10-08 18:43:57.914711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.698 [2024-10-08 18:43:57.914743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.698 qpair failed and we were unable to recover it. 00:33:29.698 [2024-10-08 18:43:57.914905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.698 [2024-10-08 18:43:57.914955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.698 qpair failed and we were unable to recover it. 00:33:29.698 [2024-10-08 18:43:57.915077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.698 [2024-10-08 18:43:57.915102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.698 qpair failed and we were unable to recover it. 00:33:29.698 [2024-10-08 18:43:57.915205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.698 [2024-10-08 18:43:57.915235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.698 qpair failed and we were unable to recover it. 00:33:29.698 [2024-10-08 18:43:57.915458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.698 [2024-10-08 18:43:57.915483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.698 qpair failed and we were unable to recover it. 00:33:29.698 [2024-10-08 18:43:57.915710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.698 [2024-10-08 18:43:57.915737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.698 qpair failed and we were unable to recover it. 00:33:29.698 [2024-10-08 18:43:57.915865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.698 [2024-10-08 18:43:57.915916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.698 qpair failed and we were unable to recover it. 00:33:29.698 [2024-10-08 18:43:57.916031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.698 [2024-10-08 18:43:57.916056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.698 qpair failed and we were unable to recover it. 
00:33:29.698 [2024-10-08 18:43:57.916222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.698 [2024-10-08 18:43:57.916263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.698 qpair failed and we were unable to recover it. 00:33:29.698 [2024-10-08 18:43:57.916416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.698 [2024-10-08 18:43:57.916457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.698 qpair failed and we were unable to recover it. 00:33:29.698 [2024-10-08 18:43:57.916567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.698 [2024-10-08 18:43:57.916593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.698 qpair failed and we were unable to recover it. 00:33:29.698 [2024-10-08 18:43:57.916744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.698 [2024-10-08 18:43:57.916770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.698 qpair failed and we were unable to recover it. 00:33:29.698 [2024-10-08 18:43:57.916945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.698 [2024-10-08 18:43:57.916997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.698 qpair failed and we were unable to recover it. 00:33:29.698 [2024-10-08 18:43:57.917182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.698 [2024-10-08 18:43:57.917233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.698 qpair failed and we were unable to recover it. 00:33:29.698 [2024-10-08 18:43:57.917402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.698 [2024-10-08 18:43:57.917427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.698 qpair failed and we were unable to recover it. 00:33:29.698 [2024-10-08 18:43:57.917564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.698 [2024-10-08 18:43:57.917603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.698 qpair failed and we were unable to recover it. 00:33:29.698 [2024-10-08 18:43:57.917752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.698 [2024-10-08 18:43:57.917779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.698 qpair failed and we were unable to recover it. 00:33:29.698 [2024-10-08 18:43:57.917946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.698 [2024-10-08 18:43:57.918006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.698 qpair failed and we were unable to recover it. 
00:33:29.698 [2024-10-08 18:43:57.918207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.698 [2024-10-08 18:43:57.918256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.698 qpair failed and we were unable to recover it. 00:33:29.698 [2024-10-08 18:43:57.918410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.698 [2024-10-08 18:43:57.918435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.698 qpair failed and we were unable to recover it. 00:33:29.698 [2024-10-08 18:43:57.918689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.698 [2024-10-08 18:43:57.918715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.698 qpair failed and we were unable to recover it. 00:33:29.698 [2024-10-08 18:43:57.918857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.698 [2024-10-08 18:43:57.918922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.698 qpair failed and we were unable to recover it. 00:33:29.698 [2024-10-08 18:43:57.919087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.698 [2024-10-08 18:43:57.919138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.698 qpair failed and we were unable to recover it. 00:33:29.699 [2024-10-08 18:43:57.919331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.699 [2024-10-08 18:43:57.919376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.699 qpair failed and we were unable to recover it. 00:33:29.699 [2024-10-08 18:43:57.919657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.699 [2024-10-08 18:43:57.919682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.699 qpair failed and we were unable to recover it. 00:33:29.699 [2024-10-08 18:43:57.919873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.699 [2024-10-08 18:43:57.919925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.699 qpair failed and we were unable to recover it. 00:33:29.699 [2024-10-08 18:43:57.920119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.699 [2024-10-08 18:43:57.920168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.699 qpair failed and we were unable to recover it. 00:33:29.699 [2024-10-08 18:43:57.920380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.699 [2024-10-08 18:43:57.920431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.699 qpair failed and we were unable to recover it. 
00:33:29.699 [2024-10-08 18:43:57.920617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.699 [2024-10-08 18:43:57.920641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.699 qpair failed and we were unable to recover it. 00:33:29.699 [2024-10-08 18:43:57.920778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.699 [2024-10-08 18:43:57.920836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.699 qpair failed and we were unable to recover it. 00:33:29.699 [2024-10-08 18:43:57.921061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.699 [2024-10-08 18:43:57.921116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.699 qpair failed and we were unable to recover it. 00:33:29.699 [2024-10-08 18:43:57.921252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.699 [2024-10-08 18:43:57.921309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.699 qpair failed and we were unable to recover it. 00:33:29.699 [2024-10-08 18:43:57.921442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.699 [2024-10-08 18:43:57.921482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.699 qpair failed and we were unable to recover it. 00:33:29.699 [2024-10-08 18:43:57.921639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.699 [2024-10-08 18:43:57.921686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.699 qpair failed and we were unable to recover it. 00:33:29.699 [2024-10-08 18:43:57.921877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.699 [2024-10-08 18:43:57.921929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.699 qpair failed and we were unable to recover it. 00:33:29.699 [2024-10-08 18:43:57.922163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.699 [2024-10-08 18:43:57.922211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.699 qpair failed and we were unable to recover it. 00:33:29.699 [2024-10-08 18:43:57.922421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.699 [2024-10-08 18:43:57.922473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.699 qpair failed and we were unable to recover it. 00:33:29.699 [2024-10-08 18:43:57.922673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.699 [2024-10-08 18:43:57.922698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.699 qpair failed and we were unable to recover it. 
00:33:29.699 [2024-10-08 18:43:57.922899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.699 [2024-10-08 18:43:57.922956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420
00:33:29.699 qpair failed and we were unable to recover it.
[... the same three-record pattern (posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every reconnect attempt from [2024-10-08 18:43:57.923157] through [2024-10-08 18:43:57.971530] ...]
00:33:29.704 [2024-10-08 18:43:57.971678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.704 [2024-10-08 18:43:57.971711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420
00:33:29.704 qpair failed and we were unable to recover it.
00:33:29.704 [2024-10-08 18:43:57.971878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.704 [2024-10-08 18:43:57.971932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.704 qpair failed and we were unable to recover it. 00:33:29.704 [2024-10-08 18:43:57.972111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.704 [2024-10-08 18:43:57.972159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.704 qpair failed and we were unable to recover it. 00:33:29.704 [2024-10-08 18:43:57.972403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.704 [2024-10-08 18:43:57.972427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.705 qpair failed and we were unable to recover it. 00:33:29.705 [2024-10-08 18:43:57.972611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.705 [2024-10-08 18:43:57.972635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.705 qpair failed and we were unable to recover it. 00:33:29.705 [2024-10-08 18:43:57.972859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.705 [2024-10-08 18:43:57.972914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.705 qpair failed and we were unable to recover it. 00:33:29.705 [2024-10-08 18:43:57.973164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.705 [2024-10-08 18:43:57.973215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.705 qpair failed and we were unable to recover it. 00:33:29.705 [2024-10-08 18:43:57.973430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.705 [2024-10-08 18:43:57.973480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.705 qpair failed and we were unable to recover it. 00:33:29.705 [2024-10-08 18:43:57.973698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.705 [2024-10-08 18:43:57.973724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.705 qpair failed and we were unable to recover it. 00:33:29.705 [2024-10-08 18:43:57.973866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.705 [2024-10-08 18:43:57.973918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.705 qpair failed and we were unable to recover it. 00:33:29.705 [2024-10-08 18:43:57.974084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.705 [2024-10-08 18:43:57.974133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.705 qpair failed and we were unable to recover it. 
00:33:29.705 [2024-10-08 18:43:57.974279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.705 [2024-10-08 18:43:57.974329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.705 qpair failed and we were unable to recover it. 00:33:29.705 [2024-10-08 18:43:57.974508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.705 [2024-10-08 18:43:57.974532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.705 qpair failed and we were unable to recover it. 00:33:29.705 [2024-10-08 18:43:57.974743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.705 [2024-10-08 18:43:57.974800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.705 qpair failed and we were unable to recover it. 00:33:29.705 [2024-10-08 18:43:57.975025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.705 [2024-10-08 18:43:57.975050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.705 qpair failed and we were unable to recover it. 00:33:29.705 [2024-10-08 18:43:57.975294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.705 [2024-10-08 18:43:57.975318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.705 qpair failed and we were unable to recover it. 00:33:29.705 [2024-10-08 18:43:57.975495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.705 [2024-10-08 18:43:57.975520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.705 qpair failed and we were unable to recover it. 00:33:29.705 [2024-10-08 18:43:57.975724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.705 [2024-10-08 18:43:57.975786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.705 qpair failed and we were unable to recover it. 00:33:29.705 [2024-10-08 18:43:57.975971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.705 [2024-10-08 18:43:57.976023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.705 qpair failed and we were unable to recover it. 00:33:29.705 [2024-10-08 18:43:57.976280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.705 [2024-10-08 18:43:57.976331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.705 qpair failed and we were unable to recover it. 00:33:29.705 [2024-10-08 18:43:57.976523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.705 [2024-10-08 18:43:57.976547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.705 qpair failed and we were unable to recover it. 
00:33:29.705 [2024-10-08 18:43:57.976795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.705 [2024-10-08 18:43:57.976846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.705 qpair failed and we were unable to recover it. 00:33:29.705 [2024-10-08 18:43:57.977010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.705 [2024-10-08 18:43:57.977059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.705 qpair failed and we were unable to recover it. 00:33:29.705 [2024-10-08 18:43:57.977282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.705 [2024-10-08 18:43:57.977331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.705 qpair failed and we were unable to recover it. 00:33:29.705 [2024-10-08 18:43:57.977568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.705 [2024-10-08 18:43:57.977608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.705 qpair failed and we were unable to recover it. 00:33:29.705 [2024-10-08 18:43:57.977879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.705 [2024-10-08 18:43:57.977931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.705 qpair failed and we were unable to recover it. 00:33:29.705 [2024-10-08 18:43:57.978202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.705 [2024-10-08 18:43:57.978253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.705 qpair failed and we were unable to recover it. 00:33:29.705 [2024-10-08 18:43:57.978444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.705 [2024-10-08 18:43:57.978492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.705 qpair failed and we were unable to recover it. 00:33:29.705 [2024-10-08 18:43:57.978644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.705 [2024-10-08 18:43:57.978700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.705 qpair failed and we were unable to recover it. 00:33:29.705 [2024-10-08 18:43:57.978880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.705 [2024-10-08 18:43:57.978930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.705 qpair failed and we were unable to recover it. 00:33:29.705 [2024-10-08 18:43:57.979106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.705 [2024-10-08 18:43:57.979157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.705 qpair failed and we were unable to recover it. 
00:33:29.705 [2024-10-08 18:43:57.979354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.705 [2024-10-08 18:43:57.979397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.705 qpair failed and we were unable to recover it. 00:33:29.705 [2024-10-08 18:43:57.979621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.705 [2024-10-08 18:43:57.979668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.705 qpair failed and we were unable to recover it. 00:33:29.705 [2024-10-08 18:43:57.979854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.705 [2024-10-08 18:43:57.979903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.705 qpair failed and we were unable to recover it. 00:33:29.705 [2024-10-08 18:43:57.980063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.705 [2024-10-08 18:43:57.980115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.705 qpair failed and we were unable to recover it. 00:33:29.705 [2024-10-08 18:43:57.980274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.705 [2024-10-08 18:43:57.980322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.705 qpair failed and we were unable to recover it. 00:33:29.705 [2024-10-08 18:43:57.980487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.705 [2024-10-08 18:43:57.980517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.705 qpair failed and we were unable to recover it. 00:33:29.705 [2024-10-08 18:43:57.980634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.705 [2024-10-08 18:43:57.980686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.705 qpair failed and we were unable to recover it. 00:33:29.705 [2024-10-08 18:43:57.980884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.705 [2024-10-08 18:43:57.980908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.705 qpair failed and we were unable to recover it. 00:33:29.705 [2024-10-08 18:43:57.981109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.705 [2024-10-08 18:43:57.981133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.705 qpair failed and we were unable to recover it. 00:33:29.705 [2024-10-08 18:43:57.981278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.705 [2024-10-08 18:43:57.981326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.705 qpair failed and we were unable to recover it. 
00:33:29.705 [2024-10-08 18:43:57.981491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.705 [2024-10-08 18:43:57.981515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.705 qpair failed and we were unable to recover it. 00:33:29.705 [2024-10-08 18:43:57.981747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.705 [2024-10-08 18:43:57.981772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 00:33:29.706 [2024-10-08 18:43:57.981939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-10-08 18:43:57.981962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 00:33:29.706 [2024-10-08 18:43:57.982168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-10-08 18:43:57.982191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 00:33:29.706 [2024-10-08 18:43:57.982378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-10-08 18:43:57.982429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 00:33:29.706 [2024-10-08 18:43:57.982629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-10-08 18:43:57.982674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 00:33:29.706 [2024-10-08 18:43:57.982924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-10-08 18:43:57.982973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 00:33:29.706 [2024-10-08 18:43:57.983193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-10-08 18:43:57.983242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 00:33:29.706 [2024-10-08 18:43:57.983444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-10-08 18:43:57.983498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 00:33:29.706 [2024-10-08 18:43:57.983669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-10-08 18:43:57.983711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 
00:33:29.706 [2024-10-08 18:43:57.983946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-10-08 18:43:57.983998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 00:33:29.706 [2024-10-08 18:43:57.984197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-10-08 18:43:57.984246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 00:33:29.706 [2024-10-08 18:43:57.984403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-10-08 18:43:57.984426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 00:33:29.706 [2024-10-08 18:43:57.984565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-10-08 18:43:57.984604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 00:33:29.706 [2024-10-08 18:43:57.984876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-10-08 18:43:57.984926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 00:33:29.706 [2024-10-08 18:43:57.985117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-10-08 18:43:57.985168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 00:33:29.706 [2024-10-08 18:43:57.985308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-10-08 18:43:57.985361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 00:33:29.706 [2024-10-08 18:43:57.985472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-10-08 18:43:57.985496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 00:33:29.706 [2024-10-08 18:43:57.985735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-10-08 18:43:57.985761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 00:33:29.706 [2024-10-08 18:43:57.985930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-10-08 18:43:57.985981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 
00:33:29.706 [2024-10-08 18:43:57.986179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-10-08 18:43:57.986203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 00:33:29.706 [2024-10-08 18:43:57.986429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-10-08 18:43:57.986453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 00:33:29.706 [2024-10-08 18:43:57.986705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-10-08 18:43:57.986731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 00:33:29.706 [2024-10-08 18:43:57.986950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-10-08 18:43:57.987001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 00:33:29.706 [2024-10-08 18:43:57.987185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-10-08 18:43:57.987234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 00:33:29.706 [2024-10-08 18:43:57.987454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-10-08 18:43:57.987505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 00:33:29.706 [2024-10-08 18:43:57.987771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-10-08 18:43:57.987828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 00:33:29.706 [2024-10-08 18:43:57.987939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-10-08 18:43:57.988000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 00:33:29.706 [2024-10-08 18:43:57.988197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-10-08 18:43:57.988247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 00:33:29.706 [2024-10-08 18:43:57.988463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-10-08 18:43:57.988487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 
00:33:29.706 [2024-10-08 18:43:57.988699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-10-08 18:43:57.988750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 00:33:29.706 [2024-10-08 18:43:57.988974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-10-08 18:43:57.989033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 00:33:29.706 [2024-10-08 18:43:57.989241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-10-08 18:43:57.989289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 00:33:29.706 [2024-10-08 18:43:57.989534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-10-08 18:43:57.989558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 00:33:29.706 [2024-10-08 18:43:57.989755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-10-08 18:43:57.989809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 00:33:29.706 [2024-10-08 18:43:57.990019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-10-08 18:43:57.990073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 00:33:29.706 [2024-10-08 18:43:57.990232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-10-08 18:43:57.990280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 00:33:29.706 [2024-10-08 18:43:57.990464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-10-08 18:43:57.990488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 00:33:29.706 [2024-10-08 18:43:57.990692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-10-08 18:43:57.990743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 00:33:29.706 [2024-10-08 18:43:57.990941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-10-08 18:43:57.990992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 
00:33:29.706 [2024-10-08 18:43:57.991188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.706 [2024-10-08 18:43:57.991231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.706 qpair failed and we were unable to recover it. 00:33:29.707 [2024-10-08 18:43:57.991399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.707 [2024-10-08 18:43:57.991422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.707 qpair failed and we were unable to recover it. 00:33:29.707 [2024-10-08 18:43:57.991604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.707 [2024-10-08 18:43:57.991628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.707 qpair failed and we were unable to recover it. 00:33:29.707 [2024-10-08 18:43:57.991833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.707 [2024-10-08 18:43:57.991884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.707 qpair failed and we were unable to recover it. 00:33:29.707 [2024-10-08 18:43:57.992073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.707 [2024-10-08 18:43:57.992120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.707 qpair failed and we were unable to recover it. 00:33:29.707 [2024-10-08 18:43:57.992303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.707 [2024-10-08 18:43:57.992350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.707 qpair failed and we were unable to recover it. 00:33:29.707 [2024-10-08 18:43:57.992568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.707 [2024-10-08 18:43:57.992591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.707 qpair failed and we were unable to recover it. 00:33:29.707 [2024-10-08 18:43:57.992783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.707 [2024-10-08 18:43:57.992842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.707 qpair failed and we were unable to recover it. 00:33:29.707 [2024-10-08 18:43:57.993042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.707 [2024-10-08 18:43:57.993090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.707 qpair failed and we were unable to recover it. 00:33:29.707 [2024-10-08 18:43:57.993319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.707 [2024-10-08 18:43:57.993369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.707 qpair failed and we were unable to recover it. 
00:33:29.707 [2024-10-08 18:43:57.993575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.707 [2024-10-08 18:43:57.993599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.707 qpair failed and we were unable to recover it. 00:33:29.707 [2024-10-08 18:43:57.993798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.707 [2024-10-08 18:43:57.993852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.707 qpair failed and we were unable to recover it. 00:33:29.707 [2024-10-08 18:43:57.994039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.707 [2024-10-08 18:43:57.994090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.707 qpair failed and we were unable to recover it. 00:33:29.707 [2024-10-08 18:43:57.994346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.707 [2024-10-08 18:43:57.994394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.707 qpair failed and we were unable to recover it. 00:33:29.707 [2024-10-08 18:43:57.994633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.707 [2024-10-08 18:43:57.994678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.707 qpair failed and we were unable to recover it. 00:33:29.707 [2024-10-08 18:43:57.994908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.707 [2024-10-08 18:43:57.994959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.707 qpair failed and we were unable to recover it. 00:33:29.707 [2024-10-08 18:43:57.995168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.707 [2024-10-08 18:43:57.995218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.707 qpair failed and we were unable to recover it. 00:33:29.707 [2024-10-08 18:43:57.995429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.707 [2024-10-08 18:43:57.995477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.707 qpair failed and we were unable to recover it. 00:33:29.707 [2024-10-08 18:43:57.995639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.707 [2024-10-08 18:43:57.995679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.707 qpair failed and we were unable to recover it. 00:33:29.707 [2024-10-08 18:43:57.995892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.707 [2024-10-08 18:43:57.995917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.707 qpair failed and we were unable to recover it. 
00:33:29.707 [2024-10-08 18:43:57.996030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.707 [2024-10-08 18:43:57.996054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.707 qpair failed and we were unable to recover it. 00:33:29.707 [2024-10-08 18:43:57.996208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.707 [2024-10-08 18:43:57.996259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.707 qpair failed and we were unable to recover it. 00:33:29.707 [2024-10-08 18:43:57.996526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.707 [2024-10-08 18:43:57.996568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.707 qpair failed and we were unable to recover it. 00:33:29.707 [2024-10-08 18:43:57.996750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.707 [2024-10-08 18:43:57.996776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.707 qpair failed and we were unable to recover it. 00:33:29.707 [2024-10-08 18:43:57.996968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.707 [2024-10-08 18:43:57.997018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.707 qpair failed and we were unable to recover it. 00:33:29.707 [2024-10-08 18:43:57.997169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.707 [2024-10-08 18:43:57.997221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.707 qpair failed and we were unable to recover it. 00:33:29.707 [2024-10-08 18:43:57.997418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.707 [2024-10-08 18:43:57.997442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.707 qpair failed and we were unable to recover it. 00:33:29.707 [2024-10-08 18:43:57.997608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.707 [2024-10-08 18:43:57.997646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.707 qpair failed and we were unable to recover it. 00:33:29.707 [2024-10-08 18:43:57.997825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.707 [2024-10-08 18:43:57.997877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.707 qpair failed and we were unable to recover it. 00:33:29.707 [2024-10-08 18:43:57.998062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.707 [2024-10-08 18:43:57.998111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.707 qpair failed and we were unable to recover it. 
00:33:29.707 [2024-10-08 18:43:57.998276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.707 [2024-10-08 18:43:57.998326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.707 qpair failed and we were unable to recover it. 00:33:29.707 [2024-10-08 18:43:57.998505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.707 [2024-10-08 18:43:57.998529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.707 qpair failed and we were unable to recover it. 00:33:29.707 [2024-10-08 18:43:57.998780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.707 [2024-10-08 18:43:57.998829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.707 qpair failed and we were unable to recover it. 00:33:29.707 [2024-10-08 18:43:57.999050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.707 [2024-10-08 18:43:57.999099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.707 qpair failed and we were unable to recover it. 00:33:29.707 [2024-10-08 18:43:57.999361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.707 [2024-10-08 18:43:57.999411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.707 qpair failed and we were unable to recover it. 00:33:29.707 [2024-10-08 18:43:57.999511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.707 [2024-10-08 18:43:57.999546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.707 qpair failed and we were unable to recover it. 00:33:29.707 [2024-10-08 18:43:57.999727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.707 [2024-10-08 18:43:57.999774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.707 qpair failed and we were unable to recover it. 00:33:29.707 [2024-10-08 18:43:57.999999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.707 [2024-10-08 18:43:58.000048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.707 qpair failed and we were unable to recover it. 00:33:29.707 [2024-10-08 18:43:58.000213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.708 [2024-10-08 18:43:58.000264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.708 qpair failed and we were unable to recover it. 00:33:29.708 [2024-10-08 18:43:58.000500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.708 [2024-10-08 18:43:58.000523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.708 qpair failed and we were unable to recover it. 
00:33:29.708 [2024-10-08 18:43:58.000790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.708 [2024-10-08 18:43:58.000841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.708 qpair failed and we were unable to recover it. 00:33:29.708 [2024-10-08 18:43:58.001083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.708 [2024-10-08 18:43:58.001134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.708 qpair failed and we were unable to recover it. 00:33:29.708 [2024-10-08 18:43:58.001317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.708 [2024-10-08 18:43:58.001365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.708 qpair failed and we were unable to recover it. 00:33:29.708 [2024-10-08 18:43:58.001554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.708 [2024-10-08 18:43:58.001578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.708 qpair failed and we were unable to recover it. 00:33:29.708 [2024-10-08 18:43:58.001720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.708 [2024-10-08 18:43:58.001791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.708 qpair failed and we were unable to recover it. 00:33:29.708 [2024-10-08 18:43:58.002012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.708 [2024-10-08 18:43:58.002061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.708 qpair failed and we were unable to recover it. 00:33:29.708 [2024-10-08 18:43:58.002274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.708 [2024-10-08 18:43:58.002322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.708 qpair failed and we were unable to recover it. 00:33:29.708 [2024-10-08 18:43:58.002544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.708 [2024-10-08 18:43:58.002568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.708 qpair failed and we were unable to recover it. 00:33:29.708 [2024-10-08 18:43:58.002714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.708 [2024-10-08 18:43:58.002754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.708 qpair failed and we were unable to recover it. 00:33:29.708 [2024-10-08 18:43:58.002931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.708 [2024-10-08 18:43:58.002979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.708 qpair failed and we were unable to recover it. 
00:33:29.708 [2024-10-08 18:43:58.003186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.708 [2024-10-08 18:43:58.003235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.708 qpair failed and we were unable to recover it. 00:33:29.708 [2024-10-08 18:43:58.003471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.708 [2024-10-08 18:43:58.003495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.708 qpair failed and we were unable to recover it. 00:33:29.708 [2024-10-08 18:43:58.003724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.708 [2024-10-08 18:43:58.003772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.708 qpair failed and we were unable to recover it. 00:33:29.708 [2024-10-08 18:43:58.004002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.708 [2024-10-08 18:43:58.004053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.708 qpair failed and we were unable to recover it. 00:33:29.708 [2024-10-08 18:43:58.004266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.708 [2024-10-08 18:43:58.004315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.708 qpair failed and we were unable to recover it. 00:33:29.708 [2024-10-08 18:43:58.004463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.708 [2024-10-08 18:43:58.004486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.708 qpair failed and we were unable to recover it. 00:33:29.708 [2024-10-08 18:43:58.004647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.708 [2024-10-08 18:43:58.004678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.708 qpair failed and we were unable to recover it. 00:33:29.708 [2024-10-08 18:43:58.004905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.708 [2024-10-08 18:43:58.004963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.708 qpair failed and we were unable to recover it. 00:33:29.708 [2024-10-08 18:43:58.005215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.708 [2024-10-08 18:43:58.005264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.708 qpair failed and we were unable to recover it. 00:33:29.708 [2024-10-08 18:43:58.005452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.708 [2024-10-08 18:43:58.005503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.708 qpair failed and we were unable to recover it. 
00:33:29.708 [2024-10-08 18:43:58.005735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.708 [2024-10-08 18:43:58.005759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.708 qpair failed and we were unable to recover it. 00:33:29.708 [2024-10-08 18:43:58.005918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.708 [2024-10-08 18:43:58.005978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.708 qpair failed and we were unable to recover it. 00:33:29.708 [2024-10-08 18:43:58.006140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.708 [2024-10-08 18:43:58.006191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.708 qpair failed and we were unable to recover it. 00:33:29.708 [2024-10-08 18:43:58.006382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.708 [2024-10-08 18:43:58.006431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.708 qpair failed and we were unable to recover it. 00:33:29.708 [2024-10-08 18:43:58.006669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.708 [2024-10-08 18:43:58.006694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.708 qpair failed and we were unable to recover it. 00:33:29.708 [2024-10-08 18:43:58.006896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.708 [2024-10-08 18:43:58.006947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.708 qpair failed and we were unable to recover it. 00:33:29.708 [2024-10-08 18:43:58.007199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.708 [2024-10-08 18:43:58.007249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.708 qpair failed and we were unable to recover it. 00:33:29.708 [2024-10-08 18:43:58.007425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.708 [2024-10-08 18:43:58.007477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.708 qpair failed and we were unable to recover it. 00:33:29.708 [2024-10-08 18:43:58.007625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.708 [2024-10-08 18:43:58.007648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.708 qpair failed and we were unable to recover it. 00:33:29.708 [2024-10-08 18:43:58.007831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.708 [2024-10-08 18:43:58.007891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.708 qpair failed and we were unable to recover it. 
00:33:29.708 [2024-10-08 18:43:58.008124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.708 [2024-10-08 18:43:58.008174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.708 qpair failed and we were unable to recover it. 00:33:29.708 [2024-10-08 18:43:58.008372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.708 [2024-10-08 18:43:58.008420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.708 qpair failed and we were unable to recover it. 00:33:29.708 [2024-10-08 18:43:58.008571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.708 [2024-10-08 18:43:58.008595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.708 qpair failed and we were unable to recover it. 00:33:29.708 [2024-10-08 18:43:58.008772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.708 [2024-10-08 18:43:58.008836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.708 qpair failed and we were unable to recover it. 00:33:29.708 [2024-10-08 18:43:58.009021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.708 [2024-10-08 18:43:58.009069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.708 qpair failed and we were unable to recover it. 00:33:29.708 [2024-10-08 18:43:58.009220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.708 [2024-10-08 18:43:58.009275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.708 qpair failed and we were unable to recover it. 00:33:29.708 [2024-10-08 18:43:58.009405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.708 [2024-10-08 18:43:58.009444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.708 qpair failed and we were unable to recover it. 00:33:29.708 [2024-10-08 18:43:58.009664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.708 [2024-10-08 18:43:58.009704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.708 qpair failed and we were unable to recover it. 00:33:29.708 [2024-10-08 18:43:58.009921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-10-08 18:43:58.009971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-10-08 18:43:58.010175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-10-08 18:43:58.010221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 
00:33:29.709 [2024-10-08 18:43:58.010440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-10-08 18:43:58.010492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-10-08 18:43:58.010685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-10-08 18:43:58.010738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-10-08 18:43:58.010874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-10-08 18:43:58.010923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-10-08 18:43:58.011157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-10-08 18:43:58.011206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-10-08 18:43:58.011392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-10-08 18:43:58.011445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-10-08 18:43:58.011657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-10-08 18:43:58.011681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-10-08 18:43:58.011946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-10-08 18:43:58.011970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-10-08 18:43:58.012142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-10-08 18:43:58.012212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-10-08 18:43:58.012410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-10-08 18:43:58.012459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-10-08 18:43:58.012677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-10-08 18:43:58.012703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 
00:33:29.709 [2024-10-08 18:43:58.012896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-10-08 18:43:58.012948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-10-08 18:43:58.013133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-10-08 18:43:58.013181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-10-08 18:43:58.013416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-10-08 18:43:58.013465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-10-08 18:43:58.013681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-10-08 18:43:58.013705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-10-08 18:43:58.013875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-10-08 18:43:58.013924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-10-08 18:43:58.014121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-10-08 18:43:58.014174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-10-08 18:43:58.014414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-10-08 18:43:58.014461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-10-08 18:43:58.014702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-10-08 18:43:58.014727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-10-08 18:43:58.014950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-10-08 18:43:58.014993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-10-08 18:43:58.015216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-10-08 18:43:58.015267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 
00:33:29.709 [2024-10-08 18:43:58.015472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-10-08 18:43:58.015520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-10-08 18:43:58.015743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-10-08 18:43:58.015768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-10-08 18:43:58.015920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-10-08 18:43:58.015972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-10-08 18:43:58.016227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-10-08 18:43:58.016278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-10-08 18:43:58.016418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-10-08 18:43:58.016468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-10-08 18:43:58.016727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-10-08 18:43:58.016754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-10-08 18:43:58.016933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-10-08 18:43:58.016984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-10-08 18:43:58.017141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-10-08 18:43:58.017198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-10-08 18:43:58.017395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-10-08 18:43:58.017444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-10-08 18:43:58.017611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-10-08 18:43:58.017635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 
00:33:29.709 [2024-10-08 18:43:58.017845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-10-08 18:43:58.017897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-10-08 18:43:58.018150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-10-08 18:43:58.018199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.709 [2024-10-08 18:43:58.018399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.709 [2024-10-08 18:43:58.018450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.709 qpair failed and we were unable to recover it. 00:33:29.710 [2024-10-08 18:43:58.018678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-10-08 18:43:58.018702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-10-08 18:43:58.018905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-10-08 18:43:58.018954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-10-08 18:43:58.019050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-10-08 18:43:58.019141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-10-08 18:43:58.019382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-10-08 18:43:58.019433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-10-08 18:43:58.019672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-10-08 18:43:58.019697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-10-08 18:43:58.019955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-10-08 18:43:58.020005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-10-08 18:43:58.020187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-10-08 18:43:58.020238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 
00:33:29.710 [2024-10-08 18:43:58.020507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-10-08 18:43:58.020558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-10-08 18:43:58.020708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-10-08 18:43:58.020741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-10-08 18:43:58.021008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-10-08 18:43:58.021056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-10-08 18:43:58.021233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-10-08 18:43:58.021282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-10-08 18:43:58.021437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-10-08 18:43:58.021461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-10-08 18:43:58.021574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-10-08 18:43:58.021599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-10-08 18:43:58.021777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-10-08 18:43:58.021836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-10-08 18:43:58.022089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-10-08 18:43:58.022142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-10-08 18:43:58.022343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-10-08 18:43:58.022392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-10-08 18:43:58.022575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-10-08 18:43:58.022599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 
00:33:29.710 [2024-10-08 18:43:58.022791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-10-08 18:43:58.022847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-10-08 18:43:58.023079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-10-08 18:43:58.023131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-10-08 18:43:58.023373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-10-08 18:43:58.023424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-10-08 18:43:58.023552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-10-08 18:43:58.023575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-10-08 18:43:58.023743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-10-08 18:43:58.023785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-10-08 18:43:58.024031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-10-08 18:43:58.024080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-10-08 18:43:58.024275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-10-08 18:43:58.024320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-10-08 18:43:58.024571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-10-08 18:43:58.024596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-10-08 18:43:58.024860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-10-08 18:43:58.024914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-10-08 18:43:58.025113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-10-08 18:43:58.025173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 
00:33:29.710 [2024-10-08 18:43:58.025405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-10-08 18:43:58.025452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-10-08 18:43:58.025646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-10-08 18:43:58.025680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-10-08 18:43:58.025812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-10-08 18:43:58.025838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-10-08 18:43:58.025965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-10-08 18:43:58.026014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-10-08 18:43:58.026156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-10-08 18:43:58.026245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-10-08 18:43:58.026409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-10-08 18:43:58.026460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-10-08 18:43:58.026615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-10-08 18:43:58.026641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-10-08 18:43:58.026861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-10-08 18:43:58.026908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-10-08 18:43:58.027128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-10-08 18:43:58.027174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-10-08 18:43:58.027350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-10-08 18:43:58.027397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 
00:33:29.710 [2024-10-08 18:43:58.027538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-10-08 18:43:58.027565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-10-08 18:43:58.027739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.710 [2024-10-08 18:43:58.027794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.710 qpair failed and we were unable to recover it. 00:33:29.710 [2024-10-08 18:43:58.027999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-10-08 18:43:58.028049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-10-08 18:43:58.028281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-10-08 18:43:58.028337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-10-08 18:43:58.028619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-10-08 18:43:58.028645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-10-08 18:43:58.028789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-10-08 18:43:58.028815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-10-08 18:43:58.029059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-10-08 18:43:58.029107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-10-08 18:43:58.029288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-10-08 18:43:58.029336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-10-08 18:43:58.029497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-10-08 18:43:58.029522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-10-08 18:43:58.029746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-10-08 18:43:58.029795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 
00:33:29.711 [2024-10-08 18:43:58.029974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-10-08 18:43:58.030019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-10-08 18:43:58.030283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-10-08 18:43:58.030328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-10-08 18:43:58.030471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-10-08 18:43:58.030496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-10-08 18:43:58.030702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-10-08 18:43:58.030729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-10-08 18:43:58.030928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-10-08 18:43:58.030979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-10-08 18:43:58.031158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-10-08 18:43:58.031208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-10-08 18:43:58.031359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-10-08 18:43:58.031385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-10-08 18:43:58.031520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-10-08 18:43:58.031546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-10-08 18:43:58.031694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-10-08 18:43:58.031721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-10-08 18:43:58.031855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-10-08 18:43:58.031901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 
00:33:29.711 [2024-10-08 18:43:58.032137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-10-08 18:43:58.032180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-10-08 18:43:58.032414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-10-08 18:43:58.032439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-10-08 18:43:58.032593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-10-08 18:43:58.032633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-10-08 18:43:58.032868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-10-08 18:43:58.032914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-10-08 18:43:58.033042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-10-08 18:43:58.033067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-10-08 18:43:58.033246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-10-08 18:43:58.033291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-10-08 18:43:58.033538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-10-08 18:43:58.033564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-10-08 18:43:58.033713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-10-08 18:43:58.033749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-10-08 18:43:58.033913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-10-08 18:43:58.033959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-10-08 18:43:58.034186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-10-08 18:43:58.034233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 
00:33:29.711 [2024-10-08 18:43:58.034384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-10-08 18:43:58.034410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-10-08 18:43:58.034595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-10-08 18:43:58.034621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-10-08 18:43:58.034883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-10-08 18:43:58.034943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-10-08 18:43:58.035151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-10-08 18:43:58.035200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-10-08 18:43:58.035369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-10-08 18:43:58.035418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-10-08 18:43:58.035537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-10-08 18:43:58.035563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-10-08 18:43:58.035722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-10-08 18:43:58.035770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-10-08 18:43:58.035953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-10-08 18:43:58.035989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-10-08 18:43:58.036176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-10-08 18:43:58.036224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-10-08 18:43:58.036375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-10-08 18:43:58.036428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 
00:33:29.711 [2024-10-08 18:43:58.036608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-10-08 18:43:58.036633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-10-08 18:43:58.036847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.711 [2024-10-08 18:43:58.036907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.711 qpair failed and we were unable to recover it. 00:33:29.711 [2024-10-08 18:43:58.037039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.712 [2024-10-08 18:43:58.037085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.712 qpair failed and we were unable to recover it. 00:33:29.712 [2024-10-08 18:43:58.037289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.712 [2024-10-08 18:43:58.037333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.712 qpair failed and we were unable to recover it. 00:33:29.712 [2024-10-08 18:43:58.037535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.712 [2024-10-08 18:43:58.037561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.712 qpair failed and we were unable to recover it. 00:33:29.712 [2024-10-08 18:43:58.037736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.712 [2024-10-08 18:43:58.037784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.712 qpair failed and we were unable to recover it. 00:33:29.712 [2024-10-08 18:43:58.038003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.712 [2024-10-08 18:43:58.038055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.712 qpair failed and we were unable to recover it. 00:33:29.712 [2024-10-08 18:43:58.038290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.712 [2024-10-08 18:43:58.038339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.712 qpair failed and we were unable to recover it. 00:33:29.712 [2024-10-08 18:43:58.038465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.712 [2024-10-08 18:43:58.038508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.712 qpair failed and we were unable to recover it. 00:33:29.712 [2024-10-08 18:43:58.038667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.712 [2024-10-08 18:43:58.038713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.712 qpair failed and we were unable to recover it. 
00:33:29.712 [2024-10-08 18:43:58.038900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.712 [2024-10-08 18:43:58.038948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.712 qpair failed and we were unable to recover it. 00:33:29.712 [2024-10-08 18:43:58.039072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.712 [2024-10-08 18:43:58.039120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.712 qpair failed and we were unable to recover it. 00:33:29.712 [2024-10-08 18:43:58.039266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.712 [2024-10-08 18:43:58.039314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.712 qpair failed and we were unable to recover it. 00:33:29.712 [2024-10-08 18:43:58.039513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.712 [2024-10-08 18:43:58.039539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.712 qpair failed and we were unable to recover it. 00:33:29.712 [2024-10-08 18:43:58.039739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.712 [2024-10-08 18:43:58.039765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.712 qpair failed and we were unable to recover it. 00:33:29.712 [2024-10-08 18:43:58.039991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.712 [2024-10-08 18:43:58.040017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.712 qpair failed and we were unable to recover it. 00:33:29.712 [2024-10-08 18:43:58.040162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.712 [2024-10-08 18:43:58.040210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.712 qpair failed and we were unable to recover it. 00:33:29.712 [2024-10-08 18:43:58.040407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.712 [2024-10-08 18:43:58.040432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.712 qpair failed and we were unable to recover it. 00:33:29.712 [2024-10-08 18:43:58.040659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.712 [2024-10-08 18:43:58.040686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.712 qpair failed and we were unable to recover it. 00:33:29.712 [2024-10-08 18:43:58.040922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.712 [2024-10-08 18:43:58.040969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.712 qpair failed and we were unable to recover it. 
00:33:29.712 [2024-10-08 18:43:58.041195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.712 [2024-10-08 18:43:58.041237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.712 qpair failed and we were unable to recover it. 00:33:29.712 [2024-10-08 18:43:58.041381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.712 [2024-10-08 18:43:58.041432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.712 qpair failed and we were unable to recover it. 00:33:29.712 [2024-10-08 18:43:58.041677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.712 [2024-10-08 18:43:58.041704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.712 qpair failed and we were unable to recover it. 00:33:29.712 [2024-10-08 18:43:58.041867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.712 [2024-10-08 18:43:58.041894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.712 qpair failed and we were unable to recover it. 00:33:29.712 [2024-10-08 18:43:58.042016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.712 [2024-10-08 18:43:58.042067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.712 qpair failed and we were unable to recover it. 00:33:29.712 [2024-10-08 18:43:58.042268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.712 [2024-10-08 18:43:58.042314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.712 qpair failed and we were unable to recover it. 00:33:29.712 [2024-10-08 18:43:58.042553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.712 [2024-10-08 18:43:58.042578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.712 qpair failed and we were unable to recover it. 00:33:29.712 [2024-10-08 18:43:58.042743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.712 [2024-10-08 18:43:58.042769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.712 qpair failed and we were unable to recover it. 00:33:29.712 [2024-10-08 18:43:58.042941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.712 [2024-10-08 18:43:58.042985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.712 qpair failed and we were unable to recover it. 00:33:29.712 [2024-10-08 18:43:58.043127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.712 [2024-10-08 18:43:58.043172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.712 qpair failed and we were unable to recover it. 
00:33:29.712 [2024-10-08 18:43:58.043327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.712 [2024-10-08 18:43:58.043373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.712 qpair failed and we were unable to recover it.
00:33:29.712 [... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously for this qpair from 18:43:58.043 through 18:43:58.091; the duplicate repetitions are collapsed here ...]
00:33:29.718 [2024-10-08 18:43:58.091735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.718 [2024-10-08 18:43:58.091761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.718 qpair failed and we were unable to recover it. 00:33:29.718 [2024-10-08 18:43:58.091912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.718 [2024-10-08 18:43:58.091938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.718 qpair failed and we were unable to recover it. 00:33:29.718 [2024-10-08 18:43:58.092089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.718 [2024-10-08 18:43:58.092114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.718 qpair failed and we were unable to recover it. 00:33:29.718 [2024-10-08 18:43:58.092264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.718 [2024-10-08 18:43:58.092289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.718 qpair failed and we were unable to recover it. 00:33:29.718 [2024-10-08 18:43:58.092515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.718 [2024-10-08 18:43:58.092540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.718 qpair failed and we were unable to recover it. 00:33:29.718 [2024-10-08 18:43:58.092629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.718 [2024-10-08 18:43:58.092661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.718 qpair failed and we were unable to recover it. 00:33:29.718 [2024-10-08 18:43:58.092831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.718 [2024-10-08 18:43:58.092857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.718 qpair failed and we were unable to recover it. 00:33:29.718 [2024-10-08 18:43:58.093061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.718 [2024-10-08 18:43:58.093111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.718 qpair failed and we were unable to recover it. 00:33:29.718 [2024-10-08 18:43:58.093284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.718 [2024-10-08 18:43:58.093333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.718 qpair failed and we were unable to recover it. 00:33:29.718 [2024-10-08 18:43:58.093518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.718 [2024-10-08 18:43:58.093543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.718 qpair failed and we were unable to recover it. 
00:33:29.718 [2024-10-08 18:43:58.093680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.718 [2024-10-08 18:43:58.093707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.718 qpair failed and we were unable to recover it. 00:33:29.718 [2024-10-08 18:43:58.093848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.718 [2024-10-08 18:43:58.093903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.718 qpair failed and we were unable to recover it. 00:33:29.718 [2024-10-08 18:43:58.094072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.718 [2024-10-08 18:43:58.094121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.718 qpair failed and we were unable to recover it. 00:33:29.718 [2024-10-08 18:43:58.094353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.718 [2024-10-08 18:43:58.094403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.718 qpair failed and we were unable to recover it. 00:33:29.718 [2024-10-08 18:43:58.094616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.718 [2024-10-08 18:43:58.094642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.718 qpair failed and we were unable to recover it. 00:33:29.718 [2024-10-08 18:43:58.094826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.718 [2024-10-08 18:43:58.094878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.718 qpair failed and we were unable to recover it. 00:33:29.718 [2024-10-08 18:43:58.095017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.718 [2024-10-08 18:43:58.095071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.718 qpair failed and we were unable to recover it. 00:33:29.718 [2024-10-08 18:43:58.095194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.718 [2024-10-08 18:43:58.095250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.718 qpair failed and we were unable to recover it. 00:33:29.718 [2024-10-08 18:43:58.095371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.718 [2024-10-08 18:43:58.095396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.718 qpair failed and we were unable to recover it. 00:33:29.718 [2024-10-08 18:43:58.095519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.718 [2024-10-08 18:43:58.095545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.718 qpair failed and we were unable to recover it. 
00:33:29.718 [2024-10-08 18:43:58.095665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.718 [2024-10-08 18:43:58.095691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.718 qpair failed and we were unable to recover it. 00:33:29.718 [2024-10-08 18:43:58.095845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.718 [2024-10-08 18:43:58.095897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.718 qpair failed and we were unable to recover it. 00:33:29.718 [2024-10-08 18:43:58.096047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.718 [2024-10-08 18:43:58.096093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.718 qpair failed and we were unable to recover it. 00:33:29.718 [2024-10-08 18:43:58.096339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.718 [2024-10-08 18:43:58.096387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.718 qpair failed and we were unable to recover it. 00:33:29.718 [2024-10-08 18:43:58.096578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.718 [2024-10-08 18:43:58.096604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.718 qpair failed and we were unable to recover it. 00:33:29.718 [2024-10-08 18:43:58.096844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.718 [2024-10-08 18:43:58.096890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.718 qpair failed and we were unable to recover it. 00:33:29.718 [2024-10-08 18:43:58.097131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.718 [2024-10-08 18:43:58.097178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.718 qpair failed and we were unable to recover it. 00:33:29.718 [2024-10-08 18:43:58.097392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.718 [2024-10-08 18:43:58.097443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.718 qpair failed and we were unable to recover it. 00:33:29.718 [2024-10-08 18:43:58.097588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.718 [2024-10-08 18:43:58.097612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.718 qpair failed and we were unable to recover it. 00:33:29.718 [2024-10-08 18:43:58.097798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.718 [2024-10-08 18:43:58.097866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 
00:33:29.719 [2024-10-08 18:43:58.098022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-10-08 18:43:58.098064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-10-08 18:43:58.098275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-10-08 18:43:58.098324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-10-08 18:43:58.098463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-10-08 18:43:58.098491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-10-08 18:43:58.098636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-10-08 18:43:58.098704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-10-08 18:43:58.098823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-10-08 18:43:58.098875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-10-08 18:43:58.099166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-10-08 18:43:58.099189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-10-08 18:43:58.099303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-10-08 18:43:58.099327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-10-08 18:43:58.099445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-10-08 18:43:58.099470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-10-08 18:43:58.099677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-10-08 18:43:58.099702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-10-08 18:43:58.099893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-10-08 18:43:58.099941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 
00:33:29.719 [2024-10-08 18:43:58.100186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-10-08 18:43:58.100210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-10-08 18:43:58.100418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-10-08 18:43:58.100465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-10-08 18:43:58.100661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-10-08 18:43:58.100686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-10-08 18:43:58.100884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-10-08 18:43:58.100909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-10-08 18:43:58.101107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-10-08 18:43:58.101157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-10-08 18:43:58.101354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-10-08 18:43:58.101403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-10-08 18:43:58.101618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-10-08 18:43:58.101642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-10-08 18:43:58.101839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-10-08 18:43:58.101890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-10-08 18:43:58.102162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-10-08 18:43:58.102210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-10-08 18:43:58.102361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-10-08 18:43:58.102409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 
00:33:29.719 [2024-10-08 18:43:58.102561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-10-08 18:43:58.102585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-10-08 18:43:58.102698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-10-08 18:43:58.102724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-10-08 18:43:58.102839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-10-08 18:43:58.102896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-10-08 18:43:58.103054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-10-08 18:43:58.103106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-10-08 18:43:58.103275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-10-08 18:43:58.103324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-10-08 18:43:58.103511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-10-08 18:43:58.103535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-10-08 18:43:58.103711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-10-08 18:43:58.103763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-10-08 18:43:58.103967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-10-08 18:43:58.104018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-10-08 18:43:58.104176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-10-08 18:43:58.104225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-10-08 18:43:58.104449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-10-08 18:43:58.104473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 
00:33:29.719 [2024-10-08 18:43:58.104656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-10-08 18:43:58.104681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-10-08 18:43:58.104931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-10-08 18:43:58.104988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-10-08 18:43:58.105144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-10-08 18:43:58.105195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-10-08 18:43:58.105407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-10-08 18:43:58.105455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-10-08 18:43:58.105679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-10-08 18:43:58.105704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-10-08 18:43:58.105896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-10-08 18:43:58.105945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-10-08 18:43:58.106159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-10-08 18:43:58.106207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-10-08 18:43:58.106435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-10-08 18:43:58.106485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-10-08 18:43:58.106633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.719 [2024-10-08 18:43:58.106679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.719 qpair failed and we were unable to recover it. 00:33:29.719 [2024-10-08 18:43:58.106813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.720 [2024-10-08 18:43:58.106853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.720 qpair failed and we were unable to recover it. 
00:33:29.720 [2024-10-08 18:43:58.107119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.720 [2024-10-08 18:43:58.107172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.720 qpair failed and we were unable to recover it. 00:33:29.720 [2024-10-08 18:43:58.107388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.720 [2024-10-08 18:43:58.107432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.720 qpair failed and we were unable to recover it. 00:33:29.720 [2024-10-08 18:43:58.107613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.720 [2024-10-08 18:43:58.107641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.720 qpair failed and we were unable to recover it. 00:33:29.720 [2024-10-08 18:43:58.107766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.720 [2024-10-08 18:43:58.107799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.720 qpair failed and we were unable to recover it. 00:33:29.720 [2024-10-08 18:43:58.107977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.720 [2024-10-08 18:43:58.108031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.720 qpair failed and we were unable to recover it. 00:33:29.720 [2024-10-08 18:43:58.108177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.720 [2024-10-08 18:43:58.108229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.720 qpair failed and we were unable to recover it. 00:33:29.720 [2024-10-08 18:43:58.108406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.720 [2024-10-08 18:43:58.108455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.720 qpair failed and we were unable to recover it. 00:33:29.720 [2024-10-08 18:43:58.108625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.720 [2024-10-08 18:43:58.108674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.720 qpair failed and we were unable to recover it. 00:33:29.720 [2024-10-08 18:43:58.108812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.720 [2024-10-08 18:43:58.108855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.720 qpair failed and we were unable to recover it. 00:33:29.720 [2024-10-08 18:43:58.109036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.720 [2024-10-08 18:43:58.109089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.720 qpair failed and we were unable to recover it. 
00:33:29.720 [2024-10-08 18:43:58.109286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.720 [2024-10-08 18:43:58.109332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.720 qpair failed and we were unable to recover it. 00:33:29.720 [2024-10-08 18:43:58.109483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.720 [2024-10-08 18:43:58.109507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.720 qpair failed and we were unable to recover it. 00:33:29.720 [2024-10-08 18:43:58.109720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.720 [2024-10-08 18:43:58.109783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.720 qpair failed and we were unable to recover it. 00:33:29.720 [2024-10-08 18:43:58.109974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.720 [2024-10-08 18:43:58.110023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.720 qpair failed and we were unable to recover it. 00:33:29.720 [2024-10-08 18:43:58.110278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.720 [2024-10-08 18:43:58.110326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.720 qpair failed and we were unable to recover it. 00:33:29.720 [2024-10-08 18:43:58.110504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.720 [2024-10-08 18:43:58.110528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.720 qpair failed and we were unable to recover it. 00:33:29.720 [2024-10-08 18:43:58.110732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.720 [2024-10-08 18:43:58.110781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.720 qpair failed and we were unable to recover it. 00:33:29.720 [2024-10-08 18:43:58.110977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.720 [2024-10-08 18:43:58.111026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.720 qpair failed and we were unable to recover it. 00:33:29.720 [2024-10-08 18:43:58.111207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.720 [2024-10-08 18:43:58.111276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.720 qpair failed and we were unable to recover it. 00:33:29.720 [2024-10-08 18:43:58.111450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.720 [2024-10-08 18:43:58.111473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.720 qpair failed and we were unable to recover it. 
00:33:29.720 [2024-10-08 18:43:58.111706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.720 [2024-10-08 18:43:58.111730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.720 qpair failed and we were unable to recover it. 00:33:29.720 [2024-10-08 18:43:58.111978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.720 [2024-10-08 18:43:58.112028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.720 qpair failed and we were unable to recover it. 00:33:29.720 [2024-10-08 18:43:58.112248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.720 [2024-10-08 18:43:58.112300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.720 qpair failed and we were unable to recover it. 00:33:29.720 [2024-10-08 18:43:58.112513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.720 [2024-10-08 18:43:58.112537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.720 qpair failed and we were unable to recover it. 00:33:29.720 [2024-10-08 18:43:58.112749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.720 [2024-10-08 18:43:58.112795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.720 qpair failed and we were unable to recover it. 00:33:29.720 [2024-10-08 18:43:58.112977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.720 [2024-10-08 18:43:58.113030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.720 qpair failed and we were unable to recover it. 00:33:29.720 [2024-10-08 18:43:58.113287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.720 [2024-10-08 18:43:58.113339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.720 qpair failed and we were unable to recover it. 00:33:29.720 [2024-10-08 18:43:58.113555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.720 [2024-10-08 18:43:58.113579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.720 qpair failed and we were unable to recover it. 00:33:29.720 [2024-10-08 18:43:58.113808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.720 [2024-10-08 18:43:58.113860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.720 qpair failed and we were unable to recover it. 00:33:29.720 [2024-10-08 18:43:58.114049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.720 [2024-10-08 18:43:58.114113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.720 qpair failed and we were unable to recover it. 
00:33:29.720 [2024-10-08 18:43:58.114313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.720 [2024-10-08 18:43:58.114357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.720 qpair failed and we were unable to recover it. 00:33:29.720 [2024-10-08 18:43:58.114494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.720 [2024-10-08 18:43:58.114518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.720 qpair failed and we were unable to recover it. 00:33:29.720 [2024-10-08 18:43:58.114663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.720 [2024-10-08 18:43:58.114688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.720 qpair failed and we were unable to recover it. 00:33:29.720 [2024-10-08 18:43:58.114939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.720 [2024-10-08 18:43:58.114994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.720 qpair failed and we were unable to recover it. 00:33:29.720 [2024-10-08 18:43:58.115269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.720 [2024-10-08 18:43:58.115319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.720 qpair failed and we were unable to recover it. 00:33:29.720 [2024-10-08 18:43:58.115419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.720 [2024-10-08 18:43:58.115443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.720 qpair failed and we were unable to recover it. 00:33:29.720 [2024-10-08 18:43:58.115653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.720 [2024-10-08 18:43:58.115679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.720 qpair failed and we were unable to recover it. 00:33:29.720 [2024-10-08 18:43:58.115829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.720 [2024-10-08 18:43:58.115879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.720 qpair failed and we were unable to recover it. 00:33:29.720 [2024-10-08 18:43:58.116051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.720 [2024-10-08 18:43:58.116103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.720 qpair failed and we were unable to recover it. 00:33:29.720 [2024-10-08 18:43:58.116254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.721 [2024-10-08 18:43:58.116304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.721 qpair failed and we were unable to recover it. 
00:33:29.721 [2024-10-08 18:43:58.116504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.721 [2024-10-08 18:43:58.116527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.721 qpair failed and we were unable to recover it. 00:33:29.721 [2024-10-08 18:43:58.116769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.721 [2024-10-08 18:43:58.116812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.721 qpair failed and we were unable to recover it. 00:33:29.721 [2024-10-08 18:43:58.116964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.721 [2024-10-08 18:43:58.117007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.721 qpair failed and we were unable to recover it. 00:33:29.721 [2024-10-08 18:43:58.117198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.721 [2024-10-08 18:43:58.117222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.721 qpair failed and we were unable to recover it. 00:33:29.721 [2024-10-08 18:43:58.117369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.721 [2024-10-08 18:43:58.117393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.721 qpair failed and we were unable to recover it. 00:33:29.721 [2024-10-08 18:43:58.117586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.721 [2024-10-08 18:43:58.117610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.721 qpair failed and we were unable to recover it. 00:33:29.721 [2024-10-08 18:43:58.117855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.721 [2024-10-08 18:43:58.117907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.721 qpair failed and we were unable to recover it. 00:33:29.721 [2024-10-08 18:43:58.118055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.721 [2024-10-08 18:43:58.118105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.721 qpair failed and we were unable to recover it. 00:33:29.721 [2024-10-08 18:43:58.118234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.721 [2024-10-08 18:43:58.118295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.721 qpair failed and we were unable to recover it. 00:33:29.721 [2024-10-08 18:43:58.118510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.721 [2024-10-08 18:43:58.118534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.721 qpair failed and we were unable to recover it. 
00:33:29.721 [2024-10-08 18:43:58.118748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.721 [2024-10-08 18:43:58.118815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.721 qpair failed and we were unable to recover it. 00:33:29.721 [2024-10-08 18:43:58.119022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.721 [2024-10-08 18:43:58.119074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.721 qpair failed and we were unable to recover it. 00:33:29.721 [2024-10-08 18:43:58.119248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.721 [2024-10-08 18:43:58.119296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.721 qpair failed and we were unable to recover it. 00:33:29.721 [2024-10-08 18:43:58.119471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.721 [2024-10-08 18:43:58.119494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.721 qpair failed and we were unable to recover it. 00:33:29.721 [2024-10-08 18:43:58.119700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.721 [2024-10-08 18:43:58.119725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.721 qpair failed and we were unable to recover it. 00:33:29.721 [2024-10-08 18:43:58.119910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.721 [2024-10-08 18:43:58.119958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.721 qpair failed and we were unable to recover it. 00:33:29.721 [2024-10-08 18:43:58.120132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.721 [2024-10-08 18:43:58.120182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.721 qpair failed and we were unable to recover it. 00:33:29.721 [2024-10-08 18:43:58.120322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.721 [2024-10-08 18:43:58.120346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.721 qpair failed and we were unable to recover it. 00:33:29.721 [2024-10-08 18:43:58.120508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.721 [2024-10-08 18:43:58.120546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.721 qpair failed and we were unable to recover it. 00:33:29.721 [2024-10-08 18:43:58.120702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.721 [2024-10-08 18:43:58.120742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.721 qpair failed and we were unable to recover it. 
00:33:29.721 [2024-10-08 18:43:58.120946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.721 [2024-10-08 18:43:58.121007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.721 qpair failed and we were unable to recover it. 00:33:29.721 [2024-10-08 18:43:58.121257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.721 [2024-10-08 18:43:58.121304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.721 qpair failed and we were unable to recover it. 00:33:29.721 [2024-10-08 18:43:58.121460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.721 [2024-10-08 18:43:58.121484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.721 qpair failed and we were unable to recover it. 00:33:29.721 [2024-10-08 18:43:58.121671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.721 [2024-10-08 18:43:58.121711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.721 qpair failed and we were unable to recover it. 00:33:29.721 [2024-10-08 18:43:58.121913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.721 [2024-10-08 18:43:58.121961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.721 qpair failed and we were unable to recover it. 00:33:29.721 [2024-10-08 18:43:58.122145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.721 [2024-10-08 18:43:58.122196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.721 qpair failed and we were unable to recover it. 00:33:29.721 [2024-10-08 18:43:58.122394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.721 [2024-10-08 18:43:58.122444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.721 qpair failed and we were unable to recover it. 00:33:29.721 [2024-10-08 18:43:58.122594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.721 [2024-10-08 18:43:58.122618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.721 qpair failed and we were unable to recover it. 00:33:29.721 [2024-10-08 18:43:58.122765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.721 [2024-10-08 18:43:58.122851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.721 qpair failed and we were unable to recover it. 00:33:29.721 [2024-10-08 18:43:58.123028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.721 [2024-10-08 18:43:58.123079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.721 qpair failed and we were unable to recover it. 
00:33:29.727 [2024-10-08 18:43:58.171592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.727 [2024-10-08 18:43:58.171616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.727 qpair failed and we were unable to recover it. 00:33:29.727 [2024-10-08 18:43:58.171804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.727 [2024-10-08 18:43:58.171829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.727 qpair failed and we were unable to recover it. 00:33:29.727 [2024-10-08 18:43:58.171947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.727 [2024-10-08 18:43:58.171972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.727 qpair failed and we were unable to recover it. 00:33:29.727 [2024-10-08 18:43:58.172156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.727 [2024-10-08 18:43:58.172180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.727 qpair failed and we were unable to recover it. 00:33:29.727 [2024-10-08 18:43:58.172386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.727 [2024-10-08 18:43:58.172410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.727 qpair failed and we were unable to recover it. 00:33:29.727 [2024-10-08 18:43:58.172654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.727 [2024-10-08 18:43:58.172681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.727 qpair failed and we were unable to recover it. 00:33:29.727 [2024-10-08 18:43:58.172861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.727 [2024-10-08 18:43:58.172887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.727 qpair failed and we were unable to recover it. 00:33:29.727 [2024-10-08 18:43:58.173154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.727 [2024-10-08 18:43:58.173202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.727 qpair failed and we were unable to recover it. 00:33:29.727 [2024-10-08 18:43:58.173404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.727 [2024-10-08 18:43:58.173454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.727 qpair failed and we were unable to recover it. 00:33:29.727 [2024-10-08 18:43:58.173696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.727 [2024-10-08 18:43:58.173721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.727 qpair failed and we were unable to recover it. 
00:33:29.727 [2024-10-08 18:43:58.173849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.727 [2024-10-08 18:43:58.173874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.727 qpair failed and we were unable to recover it. 00:33:29.727 [2024-10-08 18:43:58.174052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.727 [2024-10-08 18:43:58.174103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.727 qpair failed and we were unable to recover it. 00:33:29.727 [2024-10-08 18:43:58.174258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.727 [2024-10-08 18:43:58.174309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.727 qpair failed and we were unable to recover it. 00:33:29.727 [2024-10-08 18:43:58.174486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.727 [2024-10-08 18:43:58.174511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.727 qpair failed and we were unable to recover it. 00:33:29.727 [2024-10-08 18:43:58.174647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.727 [2024-10-08 18:43:58.174692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.727 qpair failed and we were unable to recover it. 00:33:29.727 [2024-10-08 18:43:58.174888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.727 [2024-10-08 18:43:58.174936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.727 qpair failed and we were unable to recover it. 00:33:29.727 [2024-10-08 18:43:58.175074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.727 [2024-10-08 18:43:58.175128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.727 qpair failed and we were unable to recover it. 00:33:29.727 [2024-10-08 18:43:58.175340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.727 [2024-10-08 18:43:58.175391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.727 qpair failed and we were unable to recover it. 00:33:29.727 [2024-10-08 18:43:58.175539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.727 [2024-10-08 18:43:58.175563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.727 qpair failed and we were unable to recover it. 00:33:29.727 [2024-10-08 18:43:58.175710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.727 [2024-10-08 18:43:58.175750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.727 qpair failed and we were unable to recover it. 
00:33:29.727 [2024-10-08 18:43:58.175898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.727 [2024-10-08 18:43:58.175954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.727 qpair failed and we were unable to recover it. 00:33:29.727 [2024-10-08 18:43:58.176111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.727 [2024-10-08 18:43:58.176162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.727 qpair failed and we were unable to recover it. 00:33:29.727 [2024-10-08 18:43:58.176311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.727 [2024-10-08 18:43:58.176360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.727 qpair failed and we were unable to recover it. 00:33:29.727 [2024-10-08 18:43:58.176489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.727 [2024-10-08 18:43:58.176528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.727 qpair failed and we were unable to recover it. 00:33:29.727 [2024-10-08 18:43:58.176675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.727 [2024-10-08 18:43:58.176701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.727 qpair failed and we were unable to recover it. 00:33:29.727 [2024-10-08 18:43:58.176867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.727 [2024-10-08 18:43:58.176908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.727 qpair failed and we were unable to recover it. 00:33:29.727 [2024-10-08 18:43:58.177084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.727 [2024-10-08 18:43:58.177108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.727 qpair failed and we were unable to recover it. 00:33:29.727 [2024-10-08 18:43:58.177291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.727 [2024-10-08 18:43:58.177339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.727 qpair failed and we were unable to recover it. 00:33:29.727 [2024-10-08 18:43:58.177569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.727 [2024-10-08 18:43:58.177593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.727 qpair failed and we were unable to recover it. 00:33:29.727 [2024-10-08 18:43:58.177756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.727 [2024-10-08 18:43:58.177807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.727 qpair failed and we were unable to recover it. 
00:33:29.727 [2024-10-08 18:43:58.177914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.727 [2024-10-08 18:43:58.177979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.727 qpair failed and we were unable to recover it. 00:33:29.727 [2024-10-08 18:43:58.178204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.727 [2024-10-08 18:43:58.178255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.727 qpair failed and we were unable to recover it. 00:33:29.727 [2024-10-08 18:43:58.178471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.727 [2024-10-08 18:43:58.178495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.727 qpair failed and we were unable to recover it. 00:33:29.727 [2024-10-08 18:43:58.178677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.727 [2024-10-08 18:43:58.178730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.727 qpair failed and we were unable to recover it. 00:33:29.727 [2024-10-08 18:43:58.178861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.727 [2024-10-08 18:43:58.178911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.727 qpair failed and we were unable to recover it. 00:33:29.727 [2024-10-08 18:43:58.179052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.727 [2024-10-08 18:43:58.179105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.727 qpair failed and we were unable to recover it. 00:33:29.727 [2024-10-08 18:43:58.179290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.727 [2024-10-08 18:43:58.179337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.727 qpair failed and we were unable to recover it. 00:33:29.727 [2024-10-08 18:43:58.179560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.727 [2024-10-08 18:43:58.179584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.727 qpair failed and we were unable to recover it. 00:33:29.727 [2024-10-08 18:43:58.179741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.728 [2024-10-08 18:43:58.179792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.728 qpair failed and we were unable to recover it. 00:33:29.728 [2024-10-08 18:43:58.179944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.728 [2024-10-08 18:43:58.179997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.728 qpair failed and we were unable to recover it. 
00:33:29.728 [2024-10-08 18:43:58.180137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.728 [2024-10-08 18:43:58.180192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.728 qpair failed and we were unable to recover it. 00:33:29.728 [2024-10-08 18:43:58.180334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.728 [2024-10-08 18:43:58.180358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.728 qpair failed and we were unable to recover it. 00:33:29.728 [2024-10-08 18:43:58.180519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.728 [2024-10-08 18:43:58.180557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.728 qpair failed and we were unable to recover it. 00:33:29.728 [2024-10-08 18:43:58.180699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.728 [2024-10-08 18:43:58.180751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.728 qpair failed and we were unable to recover it. 00:33:29.728 [2024-10-08 18:43:58.180887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.728 [2024-10-08 18:43:58.180943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.728 qpair failed and we were unable to recover it. 00:33:29.728 [2024-10-08 18:43:58.181112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.728 [2024-10-08 18:43:58.181136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.728 qpair failed and we were unable to recover it. 00:33:29.728 [2024-10-08 18:43:58.181271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.728 [2024-10-08 18:43:58.181310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.728 qpair failed and we were unable to recover it. 00:33:29.728 [2024-10-08 18:43:58.181438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.728 [2024-10-08 18:43:58.181463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.728 qpair failed and we were unable to recover it. 00:33:29.728 [2024-10-08 18:43:58.181608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.728 [2024-10-08 18:43:58.181633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.728 qpair failed and we were unable to recover it. 00:33:29.728 [2024-10-08 18:43:58.181784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.728 [2024-10-08 18:43:58.181809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.728 qpair failed and we were unable to recover it. 
00:33:29.728 [2024-10-08 18:43:58.181941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.728 [2024-10-08 18:43:58.181965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.728 qpair failed and we were unable to recover it. 00:33:29.728 [2024-10-08 18:43:58.182131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.728 [2024-10-08 18:43:58.182155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.728 qpair failed and we were unable to recover it. 00:33:29.728 [2024-10-08 18:43:58.182280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.728 [2024-10-08 18:43:58.182305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.728 qpair failed and we were unable to recover it. 00:33:29.728 [2024-10-08 18:43:58.182447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.728 [2024-10-08 18:43:58.182471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.728 qpair failed and we were unable to recover it. 00:33:29.728 [2024-10-08 18:43:58.182613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.728 [2024-10-08 18:43:58.182637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.728 qpair failed and we were unable to recover it. 00:33:29.728 [2024-10-08 18:43:58.182776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.728 [2024-10-08 18:43:58.182801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.728 qpair failed and we were unable to recover it. 00:33:29.728 [2024-10-08 18:43:58.182954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.728 [2024-10-08 18:43:58.182979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.728 qpair failed and we were unable to recover it. 00:33:29.728 [2024-10-08 18:43:58.183131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.728 [2024-10-08 18:43:58.183155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.728 qpair failed and we were unable to recover it. 00:33:29.728 [2024-10-08 18:43:58.183282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.728 [2024-10-08 18:43:58.183307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.728 qpair failed and we were unable to recover it. 00:33:29.728 [2024-10-08 18:43:58.183480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.728 [2024-10-08 18:43:58.183518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.728 qpair failed and we were unable to recover it. 
00:33:29.728 [2024-10-08 18:43:58.183625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.728 [2024-10-08 18:43:58.183658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.728 qpair failed and we were unable to recover it. 00:33:29.728 [2024-10-08 18:43:58.183798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.728 [2024-10-08 18:43:58.183824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.728 qpair failed and we were unable to recover it. 00:33:29.728 [2024-10-08 18:43:58.183972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.728 [2024-10-08 18:43:58.184011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.728 qpair failed and we were unable to recover it. 00:33:29.728 [2024-10-08 18:43:58.184174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.728 [2024-10-08 18:43:58.184197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.728 qpair failed and we were unable to recover it. 00:33:29.728 [2024-10-08 18:43:58.184355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.728 [2024-10-08 18:43:58.184394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.728 qpair failed and we were unable to recover it. 00:33:29.728 [2024-10-08 18:43:58.184563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.728 [2024-10-08 18:43:58.184586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.728 qpair failed and we were unable to recover it. 00:33:29.728 [2024-10-08 18:43:58.184709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.728 [2024-10-08 18:43:58.184735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.728 qpair failed and we were unable to recover it. 00:33:29.728 [2024-10-08 18:43:58.184921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.728 [2024-10-08 18:43:58.184973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.728 qpair failed and we were unable to recover it. 00:33:29.728 [2024-10-08 18:43:58.185124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.728 [2024-10-08 18:43:58.185177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.728 qpair failed and we were unable to recover it. 00:33:29.728 [2024-10-08 18:43:58.185336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.728 [2024-10-08 18:43:58.185361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.728 qpair failed and we were unable to recover it. 
00:33:29.728 [2024-10-08 18:43:58.185537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.728 [2024-10-08 18:43:58.185560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.728 qpair failed and we were unable to recover it. 00:33:29.728 [2024-10-08 18:43:58.185723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.728 [2024-10-08 18:43:58.185781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.728 qpair failed and we were unable to recover it. 00:33:29.728 [2024-10-08 18:43:58.185916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.728 [2024-10-08 18:43:58.185975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.728 qpair failed and we were unable to recover it. 00:33:29.728 [2024-10-08 18:43:58.186126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.728 [2024-10-08 18:43:58.186176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.728 qpair failed and we were unable to recover it. 00:33:29.728 [2024-10-08 18:43:58.186338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.728 [2024-10-08 18:43:58.186361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.728 qpair failed and we were unable to recover it. 00:33:29.728 [2024-10-08 18:43:58.186556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.728 [2024-10-08 18:43:58.186580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.728 qpair failed and we were unable to recover it. 00:33:29.728 [2024-10-08 18:43:58.186758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.728 [2024-10-08 18:43:58.186824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.728 qpair failed and we were unable to recover it. 00:33:29.729 [2024-10-08 18:43:58.187017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.729 [2024-10-08 18:43:58.187068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.729 qpair failed and we were unable to recover it. 00:33:29.729 [2024-10-08 18:43:58.187259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.729 [2024-10-08 18:43:58.187307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.729 qpair failed and we were unable to recover it. 00:33:29.729 [2024-10-08 18:43:58.187495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.729 [2024-10-08 18:43:58.187522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.729 qpair failed and we were unable to recover it. 
00:33:29.729 [2024-10-08 18:43:58.187768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.729 [2024-10-08 18:43:58.187815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.729 qpair failed and we were unable to recover it. 00:33:29.729 [2024-10-08 18:43:58.188015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.729 [2024-10-08 18:43:58.188065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.729 qpair failed and we were unable to recover it. 00:33:29.729 [2024-10-08 18:43:58.188254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.729 [2024-10-08 18:43:58.188301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.729 qpair failed and we were unable to recover it. 00:33:29.729 [2024-10-08 18:43:58.188559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.729 [2024-10-08 18:43:58.188583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.729 qpair failed and we were unable to recover it. 00:33:29.729 [2024-10-08 18:43:58.188733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.729 [2024-10-08 18:43:58.188820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.729 qpair failed and we were unable to recover it. 00:33:29.729 [2024-10-08 18:43:58.189017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.729 [2024-10-08 18:43:58.189068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.729 qpair failed and we were unable to recover it. 00:33:29.729 [2024-10-08 18:43:58.189204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.729 [2024-10-08 18:43:58.189259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.729 qpair failed and we were unable to recover it. 00:33:29.729 [2024-10-08 18:43:58.189511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.729 [2024-10-08 18:43:58.189536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.729 qpair failed and we were unable to recover it. 00:33:29.729 [2024-10-08 18:43:58.189722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.729 [2024-10-08 18:43:58.189747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.729 qpair failed and we were unable to recover it. 00:33:29.729 [2024-10-08 18:43:58.189865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.729 [2024-10-08 18:43:58.189921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.729 qpair failed and we were unable to recover it. 
00:33:29.729 [2024-10-08 18:43:58.190037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.729 [2024-10-08 18:43:58.190077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.729 qpair failed and we were unable to recover it. 00:33:29.729 [2024-10-08 18:43:58.190263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.729 [2024-10-08 18:43:58.190287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.729 qpair failed and we were unable to recover it. 00:33:29.729 [2024-10-08 18:43:58.190489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.729 [2024-10-08 18:43:58.190514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.729 qpair failed and we were unable to recover it. 00:33:29.729 [2024-10-08 18:43:58.190720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.729 [2024-10-08 18:43:58.190746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.729 qpair failed and we were unable to recover it. 00:33:29.729 [2024-10-08 18:43:58.190840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.729 [2024-10-08 18:43:58.190880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.729 qpair failed and we were unable to recover it. 00:33:29.729 [2024-10-08 18:43:58.191004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.729 [2024-10-08 18:43:58.191028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.729 qpair failed and we were unable to recover it. 00:33:29.729 [2024-10-08 18:43:58.191213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.729 [2024-10-08 18:43:58.191237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.729 qpair failed and we were unable to recover it. 00:33:29.729 [2024-10-08 18:43:58.191379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.729 [2024-10-08 18:43:58.191403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.729 qpair failed and we were unable to recover it. 00:33:29.729 [2024-10-08 18:43:58.191596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.729 [2024-10-08 18:43:58.191620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.729 qpair failed and we were unable to recover it. 00:33:29.729 [2024-10-08 18:43:58.191757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.729 [2024-10-08 18:43:58.191817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.729 qpair failed and we were unable to recover it. 
00:33:29.729 [2024-10-08 18:43:58.192017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.729 [2024-10-08 18:43:58.192040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.729 qpair failed and we were unable to recover it. 00:33:29.729 [2024-10-08 18:43:58.192250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.729 [2024-10-08 18:43:58.192300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.729 qpair failed and we were unable to recover it. 00:33:29.729 [2024-10-08 18:43:58.192436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.729 [2024-10-08 18:43:58.192459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.729 qpair failed and we were unable to recover it. 00:33:29.729 [2024-10-08 18:43:58.192621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.729 [2024-10-08 18:43:58.192646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.729 qpair failed and we were unable to recover it. 00:33:29.729 [2024-10-08 18:43:58.192779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.729 [2024-10-08 18:43:58.192837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.729 qpair failed and we were unable to recover it. 00:33:29.729 [2024-10-08 18:43:58.193044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.729 [2024-10-08 18:43:58.193092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.729 qpair failed and we were unable to recover it. 00:33:29.729 [2024-10-08 18:43:58.193280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.729 [2024-10-08 18:43:58.193331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.729 qpair failed and we were unable to recover it. 00:33:29.729 [2024-10-08 18:43:58.193487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.729 [2024-10-08 18:43:58.193512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.729 qpair failed and we were unable to recover it. 00:33:29.729 [2024-10-08 18:43:58.193724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.729 [2024-10-08 18:43:58.193775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.729 qpair failed and we were unable to recover it. 00:33:29.729 [2024-10-08 18:43:58.193900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.729 [2024-10-08 18:43:58.193969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.729 qpair failed and we were unable to recover it. 
00:33:29.729 [2024-10-08 18:43:58.194066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.729 [2024-10-08 18:43:58.194131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.729 qpair failed and we were unable to recover it. 00:33:29.729 [2024-10-08 18:43:58.194270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.729 [2024-10-08 18:43:58.194295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.729 qpair failed and we were unable to recover it. 00:33:29.729 [2024-10-08 18:43:58.194425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.729 [2024-10-08 18:43:58.194450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.729 qpair failed and we were unable to recover it. 00:33:29.729 [2024-10-08 18:43:58.194606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.729 [2024-10-08 18:43:58.194644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.729 qpair failed and we were unable to recover it. 00:33:29.729 [2024-10-08 18:43:58.194816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.729 [2024-10-08 18:43:58.194868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.729 qpair failed and we were unable to recover it. 00:33:29.729 [2024-10-08 18:43:58.195020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.729 [2024-10-08 18:43:58.195070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.729 qpair failed and we were unable to recover it. 00:33:29.729 [2024-10-08 18:43:58.195235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.730 [2024-10-08 18:43:58.195286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.730 qpair failed and we were unable to recover it. 00:33:29.730 [2024-10-08 18:43:58.195434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.730 [2024-10-08 18:43:58.195458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.730 qpair failed and we were unable to recover it. 00:33:29.730 [2024-10-08 18:43:58.195610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.730 [2024-10-08 18:43:58.195656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.730 qpair failed and we were unable to recover it. 00:33:29.730 [2024-10-08 18:43:58.195756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.730 [2024-10-08 18:43:58.195785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.730 qpair failed and we were unable to recover it. 
00:33:29.730 [2024-10-08 18:43:58.195917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.730 [2024-10-08 18:43:58.195957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.730 qpair failed and we were unable to recover it. 00:33:29.730 [2024-10-08 18:43:58.196054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.730 [2024-10-08 18:43:58.196079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.730 qpair failed and we were unable to recover it. 00:33:29.730 [2024-10-08 18:43:58.196223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.730 [2024-10-08 18:43:58.196248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.730 qpair failed and we were unable to recover it. 00:33:29.730 [2024-10-08 18:43:58.196397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.730 [2024-10-08 18:43:58.196422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.730 qpair failed and we were unable to recover it. 00:33:29.730 [2024-10-08 18:43:58.196597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.730 [2024-10-08 18:43:58.196621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.730 qpair failed and we were unable to recover it. 00:33:29.730 [2024-10-08 18:43:58.196788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.730 [2024-10-08 18:43:58.196813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.730 qpair failed and we were unable to recover it. 00:33:29.730 [2024-10-08 18:43:58.197003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.730 [2024-10-08 18:43:58.197042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.730 qpair failed and we were unable to recover it. 00:33:29.730 [2024-10-08 18:43:58.197262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.730 [2024-10-08 18:43:58.197286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.730 qpair failed and we were unable to recover it. 00:33:29.730 [2024-10-08 18:43:58.197420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.730 [2024-10-08 18:43:58.197453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.730 qpair failed and we were unable to recover it. 00:33:29.730 [2024-10-08 18:43:58.197608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.730 [2024-10-08 18:43:58.197649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.730 qpair failed and we were unable to recover it. 
00:33:29.730 [2024-10-08 18:43:58.197838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.730 [2024-10-08 18:43:58.197863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.730 qpair failed and we were unable to recover it. 00:33:29.730 [2024-10-08 18:43:58.197984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.730 [2024-10-08 18:43:58.198042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.730 qpair failed and we were unable to recover it. 00:33:29.730 [2024-10-08 18:43:58.198238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.730 [2024-10-08 18:43:58.198287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.730 qpair failed and we were unable to recover it. 00:33:29.730 [2024-10-08 18:43:58.198465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.730 [2024-10-08 18:43:58.198489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.730 qpair failed and we were unable to recover it. 00:33:29.730 [2024-10-08 18:43:58.198685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.730 [2024-10-08 18:43:58.198728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.730 qpair failed and we were unable to recover it. 00:33:29.730 [2024-10-08 18:43:58.198849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.730 [2024-10-08 18:43:58.198873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.730 qpair failed and we were unable to recover it. 00:33:29.730 [2024-10-08 18:43:58.199026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.730 [2024-10-08 18:43:58.199051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.730 qpair failed and we were unable to recover it. 00:33:29.730 [2024-10-08 18:43:58.199225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.730 [2024-10-08 18:43:58.199249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.730 qpair failed and we were unable to recover it. 00:33:29.730 [2024-10-08 18:43:58.199392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.730 [2024-10-08 18:43:58.199417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.730 qpair failed and we were unable to recover it. 00:33:29.730 [2024-10-08 18:43:58.199538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.730 [2024-10-08 18:43:58.199562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.730 qpair failed and we were unable to recover it. 
00:33:29.730 [2024-10-08 18:43:58.199729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.730 [2024-10-08 18:43:58.199756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.730 qpair failed and we were unable to recover it. 00:33:29.730 [2024-10-08 18:43:58.199851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.730 [2024-10-08 18:43:58.199876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.730 qpair failed and we were unable to recover it. 00:33:29.730 [2024-10-08 18:43:58.200079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.730 [2024-10-08 18:43:58.200103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.730 qpair failed and we were unable to recover it. 00:33:29.730 [2024-10-08 18:43:58.200245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.730 [2024-10-08 18:43:58.200269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.730 qpair failed and we were unable to recover it. 00:33:29.730 [2024-10-08 18:43:58.200433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.730 [2024-10-08 18:43:58.200471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.730 qpair failed and we were unable to recover it. 00:33:29.730 [2024-10-08 18:43:58.200648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.730 [2024-10-08 18:43:58.200693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.730 qpair failed and we were unable to recover it. 00:33:29.730 [2024-10-08 18:43:58.200846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.730 [2024-10-08 18:43:58.200897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.730 qpair failed and we were unable to recover it. 00:33:29.730 [2024-10-08 18:43:58.201065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.730 [2024-10-08 18:43:58.201116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.730 qpair failed and we were unable to recover it. 00:33:29.730 [2024-10-08 18:43:58.201368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.730 [2024-10-08 18:43:58.201415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.730 qpair failed and we were unable to recover it. 00:33:29.730 [2024-10-08 18:43:58.201596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.730 [2024-10-08 18:43:58.201620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.730 qpair failed and we were unable to recover it. 
00:33:29.730 [2024-10-08 18:43:58.201825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.730 [2024-10-08 18:43:58.201877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.730 qpair failed and we were unable to recover it. 00:33:29.730 [2024-10-08 18:43:58.202102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.730 [2024-10-08 18:43:58.202151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.730 qpair failed and we were unable to recover it. 00:33:29.730 [2024-10-08 18:43:58.202293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.730 [2024-10-08 18:43:58.202343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.730 qpair failed and we were unable to recover it. 00:33:29.730 [2024-10-08 18:43:58.202560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.730 [2024-10-08 18:43:58.202585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.730 qpair failed and we were unable to recover it. 00:33:29.730 [2024-10-08 18:43:58.202768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.730 [2024-10-08 18:43:58.202793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.730 qpair failed and we were unable to recover it. 00:33:29.730 [2024-10-08 18:43:58.202912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.730 [2024-10-08 18:43:58.202967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.730 qpair failed and we were unable to recover it. 00:33:29.730 [2024-10-08 18:43:58.203171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.731 [2024-10-08 18:43:58.203223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.731 qpair failed and we were unable to recover it. 00:33:29.731 [2024-10-08 18:43:58.203474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.731 [2024-10-08 18:43:58.203525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.731 qpair failed and we were unable to recover it. 00:33:29.731 [2024-10-08 18:43:58.203676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.731 [2024-10-08 18:43:58.203701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.731 qpair failed and we were unable to recover it. 00:33:29.731 [2024-10-08 18:43:58.203887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.731 [2024-10-08 18:43:58.203948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.731 qpair failed and we were unable to recover it. 
00:33:29.731 [2024-10-08 18:43:58.204176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.731 [2024-10-08 18:43:58.204227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.731 qpair failed and we were unable to recover it. 00:33:29.731 [2024-10-08 18:43:58.204434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.731 [2024-10-08 18:43:58.204458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.731 qpair failed and we were unable to recover it. 00:33:29.731 [2024-10-08 18:43:58.204598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.731 [2024-10-08 18:43:58.204622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.731 qpair failed and we were unable to recover it. 00:33:29.731 [2024-10-08 18:43:58.204816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.731 [2024-10-08 18:43:58.204873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.731 qpair failed and we were unable to recover it. 00:33:29.731 [2024-10-08 18:43:58.205049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.731 [2024-10-08 18:43:58.205100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.731 qpair failed and we were unable to recover it. 00:33:29.731 [2024-10-08 18:43:58.205319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.731 [2024-10-08 18:43:58.205371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.731 qpair failed and we were unable to recover it. 00:33:29.731 [2024-10-08 18:43:58.205562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.731 [2024-10-08 18:43:58.205586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.731 qpair failed and we were unable to recover it. 00:33:29.731 [2024-10-08 18:43:58.205715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.731 [2024-10-08 18:43:58.205756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.731 qpair failed and we were unable to recover it. 00:33:29.731 [2024-10-08 18:43:58.205946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.731 [2024-10-08 18:43:58.205996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.731 qpair failed and we were unable to recover it. 00:33:29.731 [2024-10-08 18:43:58.206166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.731 [2024-10-08 18:43:58.206217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.731 qpair failed and we were unable to recover it. 
00:33:29.731 [2024-10-08 18:43:58.206370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.731 [2024-10-08 18:43:58.206422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.731 qpair failed and we were unable to recover it. 00:33:29.731 [2024-10-08 18:43:58.206602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.731 [2024-10-08 18:43:58.206626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.731 qpair failed and we were unable to recover it. 00:33:29.731 [2024-10-08 18:43:58.206766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.731 [2024-10-08 18:43:58.206807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.731 qpair failed and we were unable to recover it. 00:33:29.731 [2024-10-08 18:43:58.206987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.731 [2024-10-08 18:43:58.207034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.731 qpair failed and we were unable to recover it. 00:33:29.731 [2024-10-08 18:43:58.207173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.731 [2024-10-08 18:43:58.207225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.731 qpair failed and we were unable to recover it. 00:33:29.731 [2024-10-08 18:43:58.207369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.731 [2024-10-08 18:43:58.207409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.731 qpair failed and we were unable to recover it. 00:33:29.731 [2024-10-08 18:43:58.207515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.731 [2024-10-08 18:43:58.207539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.731 qpair failed and we were unable to recover it. 00:33:29.731 [2024-10-08 18:43:58.207663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.731 [2024-10-08 18:43:58.207689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.731 qpair failed and we were unable to recover it. 00:33:29.731 [2024-10-08 18:43:58.207800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.731 [2024-10-08 18:43:58.207826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.731 qpair failed and we were unable to recover it. 00:33:29.731 [2024-10-08 18:43:58.207998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.731 [2024-10-08 18:43:58.208037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.731 qpair failed and we were unable to recover it. 
00:33:29.731 [2024-10-08 18:43:58.208145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.731 [2024-10-08 18:43:58.208184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.731 qpair failed and we were unable to recover it. 00:33:29.731 [2024-10-08 18:43:58.208306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.731 [2024-10-08 18:43:58.208331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.731 qpair failed and we were unable to recover it. 00:33:29.731 [2024-10-08 18:43:58.208502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.731 [2024-10-08 18:43:58.208551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.731 qpair failed and we were unable to recover it. 00:33:29.731 [2024-10-08 18:43:58.208700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.731 [2024-10-08 18:43:58.208727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.731 qpair failed and we were unable to recover it. 00:33:29.731 [2024-10-08 18:43:58.208858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.731 [2024-10-08 18:43:58.208884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.731 qpair failed and we were unable to recover it. 00:33:29.731 [2024-10-08 18:43:58.209030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.731 [2024-10-08 18:43:58.209069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.731 qpair failed and we were unable to recover it. 00:33:29.731 [2024-10-08 18:43:58.209223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.731 [2024-10-08 18:43:58.209248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.731 qpair failed and we were unable to recover it. 00:33:29.731 [2024-10-08 18:43:58.209448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.731 [2024-10-08 18:43:58.209471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.731 qpair failed and we were unable to recover it. 00:33:29.731 [2024-10-08 18:43:58.209634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.731 [2024-10-08 18:43:58.209674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.731 qpair failed and we were unable to recover it. 00:33:29.731 [2024-10-08 18:43:58.209821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.731 [2024-10-08 18:43:58.209871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.731 qpair failed and we were unable to recover it. 
00:33:29.731 [2024-10-08 18:43:58.210090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.732 [2024-10-08 18:43:58.210115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.732 qpair failed and we were unable to recover it. 00:33:29.732 [2024-10-08 18:43:58.210327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.732 [2024-10-08 18:43:58.210377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.732 qpair failed and we were unable to recover it. 00:33:29.732 [2024-10-08 18:43:58.210529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.732 [2024-10-08 18:43:58.210554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.732 qpair failed and we were unable to recover it. 00:33:29.732 [2024-10-08 18:43:58.210696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.732 [2024-10-08 18:43:58.210721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.732 qpair failed and we were unable to recover it. 00:33:29.732 [2024-10-08 18:43:58.210856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.732 [2024-10-08 18:43:58.210927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.732 qpair failed and we were unable to recover it. 00:33:29.732 [2024-10-08 18:43:58.211077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.732 [2024-10-08 18:43:58.211165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.732 qpair failed and we were unable to recover it. 00:33:29.732 [2024-10-08 18:43:58.211312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.732 [2024-10-08 18:43:58.211375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.732 qpair failed and we were unable to recover it. 00:33:29.732 [2024-10-08 18:43:58.211564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.732 [2024-10-08 18:43:58.211588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.732 qpair failed and we were unable to recover it. 00:33:29.732 [2024-10-08 18:43:58.211732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.732 [2024-10-08 18:43:58.211787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.732 qpair failed and we were unable to recover it. 00:33:29.732 [2024-10-08 18:43:58.211905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.732 [2024-10-08 18:43:58.211934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.732 qpair failed and we were unable to recover it. 
00:33:29.732 [2024-10-08 18:43:58.212091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.732 [2024-10-08 18:43:58.212116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.732 qpair failed and we were unable to recover it. 00:33:29.732 [2024-10-08 18:43:58.212270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.732 [2024-10-08 18:43:58.212296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.732 qpair failed and we were unable to recover it. 00:33:29.732 [2024-10-08 18:43:58.212461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.732 [2024-10-08 18:43:58.212486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.732 qpair failed and we were unable to recover it. 00:33:29.732 [2024-10-08 18:43:58.212623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.732 [2024-10-08 18:43:58.212659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.732 qpair failed and we were unable to recover it. 00:33:29.732 [2024-10-08 18:43:58.212770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.732 [2024-10-08 18:43:58.212796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.732 qpair failed and we were unable to recover it. 00:33:29.732 [2024-10-08 18:43:58.212999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.732 [2024-10-08 18:43:58.213047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.732 qpair failed and we were unable to recover it. 00:33:29.732 [2024-10-08 18:43:58.213173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.732 [2024-10-08 18:43:58.213225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.732 qpair failed and we were unable to recover it. 00:33:29.732 [2024-10-08 18:43:58.213406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.732 [2024-10-08 18:43:58.213430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.732 qpair failed and we were unable to recover it. 00:33:29.732 [2024-10-08 18:43:58.213572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.732 [2024-10-08 18:43:58.213598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.732 qpair failed and we were unable to recover it. 00:33:29.732 [2024-10-08 18:43:58.213728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.732 [2024-10-08 18:43:58.213755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.732 qpair failed and we were unable to recover it. 
00:33:29.732 [2024-10-08 18:43:58.213876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.732 [2024-10-08 18:43:58.213903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.732 qpair failed and we were unable to recover it. 00:33:29.732 [2024-10-08 18:43:58.214038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.732 [2024-10-08 18:43:58.214063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:29.732 qpair failed and we were unable to recover it. 00:33:30.000 [2024-10-08 18:43:58.214238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.000 [2024-10-08 18:43:58.214265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.000 qpair failed and we were unable to recover it. 00:33:30.000 [2024-10-08 18:43:58.214427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.000 [2024-10-08 18:43:58.214453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.000 qpair failed and we were unable to recover it. 00:33:30.000 [2024-10-08 18:43:58.214574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.000 [2024-10-08 18:43:58.214600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.000 qpair failed and we were unable to recover it. 00:33:30.000 [2024-10-08 18:43:58.214737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.000 [2024-10-08 18:43:58.214764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.000 qpair failed and we were unable to recover it. 00:33:30.000 [2024-10-08 18:43:58.214927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.000 [2024-10-08 18:43:58.214952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.000 qpair failed and we were unable to recover it. 00:33:30.000 [2024-10-08 18:43:58.215097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.000 [2024-10-08 18:43:58.215120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.000 qpair failed and we were unable to recover it. 00:33:30.000 [2024-10-08 18:43:58.215247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.000 [2024-10-08 18:43:58.215275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.000 qpair failed and we were unable to recover it. 00:33:30.000 [2024-10-08 18:43:58.215452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.000 [2024-10-08 18:43:58.215478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.000 qpair failed and we were unable to recover it. 
00:33:30.000 [2024-10-08 18:43:58.215631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.000 [2024-10-08 18:43:58.215682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.000 qpair failed and we were unable to recover it. 00:33:30.000 [2024-10-08 18:43:58.215775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.000 [2024-10-08 18:43:58.215800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.000 qpair failed and we were unable to recover it. 00:33:30.000 [2024-10-08 18:43:58.215953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.000 [2024-10-08 18:43:58.215977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.000 qpair failed and we were unable to recover it. 00:33:30.000 [2024-10-08 18:43:58.216131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.000 [2024-10-08 18:43:58.216178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.000 qpair failed and we were unable to recover it. 00:33:30.000 [2024-10-08 18:43:58.216314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.000 [2024-10-08 18:43:58.216339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.000 qpair failed and we were unable to recover it. 00:33:30.000 [2024-10-08 18:43:58.216452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.000 [2024-10-08 18:43:58.216477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.000 qpair failed and we were unable to recover it. 00:33:30.000 [2024-10-08 18:43:58.216623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.000 [2024-10-08 18:43:58.216687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.000 qpair failed and we were unable to recover it. 00:33:30.000 [2024-10-08 18:43:58.216834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.000 [2024-10-08 18:43:58.216862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.000 qpair failed and we were unable to recover it. 00:33:30.000 [2024-10-08 18:43:58.216972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.000 [2024-10-08 18:43:58.216999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.000 qpair failed and we were unable to recover it. 00:33:30.001 [2024-10-08 18:43:58.217101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.001 [2024-10-08 18:43:58.217127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.001 qpair failed and we were unable to recover it. 
00:33:30.001 [2024-10-08 18:43:58.217253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.001 [2024-10-08 18:43:58.217278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.001 qpair failed and we were unable to recover it. 00:33:30.001 [2024-10-08 18:43:58.217429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.001 [2024-10-08 18:43:58.217455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.001 qpair failed and we were unable to recover it. 00:33:30.001 [2024-10-08 18:43:58.217644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.001 [2024-10-08 18:43:58.217699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.001 qpair failed and we were unable to recover it. 00:33:30.001 [2024-10-08 18:43:58.217836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.001 [2024-10-08 18:43:58.217861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.001 qpair failed and we were unable to recover it. 00:33:30.001 [2024-10-08 18:43:58.218037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.001 [2024-10-08 18:43:58.218087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.001 qpair failed and we were unable to recover it. 00:33:30.001 [2024-10-08 18:43:58.218311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.001 [2024-10-08 18:43:58.218358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.001 qpair failed and we were unable to recover it. 00:33:30.001 [2024-10-08 18:43:58.218590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.001 [2024-10-08 18:43:58.218616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.001 qpair failed and we were unable to recover it. 00:33:30.001 [2024-10-08 18:43:58.218773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.001 [2024-10-08 18:43:58.218799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.001 qpair failed and we were unable to recover it. 00:33:30.001 [2024-10-08 18:43:58.218918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.001 [2024-10-08 18:43:58.218971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.001 qpair failed and we were unable to recover it. 00:33:30.001 [2024-10-08 18:43:58.219103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.001 [2024-10-08 18:43:58.219165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.001 qpair failed and we were unable to recover it. 
00:33:30.001 [2024-10-08 18:43:58.219311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.001 [2024-10-08 18:43:58.219363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.001 qpair failed and we were unable to recover it. 00:33:30.001 [2024-10-08 18:43:58.219536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.001 [2024-10-08 18:43:58.219560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.001 qpair failed and we were unable to recover it. 00:33:30.001 [2024-10-08 18:43:58.219767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.001 [2024-10-08 18:43:58.219823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.001 qpair failed and we were unable to recover it. 00:33:30.001 [2024-10-08 18:43:58.220009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.001 [2024-10-08 18:43:58.220034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.001 qpair failed and we were unable to recover it. 00:33:30.001 [2024-10-08 18:43:58.220211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.001 [2024-10-08 18:43:58.220235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.001 qpair failed and we were unable to recover it. 00:33:30.001 [2024-10-08 18:43:58.220383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.001 [2024-10-08 18:43:58.220407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.001 qpair failed and we were unable to recover it. 00:33:30.001 [2024-10-08 18:43:58.220557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.001 [2024-10-08 18:43:58.220597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.001 qpair failed and we were unable to recover it. 00:33:30.001 [2024-10-08 18:43:58.220797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.001 [2024-10-08 18:43:58.220900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.001 qpair failed and we were unable to recover it. 00:33:30.001 [2024-10-08 18:43:58.221211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.001 [2024-10-08 18:43:58.221280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.001 qpair failed and we were unable to recover it. 00:33:30.001 [2024-10-08 18:43:58.221536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.001 [2024-10-08 18:43:58.221603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.001 qpair failed and we were unable to recover it. 
00:33:30.001 [2024-10-08 18:43:58.221824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.001 [2024-10-08 18:43:58.221892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.001 qpair failed and we were unable to recover it. 00:33:30.001 [2024-10-08 18:43:58.222173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.001 [2024-10-08 18:43:58.222238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.001 qpair failed and we were unable to recover it. 00:33:30.001 [2024-10-08 18:43:58.222529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.001 [2024-10-08 18:43:58.222595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.001 qpair failed and we were unable to recover it. 00:33:30.001 [2024-10-08 18:43:58.222881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.001 [2024-10-08 18:43:58.222907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.001 qpair failed and we were unable to recover it. 00:33:30.001 [2024-10-08 18:43:58.223236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.001 [2024-10-08 18:43:58.223283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.001 qpair failed and we were unable to recover it. 00:33:30.001 [2024-10-08 18:43:58.223443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.001 [2024-10-08 18:43:58.223494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.001 qpair failed and we were unable to recover it. 00:33:30.001 [2024-10-08 18:43:58.223649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.001 [2024-10-08 18:43:58.223706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.001 qpair failed and we were unable to recover it. 00:33:30.001 [2024-10-08 18:43:58.223884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.001 [2024-10-08 18:43:58.223925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.001 qpair failed and we were unable to recover it. 00:33:30.001 [2024-10-08 18:43:58.224153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.001 [2024-10-08 18:43:58.224203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.001 qpair failed and we were unable to recover it. 00:33:30.001 [2024-10-08 18:43:58.224445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.001 [2024-10-08 18:43:58.224495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.001 qpair failed and we were unable to recover it. 
00:33:30.001 [2024-10-08 18:43:58.224711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.001 [2024-10-08 18:43:58.224739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.001 qpair failed and we were unable to recover it. 00:33:30.001 [2024-10-08 18:43:58.224896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.001 [2024-10-08 18:43:58.224951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.001 qpair failed and we were unable to recover it. 00:33:30.001 [2024-10-08 18:43:58.225152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.001 [2024-10-08 18:43:58.225199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.001 qpair failed and we were unable to recover it. 00:33:30.001 [2024-10-08 18:43:58.225359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.001 [2024-10-08 18:43:58.225410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.001 qpair failed and we were unable to recover it. 00:33:30.001 [2024-10-08 18:43:58.225558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.001 [2024-10-08 18:43:58.225581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.001 qpair failed and we were unable to recover it. 00:33:30.001 [2024-10-08 18:43:58.225711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.001 [2024-10-08 18:43:58.225751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.001 qpair failed and we were unable to recover it. 00:33:30.001 [2024-10-08 18:43:58.225873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.002 [2024-10-08 18:43:58.225935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.002 qpair failed and we were unable to recover it. 00:33:30.002 [2024-10-08 18:43:58.226223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.002 [2024-10-08 18:43:58.226273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.002 qpair failed and we were unable to recover it. 00:33:30.002 [2024-10-08 18:43:58.226451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.002 [2024-10-08 18:43:58.226475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.002 qpair failed and we were unable to recover it. 00:33:30.002 [2024-10-08 18:43:58.226585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.002 [2024-10-08 18:43:58.226610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.002 qpair failed and we were unable to recover it. 
00:33:30.002 [2024-10-08 18:43:58.226746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.002 [2024-10-08 18:43:58.226799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.002 qpair failed and we were unable to recover it. 00:33:30.002 [2024-10-08 18:43:58.226917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.002 [2024-10-08 18:43:58.226983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.002 qpair failed and we were unable to recover it. 00:33:30.002 [2024-10-08 18:43:58.227126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.002 [2024-10-08 18:43:58.227151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.002 qpair failed and we were unable to recover it. 00:33:30.002 [2024-10-08 18:43:58.227266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.002 [2024-10-08 18:43:58.227290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.002 qpair failed and we were unable to recover it. 00:33:30.002 [2024-10-08 18:43:58.227443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.002 [2024-10-08 18:43:58.227468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.002 qpair failed and we were unable to recover it. 00:33:30.002 [2024-10-08 18:43:58.227601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.002 [2024-10-08 18:43:58.227626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.002 qpair failed and we were unable to recover it. 00:33:30.002 [2024-10-08 18:43:58.227748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.002 [2024-10-08 18:43:58.227774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.002 qpair failed and we were unable to recover it. 00:33:30.002 [2024-10-08 18:43:58.227866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.002 [2024-10-08 18:43:58.227892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.002 qpair failed and we were unable to recover it. 00:33:30.002 [2024-10-08 18:43:58.228008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.002 [2024-10-08 18:43:58.228034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.002 qpair failed and we were unable to recover it. 00:33:30.002 [2024-10-08 18:43:58.228154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.002 [2024-10-08 18:43:58.228183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.002 qpair failed and we were unable to recover it. 
00:33:30.002 [2024-10-08 18:43:58.228342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.002 [2024-10-08 18:43:58.228381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.002 qpair failed and we were unable to recover it. 00:33:30.002 [2024-10-08 18:43:58.228517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.002 [2024-10-08 18:43:58.228541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.002 qpair failed and we were unable to recover it. 00:33:30.002 [2024-10-08 18:43:58.228693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.002 [2024-10-08 18:43:58.228719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.002 qpair failed and we were unable to recover it. 00:33:30.002 [2024-10-08 18:43:58.228816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.002 [2024-10-08 18:43:58.228842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.002 qpair failed and we were unable to recover it. 00:33:30.002 [2024-10-08 18:43:58.228992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.002 [2024-10-08 18:43:58.229031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.002 qpair failed and we were unable to recover it. 00:33:30.002 [2024-10-08 18:43:58.229187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.002 [2024-10-08 18:43:58.229212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.002 qpair failed and we were unable to recover it. 00:33:30.002 [2024-10-08 18:43:58.229348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.002 [2024-10-08 18:43:58.229372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.002 qpair failed and we were unable to recover it. 00:33:30.002 [2024-10-08 18:43:58.229509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.002 [2024-10-08 18:43:58.229534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.002 qpair failed and we were unable to recover it. 00:33:30.002 [2024-10-08 18:43:58.229675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.002 [2024-10-08 18:43:58.229701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.002 qpair failed and we were unable to recover it. 00:33:30.002 [2024-10-08 18:43:58.229816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.002 [2024-10-08 18:43:58.229842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.002 qpair failed and we were unable to recover it. 
00:33:30.002 [2024-10-08 18:43:58.229955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.002 [2024-10-08 18:43:58.229979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.002 qpair failed and we were unable to recover it. 00:33:30.002 [2024-10-08 18:43:58.230106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.002 [2024-10-08 18:43:58.230131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.002 qpair failed and we were unable to recover it. 00:33:30.002 [2024-10-08 18:43:58.230247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.002 [2024-10-08 18:43:58.230272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.002 qpair failed and we were unable to recover it. 00:33:30.002 [2024-10-08 18:43:58.230414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.002 [2024-10-08 18:43:58.230439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.002 qpair failed and we were unable to recover it. 00:33:30.002 [2024-10-08 18:43:58.230571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.002 [2024-10-08 18:43:58.230597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.002 qpair failed and we were unable to recover it. 00:33:30.002 [2024-10-08 18:43:58.230750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.002 [2024-10-08 18:43:58.230777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.002 qpair failed and we were unable to recover it. 00:33:30.002 [2024-10-08 18:43:58.230878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.002 [2024-10-08 18:43:58.230903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.002 qpair failed and we were unable to recover it. 00:33:30.002 [2024-10-08 18:43:58.231032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.002 [2024-10-08 18:43:58.231056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.002 qpair failed and we were unable to recover it. 00:33:30.002 [2024-10-08 18:43:58.231230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.002 [2024-10-08 18:43:58.231254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.002 qpair failed and we were unable to recover it. 00:33:30.002 [2024-10-08 18:43:58.231394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.002 [2024-10-08 18:43:58.231420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.002 qpair failed and we were unable to recover it. 
00:33:30.002 [2024-10-08 18:43:58.231527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.002 [2024-10-08 18:43:58.231553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420
00:33:30.002 qpair failed and we were unable to recover it.
[... the same three-record error sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it.") repeats continuously throughout this span, mostly for tqpair=0x7f18d0000b90 and briefly for tqpair=0x7f18d4000b90, always with addr=10.0.0.2, port=4420 ...]
00:33:30.008 [2024-10-08 18:43:58.270734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.008 [2024-10-08 18:43:58.270780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420
00:33:30.008 qpair failed and we were unable to recover it.
00:33:30.008 [2024-10-08 18:43:58.270892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.008 [2024-10-08 18:43:58.270928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.008 qpair failed and we were unable to recover it. 00:33:30.008 [2024-10-08 18:43:58.271075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.008 [2024-10-08 18:43:58.271099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.008 qpair failed and we were unable to recover it. 00:33:30.008 [2024-10-08 18:43:58.271230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.008 [2024-10-08 18:43:58.271255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.008 qpair failed and we were unable to recover it. 00:33:30.008 [2024-10-08 18:43:58.271363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.008 [2024-10-08 18:43:58.271388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.008 qpair failed and we were unable to recover it. 00:33:30.008 [2024-10-08 18:43:58.271602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.008 [2024-10-08 18:43:58.271628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.008 qpair failed and we were unable to recover it. 00:33:30.008 [2024-10-08 18:43:58.271764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.008 [2024-10-08 18:43:58.271790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.008 qpair failed and we were unable to recover it. 00:33:30.008 [2024-10-08 18:43:58.272012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.008 [2024-10-08 18:43:58.272036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.008 qpair failed and we were unable to recover it. 00:33:30.008 [2024-10-08 18:43:58.272169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.008 [2024-10-08 18:43:58.272197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.008 qpair failed and we were unable to recover it. 00:33:30.008 [2024-10-08 18:43:58.272347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.008 [2024-10-08 18:43:58.272386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.008 qpair failed and we were unable to recover it. 00:33:30.008 [2024-10-08 18:43:58.272556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.008 [2024-10-08 18:43:58.272581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.008 qpair failed and we were unable to recover it. 
00:33:30.008 [2024-10-08 18:43:58.272728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.008 [2024-10-08 18:43:58.272755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.008 qpair failed and we were unable to recover it. 00:33:30.008 [2024-10-08 18:43:58.272863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.008 [2024-10-08 18:43:58.272892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.008 qpair failed and we were unable to recover it. 00:33:30.008 [2024-10-08 18:43:58.273075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.008 [2024-10-08 18:43:58.273124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.008 qpair failed and we were unable to recover it. 00:33:30.008 [2024-10-08 18:43:58.273287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.008 [2024-10-08 18:43:58.273336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.008 qpair failed and we were unable to recover it. 00:33:30.008 [2024-10-08 18:43:58.273449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.008 [2024-10-08 18:43:58.273488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.008 qpair failed and we were unable to recover it. 00:33:30.008 [2024-10-08 18:43:58.273621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.008 [2024-10-08 18:43:58.273670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.008 qpair failed and we were unable to recover it. 00:33:30.008 [2024-10-08 18:43:58.273759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.008 [2024-10-08 18:43:58.273785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.008 qpair failed and we were unable to recover it. 00:33:30.008 [2024-10-08 18:43:58.273877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.008 [2024-10-08 18:43:58.273903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.008 qpair failed and we were unable to recover it. 00:33:30.008 [2024-10-08 18:43:58.274048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.008 [2024-10-08 18:43:58.274073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.008 qpair failed and we were unable to recover it. 00:33:30.008 [2024-10-08 18:43:58.274233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.008 [2024-10-08 18:43:58.274259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.008 qpair failed and we were unable to recover it. 
00:33:30.008 [2024-10-08 18:43:58.274408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-10-08 18:43:58.274434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 00:33:30.009 [2024-10-08 18:43:58.274563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-10-08 18:43:58.274603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 00:33:30.009 [2024-10-08 18:43:58.274725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-10-08 18:43:58.274766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 00:33:30.009 [2024-10-08 18:43:58.274916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-10-08 18:43:58.274943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 00:33:30.009 [2024-10-08 18:43:58.275106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-10-08 18:43:58.275133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 00:33:30.009 [2024-10-08 18:43:58.275257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-10-08 18:43:58.275298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 00:33:30.009 [2024-10-08 18:43:58.275434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-10-08 18:43:58.275461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 00:33:30.009 [2024-10-08 18:43:58.275593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-10-08 18:43:58.275619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 00:33:30.009 [2024-10-08 18:43:58.275743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-10-08 18:43:58.275770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 00:33:30.009 [2024-10-08 18:43:58.275863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-10-08 18:43:58.275889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 
00:33:30.009 [2024-10-08 18:43:58.276112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-10-08 18:43:58.276138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 00:33:30.009 [2024-10-08 18:43:58.276272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-10-08 18:43:58.276298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 00:33:30.009 [2024-10-08 18:43:58.276424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-10-08 18:43:58.276449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 00:33:30.009 [2024-10-08 18:43:58.276623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-10-08 18:43:58.276657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 00:33:30.009 [2024-10-08 18:43:58.276767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-10-08 18:43:58.276793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 00:33:30.009 [2024-10-08 18:43:58.276901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-10-08 18:43:58.276928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 00:33:30.009 [2024-10-08 18:43:58.277080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-10-08 18:43:58.277126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 00:33:30.009 [2024-10-08 18:43:58.277316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-10-08 18:43:58.277343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 00:33:30.009 [2024-10-08 18:43:58.277579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-10-08 18:43:58.277605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 00:33:30.009 [2024-10-08 18:43:58.277771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-10-08 18:43:58.277819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 
00:33:30.009 [2024-10-08 18:43:58.277981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-10-08 18:43:58.278045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 00:33:30.009 [2024-10-08 18:43:58.278198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-10-08 18:43:58.278244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 00:33:30.009 [2024-10-08 18:43:58.278331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-10-08 18:43:58.278357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 00:33:30.009 [2024-10-08 18:43:58.278595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-10-08 18:43:58.278636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 00:33:30.009 [2024-10-08 18:43:58.278806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-10-08 18:43:58.278854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 00:33:30.009 [2024-10-08 18:43:58.279002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-10-08 18:43:58.279049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 00:33:30.009 [2024-10-08 18:43:58.279209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-10-08 18:43:58.279241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 00:33:30.009 [2024-10-08 18:43:58.279381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-10-08 18:43:58.279406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 00:33:30.009 [2024-10-08 18:43:58.279537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-10-08 18:43:58.279564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 00:33:30.009 [2024-10-08 18:43:58.279708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-10-08 18:43:58.279759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 
00:33:30.009 [2024-10-08 18:43:58.279884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-10-08 18:43:58.279937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 00:33:30.009 [2024-10-08 18:43:58.280139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-10-08 18:43:58.280185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 00:33:30.009 [2024-10-08 18:43:58.280321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-10-08 18:43:58.280347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 00:33:30.009 [2024-10-08 18:43:58.280476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-10-08 18:43:58.280513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-10-08 18:43:58.280743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-10-08 18:43:58.280795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-10-08 18:43:58.280945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-10-08 18:43:58.281013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-10-08 18:43:58.281203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-10-08 18:43:58.281249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-10-08 18:43:58.281466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-10-08 18:43:58.281492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-10-08 18:43:58.281627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-10-08 18:43:58.281674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-10-08 18:43:58.281846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-10-08 18:43:58.281872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 
00:33:30.010 [2024-10-08 18:43:58.282002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-10-08 18:43:58.282028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-10-08 18:43:58.282187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-10-08 18:43:58.282214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-10-08 18:43:58.282342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-10-08 18:43:58.282368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-10-08 18:43:58.282467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-10-08 18:43:58.282494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-10-08 18:43:58.282738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-10-08 18:43:58.282788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-10-08 18:43:58.282899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-10-08 18:43:58.282925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-10-08 18:43:58.283096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-10-08 18:43:58.283137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-10-08 18:43:58.283285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-10-08 18:43:58.283310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-10-08 18:43:58.283413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-10-08 18:43:58.283440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-10-08 18:43:58.283548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-10-08 18:43:58.283574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 
00:33:30.010 [2024-10-08 18:43:58.283701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-10-08 18:43:58.283728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-10-08 18:43:58.283851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-10-08 18:43:58.283877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-10-08 18:43:58.284001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-10-08 18:43:58.284027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-10-08 18:43:58.284207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-10-08 18:43:58.284233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-10-08 18:43:58.284380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-10-08 18:43:58.284421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-10-08 18:43:58.284585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-10-08 18:43:58.284612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-10-08 18:43:58.284760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-10-08 18:43:58.284816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-10-08 18:43:58.284941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-10-08 18:43:58.285000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-10-08 18:43:58.285151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-10-08 18:43:58.285175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-10-08 18:43:58.285313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-10-08 18:43:58.285361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 
00:33:30.010 [2024-10-08 18:43:58.285494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-10-08 18:43:58.285534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-10-08 18:43:58.285673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-10-08 18:43:58.285700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-10-08 18:43:58.285821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-10-08 18:43:58.285847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-10-08 18:43:58.286012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-10-08 18:43:58.286038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-10-08 18:43:58.286228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-10-08 18:43:58.286254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-10-08 18:43:58.286347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-10-08 18:43:58.286388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-10-08 18:43:58.286635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-10-08 18:43:58.286669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-10-08 18:43:58.286778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-10-08 18:43:58.286826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-10-08 18:43:58.286983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-10-08 18:43:58.287031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-10-08 18:43:58.287167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-10-08 18:43:58.287203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 
00:33:30.010 [2024-10-08 18:43:58.287333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-10-08 18:43:58.287360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-10-08 18:43:58.287494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.011 [2024-10-08 18:43:58.287520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.011 qpair failed and we were unable to recover it. 00:33:30.011 [2024-10-08 18:43:58.287708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.011 [2024-10-08 18:43:58.287750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.011 qpair failed and we were unable to recover it. 00:33:30.011 [2024-10-08 18:43:58.287907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.011 [2024-10-08 18:43:58.287944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.011 qpair failed and we were unable to recover it. 00:33:30.011 [2024-10-08 18:43:58.288175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.011 [2024-10-08 18:43:58.288201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.011 qpair failed and we were unable to recover it. 00:33:30.011 [2024-10-08 18:43:58.288374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.011 [2024-10-08 18:43:58.288400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.011 qpair failed and we were unable to recover it. 00:33:30.011 [2024-10-08 18:43:58.288568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.011 [2024-10-08 18:43:58.288594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.011 qpair failed and we were unable to recover it. 00:33:30.011 [2024-10-08 18:43:58.288764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.011 [2024-10-08 18:43:58.288812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.011 qpair failed and we were unable to recover it. 00:33:30.011 [2024-10-08 18:43:58.288959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.011 [2024-10-08 18:43:58.289010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.011 qpair failed and we were unable to recover it. 00:33:30.011 [2024-10-08 18:43:58.289133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.011 [2024-10-08 18:43:58.289180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.011 qpair failed and we were unable to recover it. 
00:33:30.011 [2024-10-08 18:43:58.289264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.011 [2024-10-08 18:43:58.289291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.011 qpair failed and we were unable to recover it. 00:33:30.011 [2024-10-08 18:43:58.289507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.011 [2024-10-08 18:43:58.289533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.011 qpair failed and we were unable to recover it. 00:33:30.011 [2024-10-08 18:43:58.289722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.011 [2024-10-08 18:43:58.289770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.011 qpair failed and we were unable to recover it. 00:33:30.011 [2024-10-08 18:43:58.289962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.011 [2024-10-08 18:43:58.290009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.011 qpair failed and we were unable to recover it. 00:33:30.011 [2024-10-08 18:43:58.290202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.011 [2024-10-08 18:43:58.290250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.011 qpair failed and we were unable to recover it. 00:33:30.011 [2024-10-08 18:43:58.290425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.011 [2024-10-08 18:43:58.290450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.011 qpair failed and we were unable to recover it. 00:33:30.011 [2024-10-08 18:43:58.290577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.011 [2024-10-08 18:43:58.290603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.011 qpair failed and we were unable to recover it. 00:33:30.011 [2024-10-08 18:43:58.290729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.011 [2024-10-08 18:43:58.290778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.011 qpair failed and we were unable to recover it. 00:33:30.011 [2024-10-08 18:43:58.290900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.011 [2024-10-08 18:43:58.290951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.011 qpair failed and we were unable to recover it. 00:33:30.011 [2024-10-08 18:43:58.291091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.011 [2024-10-08 18:43:58.291116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.011 qpair failed and we were unable to recover it. 
00:33:30.011 [2024-10-08 18:43:58.291230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.011 [2024-10-08 18:43:58.291257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.011 qpair failed and we were unable to recover it. 00:33:30.011 [2024-10-08 18:43:58.291405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.011 [2024-10-08 18:43:58.291430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.011 qpair failed and we were unable to recover it. 00:33:30.011 [2024-10-08 18:43:58.291550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.011 [2024-10-08 18:43:58.291576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.011 qpair failed and we were unable to recover it. 00:33:30.011 [2024-10-08 18:43:58.291693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.011 [2024-10-08 18:43:58.291721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.011 qpair failed and we were unable to recover it. 00:33:30.011 [2024-10-08 18:43:58.291832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.011 [2024-10-08 18:43:58.291867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.011 qpair failed and we were unable to recover it. 00:33:30.011 [2024-10-08 18:43:58.292028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.011 [2024-10-08 18:43:58.292075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.011 qpair failed and we were unable to recover it. 00:33:30.011 [2024-10-08 18:43:58.292200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.011 [2024-10-08 18:43:58.292245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.011 qpair failed and we were unable to recover it. 00:33:30.011 [2024-10-08 18:43:58.292430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.011 [2024-10-08 18:43:58.292456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.011 qpair failed and we were unable to recover it. 00:33:30.011 [2024-10-08 18:43:58.292572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.011 [2024-10-08 18:43:58.292598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.011 qpair failed and we were unable to recover it. 00:33:30.011 [2024-10-08 18:43:58.292724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.011 [2024-10-08 18:43:58.292752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.011 qpair failed and we were unable to recover it. 
00:33:30.011 [2024-10-08 18:43:58.292866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.011 [2024-10-08 18:43:58.292929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.011 qpair failed and we were unable to recover it. 00:33:30.011 [2024-10-08 18:43:58.293075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.011 [2024-10-08 18:43:58.293133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.011 qpair failed and we were unable to recover it. 00:33:30.011 [2024-10-08 18:43:58.293307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.011 [2024-10-08 18:43:58.293334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.011 qpair failed and we were unable to recover it. 00:33:30.011 [2024-10-08 18:43:58.293503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.011 [2024-10-08 18:43:58.293529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.011 qpair failed and we were unable to recover it. 00:33:30.011 [2024-10-08 18:43:58.293673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.011 [2024-10-08 18:43:58.293706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.011 qpair failed and we were unable to recover it. 00:33:30.011 [2024-10-08 18:43:58.293820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.011 [2024-10-08 18:43:58.293870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.011 qpair failed and we were unable to recover it. 00:33:30.011 [2024-10-08 18:43:58.294018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.011 [2024-10-08 18:43:58.294066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.011 qpair failed and we were unable to recover it. 00:33:30.011 [2024-10-08 18:43:58.294180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.011 [2024-10-08 18:43:58.294206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.011 qpair failed and we were unable to recover it. 00:33:30.011 [2024-10-08 18:43:58.294303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.011 [2024-10-08 18:43:58.294330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.011 qpair failed and we were unable to recover it. 00:33:30.011 [2024-10-08 18:43:58.294497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.011 [2024-10-08 18:43:58.294523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.011 qpair failed and we were unable to recover it. 
00:33:30.011 [2024-10-08 18:43:58.294648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.011 [2024-10-08 18:43:58.294681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.011 qpair failed and we were unable to recover it. 00:33:30.012 [2024-10-08 18:43:58.294797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-10-08 18:43:58.294823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 00:33:30.012 [2024-10-08 18:43:58.294944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-10-08 18:43:58.294970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 00:33:30.012 [2024-10-08 18:43:58.295097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-10-08 18:43:58.295138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 00:33:30.012 [2024-10-08 18:43:58.295277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-10-08 18:43:58.295302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 00:33:30.012 [2024-10-08 18:43:58.295431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-10-08 18:43:58.295462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 00:33:30.012 [2024-10-08 18:43:58.295601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-10-08 18:43:58.295626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 00:33:30.012 [2024-10-08 18:43:58.295787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-10-08 18:43:58.295814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 00:33:30.012 [2024-10-08 18:43:58.295916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-10-08 18:43:58.295942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 00:33:30.012 [2024-10-08 18:43:58.296032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-10-08 18:43:58.296058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 
00:33:30.012 [2024-10-08 18:43:58.296148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-10-08 18:43:58.296174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 00:33:30.012 [2024-10-08 18:43:58.296293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-10-08 18:43:58.296319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 00:33:30.012 [2024-10-08 18:43:58.296421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-10-08 18:43:58.296447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 00:33:30.012 [2024-10-08 18:43:58.296578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-10-08 18:43:58.296604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 00:33:30.012 [2024-10-08 18:43:58.296751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-10-08 18:43:58.296778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 00:33:30.012 [2024-10-08 18:43:58.296917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-10-08 18:43:58.296943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 00:33:30.012 [2024-10-08 18:43:58.297071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-10-08 18:43:58.297097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 00:33:30.012 [2024-10-08 18:43:58.297239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-10-08 18:43:58.297265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 00:33:30.012 [2024-10-08 18:43:58.297395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-10-08 18:43:58.297421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 00:33:30.012 [2024-10-08 18:43:58.297575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-10-08 18:43:58.297601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 
00:33:30.012 [2024-10-08 18:43:58.297731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-10-08 18:43:58.297758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 00:33:30.012 [2024-10-08 18:43:58.297875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-10-08 18:43:58.297901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 00:33:30.012 [2024-10-08 18:43:58.298029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-10-08 18:43:58.298055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 00:33:30.012 [2024-10-08 18:43:58.298158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-10-08 18:43:58.298183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 00:33:30.012 [2024-10-08 18:43:58.298340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-10-08 18:43:58.298366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 00:33:30.012 [2024-10-08 18:43:58.298496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-10-08 18:43:58.298523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 00:33:30.012 [2024-10-08 18:43:58.298642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-10-08 18:43:58.298689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 00:33:30.012 [2024-10-08 18:43:58.298783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-10-08 18:43:58.298810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 00:33:30.012 [2024-10-08 18:43:58.298947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-10-08 18:43:58.298983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 00:33:30.012 [2024-10-08 18:43:58.299114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-10-08 18:43:58.299140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 
00:33:30.012 [2024-10-08 18:43:58.299309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-10-08 18:43:58.299335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 00:33:30.012 [2024-10-08 18:43:58.299489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-10-08 18:43:58.299515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 00:33:30.012 [2024-10-08 18:43:58.299658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-10-08 18:43:58.299688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 00:33:30.012 [2024-10-08 18:43:58.299776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-10-08 18:43:58.299803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 00:33:30.012 [2024-10-08 18:43:58.299951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-10-08 18:43:58.299976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 00:33:30.012 [2024-10-08 18:43:58.300080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-10-08 18:43:58.300139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 00:33:30.012 [2024-10-08 18:43:58.300304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.012 [2024-10-08 18:43:58.300330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.012 qpair failed and we were unable to recover it. 00:33:30.012 [2024-10-08 18:43:58.300445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-10-08 18:43:58.300471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 00:33:30.013 [2024-10-08 18:43:58.300609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-10-08 18:43:58.300635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 00:33:30.013 [2024-10-08 18:43:58.300793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-10-08 18:43:58.300829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 
00:33:30.013 [2024-10-08 18:43:58.301026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-10-08 18:43:58.301052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 00:33:30.013 [2024-10-08 18:43:58.301198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-10-08 18:43:58.301242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 00:33:30.013 [2024-10-08 18:43:58.301366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-10-08 18:43:58.301392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 00:33:30.013 [2024-10-08 18:43:58.301595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-10-08 18:43:58.301621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 00:33:30.013 [2024-10-08 18:43:58.301732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-10-08 18:43:58.301759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 00:33:30.013 [2024-10-08 18:43:58.301859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-10-08 18:43:58.301885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 00:33:30.013 [2024-10-08 18:43:58.302042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-10-08 18:43:58.302068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 00:33:30.013 [2024-10-08 18:43:58.302300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-10-08 18:43:58.302356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 00:33:30.013 [2024-10-08 18:43:58.302476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-10-08 18:43:58.302507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 00:33:30.013 [2024-10-08 18:43:58.302725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-10-08 18:43:58.302774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 
00:33:30.013 [2024-10-08 18:43:58.302879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-10-08 18:43:58.302932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 00:33:30.013 [2024-10-08 18:43:58.303131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-10-08 18:43:58.303157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 00:33:30.013 [2024-10-08 18:43:58.303309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-10-08 18:43:58.303335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 00:33:30.013 [2024-10-08 18:43:58.303464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-10-08 18:43:58.303491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 00:33:30.013 [2024-10-08 18:43:58.303601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-10-08 18:43:58.303627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 00:33:30.013 [2024-10-08 18:43:58.303743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-10-08 18:43:58.303770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 00:33:30.013 [2024-10-08 18:43:58.303895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-10-08 18:43:58.303921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 00:33:30.013 [2024-10-08 18:43:58.304113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-10-08 18:43:58.304139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 00:33:30.013 [2024-10-08 18:43:58.304320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-10-08 18:43:58.304347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 00:33:30.013 [2024-10-08 18:43:58.304575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-10-08 18:43:58.304601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 
00:33:30.013 [2024-10-08 18:43:58.304720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-10-08 18:43:58.304756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 00:33:30.013 [2024-10-08 18:43:58.304885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-10-08 18:43:58.304939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 00:33:30.013 [2024-10-08 18:43:58.305085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-10-08 18:43:58.305129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 00:33:30.013 [2024-10-08 18:43:58.305349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-10-08 18:43:58.305396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 00:33:30.013 [2024-10-08 18:43:58.305592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-10-08 18:43:58.305618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 00:33:30.013 [2024-10-08 18:43:58.305766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-10-08 18:43:58.305825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 00:33:30.013 [2024-10-08 18:43:58.306076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-10-08 18:43:58.306127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 00:33:30.013 [2024-10-08 18:43:58.306282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-10-08 18:43:58.306326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 00:33:30.013 [2024-10-08 18:43:58.306457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-10-08 18:43:58.306483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 00:33:30.013 [2024-10-08 18:43:58.306631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-10-08 18:43:58.306667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 
00:33:30.013 [2024-10-08 18:43:58.306803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-10-08 18:43:58.306851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 00:33:30.013 [2024-10-08 18:43:58.306976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-10-08 18:43:58.307025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 00:33:30.013 [2024-10-08 18:43:58.307146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-10-08 18:43:58.307172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 00:33:30.013 [2024-10-08 18:43:58.307341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-10-08 18:43:58.307367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 00:33:30.013 [2024-10-08 18:43:58.307557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-10-08 18:43:58.307598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 00:33:30.013 [2024-10-08 18:43:58.307735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-10-08 18:43:58.307762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 00:33:30.013 [2024-10-08 18:43:58.307860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-10-08 18:43:58.307886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 00:33:30.013 [2024-10-08 18:43:58.308011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.013 [2024-10-08 18:43:58.308037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.013 qpair failed and we were unable to recover it. 00:33:30.013 [2024-10-08 18:43:58.308180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.014 [2024-10-08 18:43:58.308206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.014 qpair failed and we were unable to recover it. 00:33:30.014 [2024-10-08 18:43:58.308373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.014 [2024-10-08 18:43:58.308398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.014 qpair failed and we were unable to recover it. 
00:33:30.014 [2024-10-08 18:43:58.308557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.014 [2024-10-08 18:43:58.308583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.014 qpair failed and we were unable to recover it. 00:33:30.014 [2024-10-08 18:43:58.308715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.014 [2024-10-08 18:43:58.308757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.014 qpair failed and we were unable to recover it. 00:33:30.014 [2024-10-08 18:43:58.308891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.014 [2024-10-08 18:43:58.308941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.014 qpair failed and we were unable to recover it. 00:33:30.014 [2024-10-08 18:43:58.309071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.014 [2024-10-08 18:43:58.309117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.014 qpair failed and we were unable to recover it. 00:33:30.014 [2024-10-08 18:43:58.309228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.014 [2024-10-08 18:43:58.309291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.014 qpair failed and we were unable to recover it. 00:33:30.014 [2024-10-08 18:43:58.309435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.014 [2024-10-08 18:43:58.309461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.014 qpair failed and we were unable to recover it. 00:33:30.014 [2024-10-08 18:43:58.309633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.014 [2024-10-08 18:43:58.309668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.014 qpair failed and we were unable to recover it. 00:33:30.014 [2024-10-08 18:43:58.309811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.014 [2024-10-08 18:43:58.309857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.014 qpair failed and we were unable to recover it. 00:33:30.014 [2024-10-08 18:43:58.310050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.014 [2024-10-08 18:43:58.310097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.014 qpair failed and we were unable to recover it. 00:33:30.014 [2024-10-08 18:43:58.310247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.014 [2024-10-08 18:43:58.310274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.014 qpair failed and we were unable to recover it. 
00:33:30.014 [2024-10-08 18:43:58.310491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.014 [2024-10-08 18:43:58.310518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.014 qpair failed and we were unable to recover it. 00:33:30.014 [2024-10-08 18:43:58.310623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.014 [2024-10-08 18:43:58.310656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.014 qpair failed and we were unable to recover it. 00:33:30.014 [2024-10-08 18:43:58.310805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.014 [2024-10-08 18:43:58.310852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.014 qpair failed and we were unable to recover it. 00:33:30.014 [2024-10-08 18:43:58.311050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.014 [2024-10-08 18:43:58.311077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.014 qpair failed and we were unable to recover it. 00:33:30.014 [2024-10-08 18:43:58.311286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.014 [2024-10-08 18:43:58.311344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.014 qpair failed and we were unable to recover it. 00:33:30.014 [2024-10-08 18:43:58.311517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.014 [2024-10-08 18:43:58.311544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.014 qpair failed and we were unable to recover it. 00:33:30.014 [2024-10-08 18:43:58.311736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.014 [2024-10-08 18:43:58.311791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.014 qpair failed and we were unable to recover it. 00:33:30.014 [2024-10-08 18:43:58.311945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.014 [2024-10-08 18:43:58.312000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.014 qpair failed and we were unable to recover it. 00:33:30.014 [2024-10-08 18:43:58.312108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.014 [2024-10-08 18:43:58.312155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.014 qpair failed and we were unable to recover it. 00:33:30.014 [2024-10-08 18:43:58.312281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.014 [2024-10-08 18:43:58.312338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.014 qpair failed and we were unable to recover it. 
00:33:30.014 [2024-10-08 18:43:58.312508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.014 [2024-10-08 18:43:58.312534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.014 qpair failed and we were unable to recover it. 00:33:30.014 [2024-10-08 18:43:58.312671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.014 [2024-10-08 18:43:58.312701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.014 qpair failed and we were unable to recover it. 00:33:30.014 [2024-10-08 18:43:58.312820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.014 [2024-10-08 18:43:58.312846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.014 qpair failed and we were unable to recover it. 00:33:30.014 [2024-10-08 18:43:58.312996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.014 [2024-10-08 18:43:58.313023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.014 qpair failed and we were unable to recover it. 00:33:30.014 [2024-10-08 18:43:58.313196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.014 [2024-10-08 18:43:58.313255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.014 qpair failed and we were unable to recover it. 00:33:30.014 [2024-10-08 18:43:58.313385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.014 [2024-10-08 18:43:58.313411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.014 qpair failed and we were unable to recover it. 00:33:30.014 [2024-10-08 18:43:58.313539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.014 [2024-10-08 18:43:58.313569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.014 qpair failed and we were unable to recover it. 00:33:30.014 [2024-10-08 18:43:58.313745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.014 [2024-10-08 18:43:58.313795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.014 qpair failed and we were unable to recover it. 00:33:30.014 [2024-10-08 18:43:58.313940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.014 [2024-10-08 18:43:58.313987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.014 qpair failed and we were unable to recover it. 00:33:30.014 [2024-10-08 18:43:58.314090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.014 [2024-10-08 18:43:58.314132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.014 qpair failed and we were unable to recover it. 
00:33:30.014 [2024-10-08 18:43:58.314269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.014 [2024-10-08 18:43:58.314295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.014 qpair failed and we were unable to recover it. 00:33:30.014 [2024-10-08 18:43:58.314490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.014 [2024-10-08 18:43:58.314516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.014 qpair failed and we were unable to recover it. 00:33:30.014 [2024-10-08 18:43:58.314641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.014 [2024-10-08 18:43:58.314674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.014 qpair failed and we were unable to recover it. 00:33:30.014 [2024-10-08 18:43:58.314798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.014 [2024-10-08 18:43:58.314825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.014 qpair failed and we were unable to recover it. 00:33:30.014 [2024-10-08 18:43:58.314986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.014 [2024-10-08 18:43:58.315012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.014 qpair failed and we were unable to recover it. 00:33:30.014 [2024-10-08 18:43:58.315159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.014 [2024-10-08 18:43:58.315185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.014 qpair failed and we were unable to recover it. 00:33:30.014 [2024-10-08 18:43:58.315320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.015 [2024-10-08 18:43:58.315346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.015 qpair failed and we were unable to recover it. 00:33:30.015 [2024-10-08 18:43:58.315461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.015 [2024-10-08 18:43:58.315487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.015 qpair failed and we were unable to recover it. 00:33:30.015 [2024-10-08 18:43:58.315631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.015 [2024-10-08 18:43:58.315666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.015 qpair failed and we were unable to recover it. 00:33:30.015 [2024-10-08 18:43:58.315798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.015 [2024-10-08 18:43:58.315825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.015 qpair failed and we were unable to recover it. 
00:33:30.015 [2024-10-08 18:43:58.316000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.015 [2024-10-08 18:43:58.316026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.015 qpair failed and we were unable to recover it. 00:33:30.015 [2024-10-08 18:43:58.316132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.015 [2024-10-08 18:43:58.316158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.015 qpair failed and we were unable to recover it. 00:33:30.015 [2024-10-08 18:43:58.316373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.015 [2024-10-08 18:43:58.316400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.015 qpair failed and we were unable to recover it. 00:33:30.015 [2024-10-08 18:43:58.316501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.015 [2024-10-08 18:43:58.316527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.015 qpair failed and we were unable to recover it. 00:33:30.015 [2024-10-08 18:43:58.316675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.015 [2024-10-08 18:43:58.316702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.015 qpair failed and we were unable to recover it. 00:33:30.015 [2024-10-08 18:43:58.316829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.015 [2024-10-08 18:43:58.316855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.015 qpair failed and we were unable to recover it. 00:33:30.015 [2024-10-08 18:43:58.316979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.015 [2024-10-08 18:43:58.317005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.015 qpair failed and we were unable to recover it. 00:33:30.015 [2024-10-08 18:43:58.317104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.015 [2024-10-08 18:43:58.317130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.015 qpair failed and we were unable to recover it. 00:33:30.015 [2024-10-08 18:43:58.317260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.015 [2024-10-08 18:43:58.317286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.015 qpair failed and we were unable to recover it. 00:33:30.015 [2024-10-08 18:43:58.317414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.015 [2024-10-08 18:43:58.317439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.015 qpair failed and we were unable to recover it. 
00:33:30.015 [2024-10-08 18:43:58.317567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.015 [2024-10-08 18:43:58.317594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.015 qpair failed and we were unable to recover it. 00:33:30.015 [2024-10-08 18:43:58.317703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.015 [2024-10-08 18:43:58.317756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.015 qpair failed and we were unable to recover it. 00:33:30.015 [2024-10-08 18:43:58.317920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.015 [2024-10-08 18:43:58.317969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.015 qpair failed and we were unable to recover it. 00:33:30.015 [2024-10-08 18:43:58.318104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.015 [2024-10-08 18:43:58.318152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.015 qpair failed and we were unable to recover it. 00:33:30.015 [2024-10-08 18:43:58.318373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.015 [2024-10-08 18:43:58.318399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.015 qpair failed and we were unable to recover it. 00:33:30.015 [2024-10-08 18:43:58.318579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.015 [2024-10-08 18:43:58.318605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.015 qpair failed and we were unable to recover it. 00:33:30.015 [2024-10-08 18:43:58.318749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.015 [2024-10-08 18:43:58.318796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.015 qpair failed and we were unable to recover it. 00:33:30.015 [2024-10-08 18:43:58.318908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.015 [2024-10-08 18:43:58.318956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.015 qpair failed and we were unable to recover it. 00:33:30.015 [2024-10-08 18:43:58.319106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.015 [2024-10-08 18:43:58.319157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.015 qpair failed and we were unable to recover it. 00:33:30.015 [2024-10-08 18:43:58.319352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.015 [2024-10-08 18:43:58.319379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.015 qpair failed and we were unable to recover it. 
00:33:30.015 [2024-10-08 18:43:58.319496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.015 [2024-10-08 18:43:58.319522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.015 qpair failed and we were unable to recover it. 00:33:30.015 [2024-10-08 18:43:58.319662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.015 [2024-10-08 18:43:58.319694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.015 qpair failed and we were unable to recover it. 00:33:30.015 [2024-10-08 18:43:58.319802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.015 [2024-10-08 18:43:58.319853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.015 qpair failed and we were unable to recover it. 00:33:30.015 [2024-10-08 18:43:58.319964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.015 [2024-10-08 18:43:58.319990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.015 qpair failed and we were unable to recover it. 00:33:30.015 [2024-10-08 18:43:58.320126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.015 [2024-10-08 18:43:58.320153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.015 qpair failed and we were unable to recover it. 00:33:30.015 [2024-10-08 18:43:58.320295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.015 [2024-10-08 18:43:58.320329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.015 qpair failed and we were unable to recover it. 00:33:30.015 [2024-10-08 18:43:58.320535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.015 [2024-10-08 18:43:58.320568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.015 qpair failed and we were unable to recover it. 00:33:30.015 [2024-10-08 18:43:58.320735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.015 [2024-10-08 18:43:58.320762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.015 qpair failed and we were unable to recover it. 00:33:30.015 [2024-10-08 18:43:58.320890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.015 [2024-10-08 18:43:58.320917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.015 qpair failed and we were unable to recover it. 00:33:30.015 [2024-10-08 18:43:58.321106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.015 [2024-10-08 18:43:58.321132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.015 qpair failed and we were unable to recover it. 
00:33:30.015 [2024-10-08 18:43:58.321362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.015 [2024-10-08 18:43:58.321389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.015 qpair failed and we were unable to recover it. 00:33:30.015 [2024-10-08 18:43:58.321546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.015 [2024-10-08 18:43:58.321572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.015 qpair failed and we were unable to recover it. 00:33:30.015 [2024-10-08 18:43:58.321743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.015 [2024-10-08 18:43:58.321792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.015 qpair failed and we were unable to recover it. 00:33:30.015 [2024-10-08 18:43:58.321896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.015 [2024-10-08 18:43:58.321932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.015 qpair failed and we were unable to recover it. 00:33:30.015 [2024-10-08 18:43:58.322104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.015 [2024-10-08 18:43:58.322148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.015 qpair failed and we were unable to recover it. 00:33:30.015 [2024-10-08 18:43:58.322288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.015 [2024-10-08 18:43:58.322336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.015 qpair failed and we were unable to recover it. 00:33:30.015 [2024-10-08 18:43:58.322455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.015 [2024-10-08 18:43:58.322481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.015 qpair failed and we were unable to recover it. 00:33:30.015 [2024-10-08 18:43:58.322604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.015 [2024-10-08 18:43:58.322631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.015 qpair failed and we were unable to recover it. 00:33:30.015 [2024-10-08 18:43:58.322769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.015 [2024-10-08 18:43:58.322818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.015 qpair failed and we were unable to recover it. 00:33:30.015 [2024-10-08 18:43:58.322945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.016 [2024-10-08 18:43:58.322970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.016 qpair failed and we were unable to recover it. 
00:33:30.016 [2024-10-08 18:43:58.323202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.016 [2024-10-08 18:43:58.323229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.016 qpair failed and we were unable to recover it. 00:33:30.016 [2024-10-08 18:43:58.323406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.016 [2024-10-08 18:43:58.323432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.016 qpair failed and we were unable to recover it. 00:33:30.016 [2024-10-08 18:43:58.323558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.016 [2024-10-08 18:43:58.323584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.016 qpair failed and we were unable to recover it. 00:33:30.016 [2024-10-08 18:43:58.323701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.016 [2024-10-08 18:43:58.323728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.016 qpair failed and we were unable to recover it. 00:33:30.016 [2024-10-08 18:43:58.323844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.016 [2024-10-08 18:43:58.323891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.016 qpair failed and we were unable to recover it. 00:33:30.016 [2024-10-08 18:43:58.324041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.016 [2024-10-08 18:43:58.324075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.016 qpair failed and we were unable to recover it. 00:33:30.016 [2024-10-08 18:43:58.324191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.016 [2024-10-08 18:43:58.324227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.016 qpair failed and we were unable to recover it. 00:33:30.016 [2024-10-08 18:43:58.324372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.016 [2024-10-08 18:43:58.324398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.016 qpair failed and we were unable to recover it. 00:33:30.016 [2024-10-08 18:43:58.324492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.016 [2024-10-08 18:43:58.324518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.016 qpair failed and we were unable to recover it. 00:33:30.016 [2024-10-08 18:43:58.324617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.016 [2024-10-08 18:43:58.324643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.016 qpair failed and we were unable to recover it. 
00:33:30.016 [2024-10-08 18:43:58.324777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.016 [2024-10-08 18:43:58.324803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.016 qpair failed and we were unable to recover it. 00:33:30.016 [2024-10-08 18:43:58.324901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.016 [2024-10-08 18:43:58.324928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.016 qpair failed and we were unable to recover it. 00:33:30.016 [2024-10-08 18:43:58.325121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.016 [2024-10-08 18:43:58.325148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.016 qpair failed and we were unable to recover it. 00:33:30.016 [2024-10-08 18:43:58.325324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.016 [2024-10-08 18:43:58.325350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.016 qpair failed and we were unable to recover it. 00:33:30.016 [2024-10-08 18:43:58.325540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.016 [2024-10-08 18:43:58.325567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.016 qpair failed and we were unable to recover it. 00:33:30.016 [2024-10-08 18:43:58.325739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.016 [2024-10-08 18:43:58.325788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.016 qpair failed and we were unable to recover it. 00:33:30.016 [2024-10-08 18:43:58.325973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.016 [2024-10-08 18:43:58.326032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.016 qpair failed and we were unable to recover it. 00:33:30.016 [2024-10-08 18:43:58.326179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.016 [2024-10-08 18:43:58.326230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.016 qpair failed and we were unable to recover it. 00:33:30.016 [2024-10-08 18:43:58.326321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.016 [2024-10-08 18:43:58.326348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.016 qpair failed and we were unable to recover it. 00:33:30.016 [2024-10-08 18:43:58.326540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.016 [2024-10-08 18:43:58.326566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.016 qpair failed and we were unable to recover it. 
00:33:30.016 [2024-10-08 18:43:58.326656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.016 [2024-10-08 18:43:58.326682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.016 qpair failed and we were unable to recover it. 00:33:30.016 [2024-10-08 18:43:58.326810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.016 [2024-10-08 18:43:58.326860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.016 qpair failed and we were unable to recover it. 00:33:30.016 [2024-10-08 18:43:58.326997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.016 [2024-10-08 18:43:58.327023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.016 qpair failed and we were unable to recover it. 00:33:30.016 [2024-10-08 18:43:58.327193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.016 [2024-10-08 18:43:58.327219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.016 qpair failed and we were unable to recover it. 00:33:30.016 [2024-10-08 18:43:58.327384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.016 [2024-10-08 18:43:58.327411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.016 qpair failed and we were unable to recover it. 00:33:30.016 [2024-10-08 18:43:58.327578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.016 [2024-10-08 18:43:58.327604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.016 qpair failed and we were unable to recover it. 00:33:30.016 [2024-10-08 18:43:58.327751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.016 [2024-10-08 18:43:58.327803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.016 qpair failed and we were unable to recover it. 00:33:30.016 [2024-10-08 18:43:58.327919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.016 [2024-10-08 18:43:58.327967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.016 qpair failed and we were unable to recover it. 00:33:30.016 [2024-10-08 18:43:58.328098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.016 [2024-10-08 18:43:58.328133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.016 qpair failed and we were unable to recover it. 00:33:30.016 [2024-10-08 18:43:58.328299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.016 [2024-10-08 18:43:58.328329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.016 qpair failed and we were unable to recover it. 
00:33:30.016 [2024-10-08 18:43:58.328510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.016 [2024-10-08 18:43:58.328536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.016 qpair failed and we were unable to recover it. 00:33:30.016 [2024-10-08 18:43:58.328729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.016 [2024-10-08 18:43:58.328756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.016 qpair failed and we were unable to recover it. 00:33:30.016 [2024-10-08 18:43:58.328850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.016 [2024-10-08 18:43:58.328876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.016 qpair failed and we were unable to recover it. 00:33:30.016 [2024-10-08 18:43:58.329004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.016 [2024-10-08 18:43:58.329030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.016 qpair failed and we were unable to recover it. 00:33:30.016 [2024-10-08 18:43:58.329176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.016 [2024-10-08 18:43:58.329202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.016 qpair failed and we were unable to recover it. 00:33:30.016 [2024-10-08 18:43:58.329340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.016 [2024-10-08 18:43:58.329366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.016 qpair failed and we were unable to recover it. 00:33:30.016 [2024-10-08 18:43:58.329542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.016 [2024-10-08 18:43:58.329568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.016 qpair failed and we were unable to recover it. 00:33:30.016 [2024-10-08 18:43:58.329722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.016 [2024-10-08 18:43:58.329761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.016 qpair failed and we were unable to recover it. 00:33:30.016 [2024-10-08 18:43:58.329857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.016 [2024-10-08 18:43:58.329883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.016 qpair failed and we were unable to recover it. 00:33:30.016 [2024-10-08 18:43:58.329980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.016 [2024-10-08 18:43:58.330016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.016 qpair failed and we were unable to recover it. 
00:33:30.016 [2024-10-08 18:43:58.330182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.016 [2024-10-08 18:43:58.330208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.016 qpair failed and we were unable to recover it. 00:33:30.016 [2024-10-08 18:43:58.330394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.017 [2024-10-08 18:43:58.330424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.017 qpair failed and we were unable to recover it. 00:33:30.017 [2024-10-08 18:43:58.330644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.017 [2024-10-08 18:43:58.330677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.017 qpair failed and we were unable to recover it. 00:33:30.017 [2024-10-08 18:43:58.330818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.017 [2024-10-08 18:43:58.330890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.017 qpair failed and we were unable to recover it. 00:33:30.017 [2024-10-08 18:43:58.331024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.017 [2024-10-08 18:43:58.331069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.017 qpair failed and we were unable to recover it. 00:33:30.017 [2024-10-08 18:43:58.331219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.017 [2024-10-08 18:43:58.331266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.017 qpair failed and we were unable to recover it. 00:33:30.017 [2024-10-08 18:43:58.331415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.017 [2024-10-08 18:43:58.331440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.017 qpair failed and we were unable to recover it. 00:33:30.017 [2024-10-08 18:43:58.331665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.017 [2024-10-08 18:43:58.331692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.017 qpair failed and we were unable to recover it. 00:33:30.017 [2024-10-08 18:43:58.331800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.017 [2024-10-08 18:43:58.331847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.017 qpair failed and we were unable to recover it. 00:33:30.017 [2024-10-08 18:43:58.332056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.017 [2024-10-08 18:43:58.332109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.017 qpair failed and we were unable to recover it. 
00:33:30.017 [2024-10-08 18:43:58.332254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.017 [2024-10-08 18:43:58.332303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.017 qpair failed and we were unable to recover it. 00:33:30.017 [2024-10-08 18:43:58.332409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.017 [2024-10-08 18:43:58.332435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.017 qpair failed and we were unable to recover it. 00:33:30.017 [2024-10-08 18:43:58.332630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.017 [2024-10-08 18:43:58.332663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.017 qpair failed and we were unable to recover it. 00:33:30.017 [2024-10-08 18:43:58.332808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.017 [2024-10-08 18:43:58.332857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.017 qpair failed and we were unable to recover it. 00:33:30.017 [2024-10-08 18:43:58.332987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.017 [2024-10-08 18:43:58.333034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.017 qpair failed and we were unable to recover it. 00:33:30.017 [2024-10-08 18:43:58.333161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.017 [2024-10-08 18:43:58.333207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.017 qpair failed and we were unable to recover it. 00:33:30.017 [2024-10-08 18:43:58.333378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.017 [2024-10-08 18:43:58.333405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.017 qpair failed and we were unable to recover it. 00:33:30.017 [2024-10-08 18:43:58.333531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.017 [2024-10-08 18:43:58.333557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.017 qpair failed and we were unable to recover it. 00:33:30.017 [2024-10-08 18:43:58.333787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.017 [2024-10-08 18:43:58.333814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.017 qpair failed and we were unable to recover it. 00:33:30.017 [2024-10-08 18:43:58.333947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.017 [2024-10-08 18:43:58.333994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.017 qpair failed and we were unable to recover it. 
00:33:30.017 [2024-10-08 18:43:58.334143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.017 [2024-10-08 18:43:58.334169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.017 qpair failed and we were unable to recover it. 00:33:30.017 [2024-10-08 18:43:58.334300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.017 [2024-10-08 18:43:58.334326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.017 qpair failed and we were unable to recover it. 00:33:30.017 [2024-10-08 18:43:58.334549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.017 [2024-10-08 18:43:58.334576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.017 qpair failed and we were unable to recover it. 00:33:30.017 [2024-10-08 18:43:58.334712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.017 [2024-10-08 18:43:58.334764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.017 qpair failed and we were unable to recover it. 00:33:30.017 [2024-10-08 18:43:58.334864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.017 [2024-10-08 18:43:58.334918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.017 qpair failed and we were unable to recover it. 00:33:30.017 [2024-10-08 18:43:58.335036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.017 [2024-10-08 18:43:58.335072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.017 qpair failed and we were unable to recover it. 00:33:30.017 [2024-10-08 18:43:58.335234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.017 [2024-10-08 18:43:58.335275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.017 qpair failed and we were unable to recover it. 00:33:30.017 [2024-10-08 18:43:58.335468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.017 [2024-10-08 18:43:58.335501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.017 qpair failed and we were unable to recover it. 00:33:30.017 [2024-10-08 18:43:58.335605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.017 [2024-10-08 18:43:58.335668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.017 qpair failed and we were unable to recover it. 00:33:30.017 [2024-10-08 18:43:58.335795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.017 [2024-10-08 18:43:58.335821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.017 qpair failed and we were unable to recover it. 
00:33:30.017 [2024-10-08 18:43:58.335945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.017 [2024-10-08 18:43:58.335971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.017 qpair failed and we were unable to recover it. 00:33:30.017 [2024-10-08 18:43:58.336122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.017 [2024-10-08 18:43:58.336149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.017 qpair failed and we were unable to recover it. 00:33:30.017 [2024-10-08 18:43:58.336269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.017 [2024-10-08 18:43:58.336295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.017 qpair failed and we were unable to recover it. 00:33:30.017 [2024-10-08 18:43:58.336413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.017 [2024-10-08 18:43:58.336439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.017 qpair failed and we were unable to recover it. 00:33:30.017 [2024-10-08 18:43:58.336527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.017 [2024-10-08 18:43:58.336554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.017 qpair failed and we were unable to recover it. 00:33:30.017 [2024-10-08 18:43:58.336681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.017 [2024-10-08 18:43:58.336708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.017 qpair failed and we were unable to recover it. 00:33:30.017 [2024-10-08 18:43:58.336803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.017 [2024-10-08 18:43:58.336830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.017 qpair failed and we were unable to recover it. 00:33:30.017 [2024-10-08 18:43:58.336958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.017 [2024-10-08 18:43:58.336984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.017 qpair failed and we were unable to recover it. 00:33:30.017 [2024-10-08 18:43:58.337176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.017 [2024-10-08 18:43:58.337203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.017 qpair failed and we were unable to recover it. 00:33:30.017 [2024-10-08 18:43:58.337422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.017 [2024-10-08 18:43:58.337449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.017 qpair failed and we were unable to recover it. 
00:33:30.017 [2024-10-08 18:43:58.337640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.017 [2024-10-08 18:43:58.337676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.017 qpair failed and we were unable to recover it. 00:33:30.017 [2024-10-08 18:43:58.337797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.017 [2024-10-08 18:43:58.337844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.017 qpair failed and we were unable to recover it. 00:33:30.017 [2024-10-08 18:43:58.337978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.018 [2024-10-08 18:43:58.338041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.018 qpair failed and we were unable to recover it. 00:33:30.018 [2024-10-08 18:43:58.338287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.018 [2024-10-08 18:43:58.338336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.018 qpair failed and we were unable to recover it. 00:33:30.018 [2024-10-08 18:43:58.338556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.018 [2024-10-08 18:43:58.338582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.018 qpair failed and we were unable to recover it. 00:33:30.018 [2024-10-08 18:43:58.338730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.018 [2024-10-08 18:43:58.338778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.018 qpair failed and we were unable to recover it. 00:33:30.018 [2024-10-08 18:43:58.338894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.018 [2024-10-08 18:43:58.338943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.018 qpair failed and we were unable to recover it. 00:33:30.018 [2024-10-08 18:43:58.339103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.018 [2024-10-08 18:43:58.339150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.018 qpair failed and we were unable to recover it. 00:33:30.018 [2024-10-08 18:43:58.339330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.018 [2024-10-08 18:43:58.339374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.018 qpair failed and we were unable to recover it. 00:33:30.018 [2024-10-08 18:43:58.339520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.018 [2024-10-08 18:43:58.339546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.018 qpair failed and we were unable to recover it. 
00:33:30.018 [2024-10-08 18:43:58.339734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.018 [2024-10-08 18:43:58.339780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.018 qpair failed and we were unable to recover it. 00:33:30.018 [2024-10-08 18:43:58.339935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.018 [2024-10-08 18:43:58.339962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.018 qpair failed and we were unable to recover it. 00:33:30.018 [2024-10-08 18:43:58.340122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.018 [2024-10-08 18:43:58.340148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.018 qpair failed and we were unable to recover it. 00:33:30.018 [2024-10-08 18:43:58.340294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.018 [2024-10-08 18:43:58.340341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.018 qpair failed and we were unable to recover it. 00:33:30.018 [2024-10-08 18:43:58.340481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.018 [2024-10-08 18:43:58.340507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.018 qpair failed and we were unable to recover it. 00:33:30.018 [2024-10-08 18:43:58.340647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.018 [2024-10-08 18:43:58.340680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.018 qpair failed and we were unable to recover it. 00:33:30.018 [2024-10-08 18:43:58.340801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.018 [2024-10-08 18:43:58.340827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.018 qpair failed and we were unable to recover it. 00:33:30.018 [2024-10-08 18:43:58.341043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.018 [2024-10-08 18:43:58.341069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.018 qpair failed and we were unable to recover it. 00:33:30.018 [2024-10-08 18:43:58.341249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.018 [2024-10-08 18:43:58.341277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.018 qpair failed and we were unable to recover it. 00:33:30.018 [2024-10-08 18:43:58.341437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.018 [2024-10-08 18:43:58.341463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.018 qpair failed and we were unable to recover it. 
00:33:30.018 [2024-10-08 18:43:58.341593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.018 [2024-10-08 18:43:58.341619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.018 qpair failed and we were unable to recover it. 00:33:30.018 [2024-10-08 18:43:58.341730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.018 [2024-10-08 18:43:58.341756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.018 qpair failed and we were unable to recover it. 00:33:30.018 [2024-10-08 18:43:58.341887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.018 [2024-10-08 18:43:58.341914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.018 qpair failed and we were unable to recover it. 00:33:30.018 [2024-10-08 18:43:58.342151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.018 [2024-10-08 18:43:58.342177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.018 qpair failed and we were unable to recover it. 00:33:30.018 [2024-10-08 18:43:58.342369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.018 [2024-10-08 18:43:58.342421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.018 qpair failed and we were unable to recover it. 00:33:30.018 [2024-10-08 18:43:58.342648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.018 [2024-10-08 18:43:58.342682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.018 qpair failed and we were unable to recover it. 00:33:30.018 [2024-10-08 18:43:58.342796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.018 [2024-10-08 18:43:58.342848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.018 qpair failed and we were unable to recover it. 00:33:30.018 [2024-10-08 18:43:58.342998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.018 [2024-10-08 18:43:58.343047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.018 qpair failed and we were unable to recover it. 00:33:30.018 [2024-10-08 18:43:58.343209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.018 [2024-10-08 18:43:58.343257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.018 qpair failed and we were unable to recover it. 00:33:30.018 [2024-10-08 18:43:58.343414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.018 [2024-10-08 18:43:58.343461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.018 qpair failed and we were unable to recover it. 
00:33:30.018 [2024-10-08 18:43:58.343584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.018 [2024-10-08 18:43:58.343610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.018 qpair failed and we were unable to recover it. 00:33:30.018 [2024-10-08 18:43:58.343750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.018 [2024-10-08 18:43:58.343801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.018 qpair failed and we were unable to recover it. 00:33:30.018 [2024-10-08 18:43:58.343951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.018 [2024-10-08 18:43:58.343978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.018 qpair failed and we were unable to recover it. 00:33:30.018 [2024-10-08 18:43:58.344215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.018 [2024-10-08 18:43:58.344240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.018 qpair failed and we were unable to recover it. 00:33:30.018 [2024-10-08 18:43:58.344406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.018 [2024-10-08 18:43:58.344432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.018 qpair failed and we were unable to recover it. 00:33:30.018 [2024-10-08 18:43:58.344596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.018 [2024-10-08 18:43:58.344622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.018 qpair failed and we were unable to recover it. 00:33:30.018 [2024-10-08 18:43:58.344769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.018 [2024-10-08 18:43:58.344816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.018 qpair failed and we were unable to recover it. 00:33:30.018 [2024-10-08 18:43:58.344910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.018 [2024-10-08 18:43:58.344936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.018 qpair failed and we were unable to recover it. 00:33:30.018 [2024-10-08 18:43:58.345057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.018 [2024-10-08 18:43:58.345110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.018 qpair failed and we were unable to recover it. 00:33:30.018 [2024-10-08 18:43:58.345283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.018 [2024-10-08 18:43:58.345336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.018 qpair failed and we were unable to recover it. 
00:33:30.019 [2024-10-08 18:43:58.345435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.019 [2024-10-08 18:43:58.345462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.019 qpair failed and we were unable to recover it. 00:33:30.019 [2024-10-08 18:43:58.345547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.019 [2024-10-08 18:43:58.345573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.019 qpair failed and we were unable to recover it. 00:33:30.019 [2024-10-08 18:43:58.345726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.019 [2024-10-08 18:43:58.345753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.019 qpair failed and we were unable to recover it. 00:33:30.019 [2024-10-08 18:43:58.345846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.019 [2024-10-08 18:43:58.345872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.019 qpair failed and we were unable to recover it. 00:33:30.019 [2024-10-08 18:43:58.345967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.019 [2024-10-08 18:43:58.345993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.019 qpair failed and we were unable to recover it. 00:33:30.019 [2024-10-08 18:43:58.346167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.019 [2024-10-08 18:43:58.346193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.019 qpair failed and we were unable to recover it. 00:33:30.019 [2024-10-08 18:43:58.346312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.019 [2024-10-08 18:43:58.346339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.019 qpair failed and we were unable to recover it. 00:33:30.019 [2024-10-08 18:43:58.346501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.019 [2024-10-08 18:43:58.346527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.019 qpair failed and we were unable to recover it. 00:33:30.019 [2024-10-08 18:43:58.346714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.019 [2024-10-08 18:43:58.346741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.019 qpair failed and we were unable to recover it. 00:33:30.019 [2024-10-08 18:43:58.346889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.019 [2024-10-08 18:43:58.346915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.019 qpair failed and we were unable to recover it. 
00:33:30.019 [2024-10-08 18:43:58.347140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.019 [2024-10-08 18:43:58.347166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.019 qpair failed and we were unable to recover it. 00:33:30.019 [2024-10-08 18:43:58.347391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.019 [2024-10-08 18:43:58.347417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.019 qpair failed and we were unable to recover it. 00:33:30.019 [2024-10-08 18:43:58.347558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.019 [2024-10-08 18:43:58.347585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.019 qpair failed and we were unable to recover it. 00:33:30.019 [2024-10-08 18:43:58.347755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.019 [2024-10-08 18:43:58.347807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.019 qpair failed and we were unable to recover it. 00:33:30.019 [2024-10-08 18:43:58.347930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.019 [2024-10-08 18:43:58.347980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.019 qpair failed and we were unable to recover it. 00:33:30.019 [2024-10-08 18:43:58.348148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.019 [2024-10-08 18:43:58.348195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.019 qpair failed and we were unable to recover it. 00:33:30.019 [2024-10-08 18:43:58.348346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.019 [2024-10-08 18:43:58.348381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.019 qpair failed and we were unable to recover it. 00:33:30.019 [2024-10-08 18:43:58.348540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.019 [2024-10-08 18:43:58.348575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.019 qpair failed and we were unable to recover it. 00:33:30.019 [2024-10-08 18:43:58.348782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.019 [2024-10-08 18:43:58.348830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.019 qpair failed and we were unable to recover it. 00:33:30.019 [2024-10-08 18:43:58.349074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.019 [2024-10-08 18:43:58.349135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.019 qpair failed and we were unable to recover it. 
00:33:30.019 [2024-10-08 18:43:58.349289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.019 [2024-10-08 18:43:58.349339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.019 qpair failed and we were unable to recover it. 00:33:30.019 [2024-10-08 18:43:58.349541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.019 [2024-10-08 18:43:58.349566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.019 qpair failed and we were unable to recover it. 00:33:30.019 [2024-10-08 18:43:58.349739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.019 [2024-10-08 18:43:58.349792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.019 qpair failed and we were unable to recover it. 00:33:30.019 [2024-10-08 18:43:58.349916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.019 [2024-10-08 18:43:58.349967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.019 qpair failed and we were unable to recover it. 00:33:30.019 [2024-10-08 18:43:58.350110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.019 [2024-10-08 18:43:58.350153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.019 qpair failed and we were unable to recover it. 00:33:30.019 [2024-10-08 18:43:58.350290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.019 [2024-10-08 18:43:58.350337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.019 qpair failed and we were unable to recover it. 00:33:30.019 [2024-10-08 18:43:58.350509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.019 [2024-10-08 18:43:58.350536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.019 qpair failed and we were unable to recover it. 00:33:30.019 [2024-10-08 18:43:58.350745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.019 [2024-10-08 18:43:58.350773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.019 qpair failed and we were unable to recover it. 00:33:30.019 [2024-10-08 18:43:58.350879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.019 [2024-10-08 18:43:58.350905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.019 qpair failed and we were unable to recover it. 00:33:30.019 [2024-10-08 18:43:58.351017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.019 [2024-10-08 18:43:58.351044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.019 qpair failed and we were unable to recover it. 
00:33:30.019 [2024-10-08 18:43:58.351179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.019 [2024-10-08 18:43:58.351205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.019 qpair failed and we were unable to recover it. 00:33:30.019 [2024-10-08 18:43:58.351374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.019 [2024-10-08 18:43:58.351400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.019 qpair failed and we were unable to recover it. 00:33:30.019 [2024-10-08 18:43:58.351507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.019 [2024-10-08 18:43:58.351534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.019 qpair failed and we were unable to recover it. 00:33:30.019 [2024-10-08 18:43:58.351709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.019 [2024-10-08 18:43:58.351735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.019 qpair failed and we were unable to recover it. 00:33:30.019 [2024-10-08 18:43:58.351885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.019 [2024-10-08 18:43:58.351922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.019 qpair failed and we were unable to recover it. 00:33:30.019 [2024-10-08 18:43:58.352054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.019 [2024-10-08 18:43:58.352081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.019 qpair failed and we were unable to recover it. 00:33:30.019 [2024-10-08 18:43:58.352308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.019 [2024-10-08 18:43:58.352334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.019 qpair failed and we were unable to recover it. 00:33:30.019 [2024-10-08 18:43:58.352577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.019 [2024-10-08 18:43:58.352604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.019 qpair failed and we were unable to recover it. 00:33:30.019 [2024-10-08 18:43:58.352812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.019 [2024-10-08 18:43:58.352862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.019 qpair failed and we were unable to recover it. 00:33:30.019 [2024-10-08 18:43:58.353003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.019 [2024-10-08 18:43:58.353045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.019 qpair failed and we were unable to recover it. 
00:33:30.019 [2024-10-08 18:43:58.353209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.019 [2024-10-08 18:43:58.353258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.019 qpair failed and we were unable to recover it. 00:33:30.019 [2024-10-08 18:43:58.353415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.019 [2024-10-08 18:43:58.353442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.019 qpair failed and we were unable to recover it. 00:33:30.019 [2024-10-08 18:43:58.353621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.019 [2024-10-08 18:43:58.353647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.019 qpair failed and we were unable to recover it. 00:33:30.019 [2024-10-08 18:43:58.353778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.019 [2024-10-08 18:43:58.353826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.019 qpair failed and we were unable to recover it. 00:33:30.019 [2024-10-08 18:43:58.354027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.019 [2024-10-08 18:43:58.354076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.019 qpair failed and we were unable to recover it. 00:33:30.019 [2024-10-08 18:43:58.354251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.020 [2024-10-08 18:43:58.354303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.020 qpair failed and we were unable to recover it. 00:33:30.020 [2024-10-08 18:43:58.354464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.020 [2024-10-08 18:43:58.354491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.020 qpair failed and we were unable to recover it. 00:33:30.020 [2024-10-08 18:43:58.354723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.020 [2024-10-08 18:43:58.354751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.020 qpair failed and we were unable to recover it. 00:33:30.020 [2024-10-08 18:43:58.354919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.020 [2024-10-08 18:43:58.354964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.020 qpair failed and we were unable to recover it. 00:33:30.020 [2024-10-08 18:43:58.355132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.020 [2024-10-08 18:43:58.355184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.020 qpair failed and we were unable to recover it. 
00:33:30.020 [2024-10-08 18:43:58.355375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.020 [2024-10-08 18:43:58.355422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.020 qpair failed and we were unable to recover it. 00:33:30.020 [2024-10-08 18:43:58.355585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.020 [2024-10-08 18:43:58.355611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.020 qpair failed and we were unable to recover it. 00:33:30.020 [2024-10-08 18:43:58.355747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.020 [2024-10-08 18:43:58.355796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.020 qpair failed and we were unable to recover it. 00:33:30.020 [2024-10-08 18:43:58.356001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.020 [2024-10-08 18:43:58.356063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.020 qpair failed and we were unable to recover it. 00:33:30.020 [2024-10-08 18:43:58.356313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.020 [2024-10-08 18:43:58.356360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.020 qpair failed and we were unable to recover it. 00:33:30.020 [2024-10-08 18:43:58.356487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.020 [2024-10-08 18:43:58.356513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.020 qpair failed and we were unable to recover it. 00:33:30.020 [2024-10-08 18:43:58.356738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.020 [2024-10-08 18:43:58.356785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.020 qpair failed and we were unable to recover it. 00:33:30.020 [2024-10-08 18:43:58.356908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.020 [2024-10-08 18:43:58.356959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.020 qpair failed and we were unable to recover it. 00:33:30.020 [2024-10-08 18:43:58.357086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.020 [2024-10-08 18:43:58.357112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.020 qpair failed and we were unable to recover it. 00:33:30.020 [2024-10-08 18:43:58.357235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.020 [2024-10-08 18:43:58.357260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.020 qpair failed and we were unable to recover it. 
00:33:30.020 [2024-10-08 18:43:58.357392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.020 [2024-10-08 18:43:58.357418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420
00:33:30.020 qpair failed and we were unable to recover it.
[... the same pair of errors (posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 followed by nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420) repeats for every connection attempt timestamped 2024-10-08 18:43:58.357520 through 18:43:58.407130, each attempt ending with "qpair failed and we were unable to recover it." ...]
00:33:30.024 [2024-10-08 18:43:58.407373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.024 [2024-10-08 18:43:58.407424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420
00:33:30.024 qpair failed and we were unable to recover it.
00:33:30.024 [2024-10-08 18:43:58.407546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.024 [2024-10-08 18:43:58.407570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.024 qpair failed and we were unable to recover it. 00:33:30.024 [2024-10-08 18:43:58.407768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.024 [2024-10-08 18:43:58.407822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.024 qpair failed and we were unable to recover it. 00:33:30.024 [2024-10-08 18:43:58.407964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.024 [2024-10-08 18:43:58.408013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.024 qpair failed and we were unable to recover it. 00:33:30.024 [2024-10-08 18:43:58.408160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.025 [2024-10-08 18:43:58.408209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.025 qpair failed and we were unable to recover it. 00:33:30.025 [2024-10-08 18:43:58.408370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.025 [2024-10-08 18:43:58.408418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.025 qpair failed and we were unable to recover it. 00:33:30.025 [2024-10-08 18:43:58.408544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.025 [2024-10-08 18:43:58.408582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.025 qpair failed and we were unable to recover it. 00:33:30.025 [2024-10-08 18:43:58.408762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.025 [2024-10-08 18:43:58.408832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.025 qpair failed and we were unable to recover it. 00:33:30.025 [2024-10-08 18:43:58.408999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.025 [2024-10-08 18:43:58.409040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.025 qpair failed and we were unable to recover it. 00:33:30.025 [2024-10-08 18:43:58.409261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.025 [2024-10-08 18:43:58.409311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.025 qpair failed and we were unable to recover it. 00:33:30.025 [2024-10-08 18:43:58.409479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.025 [2024-10-08 18:43:58.409503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.025 qpair failed and we were unable to recover it. 
00:33:30.025 [2024-10-08 18:43:58.409607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.025 [2024-10-08 18:43:58.409632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.025 qpair failed and we were unable to recover it. 00:33:30.025 [2024-10-08 18:43:58.409849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.025 [2024-10-08 18:43:58.409900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.025 qpair failed and we were unable to recover it. 00:33:30.025 [2024-10-08 18:43:58.410138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.025 [2024-10-08 18:43:58.410187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.025 qpair failed and we were unable to recover it. 00:33:30.025 [2024-10-08 18:43:58.410376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.025 [2024-10-08 18:43:58.410418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.025 qpair failed and we were unable to recover it. 00:33:30.025 [2024-10-08 18:43:58.410564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.025 [2024-10-08 18:43:58.410588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.025 qpair failed and we were unable to recover it. 00:33:30.025 [2024-10-08 18:43:58.410779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.025 [2024-10-08 18:43:58.410836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.025 qpair failed and we were unable to recover it. 00:33:30.025 [2024-10-08 18:43:58.411096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.025 [2024-10-08 18:43:58.411143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.025 qpair failed and we were unable to recover it. 00:33:30.025 [2024-10-08 18:43:58.411370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.025 [2024-10-08 18:43:58.411420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.025 qpair failed and we were unable to recover it. 00:33:30.025 [2024-10-08 18:43:58.411670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.025 [2024-10-08 18:43:58.411695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.025 qpair failed and we were unable to recover it. 00:33:30.025 [2024-10-08 18:43:58.411950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.025 [2024-10-08 18:43:58.412003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.025 qpair failed and we were unable to recover it. 
00:33:30.025 [2024-10-08 18:43:58.412171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.025 [2024-10-08 18:43:58.412222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.025 qpair failed and we were unable to recover it. 00:33:30.025 [2024-10-08 18:43:58.412389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.025 [2024-10-08 18:43:58.412437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.025 qpair failed and we were unable to recover it. 00:33:30.025 [2024-10-08 18:43:58.412632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.025 [2024-10-08 18:43:58.412687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.025 qpair failed and we were unable to recover it. 00:33:30.025 [2024-10-08 18:43:58.412932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.025 [2024-10-08 18:43:58.412995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.025 qpair failed and we were unable to recover it. 00:33:30.025 [2024-10-08 18:43:58.413193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.025 [2024-10-08 18:43:58.413236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.025 qpair failed and we were unable to recover it. 00:33:30.025 [2024-10-08 18:43:58.413427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.025 [2024-10-08 18:43:58.413477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.025 qpair failed and we were unable to recover it. 00:33:30.025 [2024-10-08 18:43:58.413613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.025 [2024-10-08 18:43:58.413637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.025 qpair failed and we were unable to recover it. 00:33:30.025 [2024-10-08 18:43:58.413864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.025 [2024-10-08 18:43:58.413914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.025 qpair failed and we were unable to recover it. 00:33:30.025 [2024-10-08 18:43:58.414139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.025 [2024-10-08 18:43:58.414189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.025 qpair failed and we were unable to recover it. 00:33:30.025 [2024-10-08 18:43:58.414403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.025 [2024-10-08 18:43:58.414453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.025 qpair failed and we were unable to recover it. 
00:33:30.025 [2024-10-08 18:43:58.414658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.025 [2024-10-08 18:43:58.414683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.025 qpair failed and we were unable to recover it. 00:33:30.025 [2024-10-08 18:43:58.414901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.025 [2024-10-08 18:43:58.414941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.025 qpair failed and we were unable to recover it. 00:33:30.025 [2024-10-08 18:43:58.415105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.025 [2024-10-08 18:43:58.415155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.025 qpair failed and we were unable to recover it. 00:33:30.025 [2024-10-08 18:43:58.415330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.025 [2024-10-08 18:43:58.415380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.025 qpair failed and we were unable to recover it. 00:33:30.025 [2024-10-08 18:43:58.415606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-10-08 18:43:58.415630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 00:33:30.026 [2024-10-08 18:43:58.415811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-10-08 18:43:58.415838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 00:33:30.026 [2024-10-08 18:43:58.416008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-10-08 18:43:58.416060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 00:33:30.026 [2024-10-08 18:43:58.416214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-10-08 18:43:58.416264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 00:33:30.026 [2024-10-08 18:43:58.416414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-10-08 18:43:58.416437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 00:33:30.026 [2024-10-08 18:43:58.416576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-10-08 18:43:58.416601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 
00:33:30.026 [2024-10-08 18:43:58.416763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-10-08 18:43:58.416789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 00:33:30.026 [2024-10-08 18:43:58.416929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-10-08 18:43:58.416955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 00:33:30.026 [2024-10-08 18:43:58.417147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-10-08 18:43:58.417171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 00:33:30.026 [2024-10-08 18:43:58.417338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-10-08 18:43:58.417378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 00:33:30.026 [2024-10-08 18:43:58.417501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-10-08 18:43:58.417540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 00:33:30.026 [2024-10-08 18:43:58.417721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-10-08 18:43:58.417761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 00:33:30.026 [2024-10-08 18:43:58.417931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-10-08 18:43:58.417963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 00:33:30.026 [2024-10-08 18:43:58.418171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-10-08 18:43:58.418195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 00:33:30.026 [2024-10-08 18:43:58.418404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-10-08 18:43:58.418428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 00:33:30.026 [2024-10-08 18:43:58.418627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-10-08 18:43:58.418657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 
00:33:30.026 [2024-10-08 18:43:58.418886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-10-08 18:43:58.418932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 00:33:30.026 [2024-10-08 18:43:58.419139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-10-08 18:43:58.419185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 00:33:30.026 [2024-10-08 18:43:58.419400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-10-08 18:43:58.419445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 00:33:30.026 [2024-10-08 18:43:58.419658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-10-08 18:43:58.419684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 00:33:30.026 [2024-10-08 18:43:58.419977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-10-08 18:43:58.420036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 00:33:30.026 [2024-10-08 18:43:58.420263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-10-08 18:43:58.420312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 00:33:30.026 [2024-10-08 18:43:58.420488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-10-08 18:43:58.420540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 00:33:30.026 [2024-10-08 18:43:58.420770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-10-08 18:43:58.420797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 00:33:30.026 [2024-10-08 18:43:58.420982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-10-08 18:43:58.421041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 00:33:30.026 [2024-10-08 18:43:58.421266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-10-08 18:43:58.421312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 
00:33:30.026 [2024-10-08 18:43:58.421447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-10-08 18:43:58.421490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 00:33:30.026 [2024-10-08 18:43:58.421711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-10-08 18:43:58.421736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 00:33:30.026 [2024-10-08 18:43:58.421946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-10-08 18:43:58.421994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 00:33:30.026 [2024-10-08 18:43:58.422247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-10-08 18:43:58.422304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 00:33:30.026 [2024-10-08 18:43:58.422521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-10-08 18:43:58.422545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 00:33:30.026 [2024-10-08 18:43:58.422855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-10-08 18:43:58.422907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 00:33:30.026 [2024-10-08 18:43:58.423145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-10-08 18:43:58.423196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 00:33:30.026 [2024-10-08 18:43:58.423432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-10-08 18:43:58.423479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 00:33:30.026 [2024-10-08 18:43:58.423706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-10-08 18:43:58.423732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 00:33:30.026 [2024-10-08 18:43:58.423848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-10-08 18:43:58.423873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 
00:33:30.027 [2024-10-08 18:43:58.424035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-10-08 18:43:58.424088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 00:33:30.027 [2024-10-08 18:43:58.424239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-10-08 18:43:58.424290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 00:33:30.027 [2024-10-08 18:43:58.424464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-10-08 18:43:58.424488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 00:33:30.027 [2024-10-08 18:43:58.424682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-10-08 18:43:58.424724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 00:33:30.027 [2024-10-08 18:43:58.424976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-10-08 18:43:58.425024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 00:33:30.027 [2024-10-08 18:43:58.425206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-10-08 18:43:58.425257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 00:33:30.027 [2024-10-08 18:43:58.425426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-10-08 18:43:58.425450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 00:33:30.027 [2024-10-08 18:43:58.425603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-10-08 18:43:58.425628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 00:33:30.027 [2024-10-08 18:43:58.425842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-10-08 18:43:58.425895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 00:33:30.027 [2024-10-08 18:43:58.426107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-10-08 18:43:58.426154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 
00:33:30.027 [2024-10-08 18:43:58.426394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-10-08 18:43:58.426444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 00:33:30.027 [2024-10-08 18:43:58.426568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-10-08 18:43:58.426593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 00:33:30.027 [2024-10-08 18:43:58.426733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-10-08 18:43:58.426759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 00:33:30.027 [2024-10-08 18:43:58.426965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-10-08 18:43:58.427015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 00:33:30.027 [2024-10-08 18:43:58.427245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-10-08 18:43:58.427293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 00:33:30.027 [2024-10-08 18:43:58.427441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-10-08 18:43:58.427465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 00:33:30.027 [2024-10-08 18:43:58.427622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-10-08 18:43:58.427647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 00:33:30.027 [2024-10-08 18:43:58.427898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-10-08 18:43:58.427947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 00:33:30.027 [2024-10-08 18:43:58.428171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-10-08 18:43:58.428222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 00:33:30.027 [2024-10-08 18:43:58.428418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-10-08 18:43:58.428461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 
00:33:30.027 [2024-10-08 18:43:58.428718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-10-08 18:43:58.428743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 00:33:30.027 [2024-10-08 18:43:58.428875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-10-08 18:43:58.428926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 00:33:30.027 [2024-10-08 18:43:58.429161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-10-08 18:43:58.429208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 00:33:30.027 [2024-10-08 18:43:58.429462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-10-08 18:43:58.429511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 00:33:30.027 [2024-10-08 18:43:58.429714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-10-08 18:43:58.429739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 00:33:30.027 [2024-10-08 18:43:58.429967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-10-08 18:43:58.430020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 00:33:30.027 [2024-10-08 18:43:58.430164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-10-08 18:43:58.430188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 00:33:30.027 [2024-10-08 18:43:58.430407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-10-08 18:43:58.430459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 00:33:30.027 [2024-10-08 18:43:58.430633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-10-08 18:43:58.430677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 00:33:30.027 [2024-10-08 18:43:58.430858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-10-08 18:43:58.430909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 
00:33:30.027 [2024-10-08 18:43:58.431151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-10-08 18:43:58.431201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 00:33:30.027 [2024-10-08 18:43:58.431401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-10-08 18:43:58.431459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 00:33:30.027 [2024-10-08 18:43:58.431644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-10-08 18:43:58.431688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 00:33:30.027 [2024-10-08 18:43:58.431846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-10-08 18:43:58.431875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 00:33:30.027 [2024-10-08 18:43:58.432076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-10-08 18:43:58.432120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 00:33:30.027 [2024-10-08 18:43:58.432451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-10-08 18:43:58.432514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 00:33:30.027 [2024-10-08 18:43:58.432613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.028 [2024-10-08 18:43:58.432657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.028 qpair failed and we were unable to recover it. 00:33:30.028 [2024-10-08 18:43:58.432816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.028 [2024-10-08 18:43:58.432868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.028 qpair failed and we were unable to recover it. 00:33:30.028 [2024-10-08 18:43:58.433050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.028 [2024-10-08 18:43:58.433092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.028 qpair failed and we were unable to recover it. 00:33:30.028 [2024-10-08 18:43:58.433363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.028 [2024-10-08 18:43:58.433407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.028 qpair failed and we were unable to recover it. 
00:33:30.028 [2024-10-08 18:43:58.433550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.028 [2024-10-08 18:43:58.433574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.028 qpair failed and we were unable to recover it. 00:33:30.028 [2024-10-08 18:43:58.433716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.028 [2024-10-08 18:43:58.433785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.028 qpair failed and we were unable to recover it. 00:33:30.028 [2024-10-08 18:43:58.433993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.028 [2024-10-08 18:43:58.434047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.028 qpair failed and we were unable to recover it. 00:33:30.028 [2024-10-08 18:43:58.434222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.028 [2024-10-08 18:43:58.434274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.028 qpair failed and we were unable to recover it. 00:33:30.028 [2024-10-08 18:43:58.434465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.028 [2024-10-08 18:43:58.434489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.028 qpair failed and we were unable to recover it. 00:33:30.028 [2024-10-08 18:43:58.434617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.028 [2024-10-08 18:43:58.434684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.028 qpair failed and we were unable to recover it. 00:33:30.028 [2024-10-08 18:43:58.434926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.028 [2024-10-08 18:43:58.434970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.028 qpair failed and we were unable to recover it. 00:33:30.028 [2024-10-08 18:43:58.435174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.028 [2024-10-08 18:43:58.435224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.028 qpair failed and we were unable to recover it. 00:33:30.028 [2024-10-08 18:43:58.435424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.028 [2024-10-08 18:43:58.435472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.028 qpair failed and we were unable to recover it. 00:33:30.028 [2024-10-08 18:43:58.435676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.028 [2024-10-08 18:43:58.435701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.028 qpair failed and we were unable to recover it. 
00:33:30.028 [2024-10-08 18:43:58.435875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.028 [2024-10-08 18:43:58.435925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.028 qpair failed and we were unable to recover it. 00:33:30.028 [2024-10-08 18:43:58.436140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.028 [2024-10-08 18:43:58.436191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.028 qpair failed and we were unable to recover it. 00:33:30.028 [2024-10-08 18:43:58.436418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.028 [2024-10-08 18:43:58.436469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.028 qpair failed and we were unable to recover it. 00:33:30.028 [2024-10-08 18:43:58.436562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.028 [2024-10-08 18:43:58.436601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.028 qpair failed and we were unable to recover it. 00:33:30.028 [2024-10-08 18:43:58.436746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.028 [2024-10-08 18:43:58.436773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.028 qpair failed and we were unable to recover it. 00:33:30.028 [2024-10-08 18:43:58.436983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.028 [2024-10-08 18:43:58.437038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.028 qpair failed and we were unable to recover it. 00:33:30.028 [2024-10-08 18:43:58.437197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.028 [2024-10-08 18:43:58.437250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.028 qpair failed and we were unable to recover it. 00:33:30.028 [2024-10-08 18:43:58.437392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.028 [2024-10-08 18:43:58.437426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.028 qpair failed and we were unable to recover it. 00:33:30.028 [2024-10-08 18:43:58.437638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.028 [2024-10-08 18:43:58.437684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.028 qpair failed and we were unable to recover it. 00:33:30.028 [2024-10-08 18:43:58.437898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.028 [2024-10-08 18:43:58.437948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.028 qpair failed and we were unable to recover it. 
00:33:30.028 [2024-10-08 18:43:58.438192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.028 [2024-10-08 18:43:58.438242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.028 qpair failed and we were unable to recover it. 00:33:30.028 [2024-10-08 18:43:58.438385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.028 [2024-10-08 18:43:58.438408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.028 qpair failed and we were unable to recover it. 00:33:30.028 [2024-10-08 18:43:58.438641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.028 [2024-10-08 18:43:58.438673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.028 qpair failed and we were unable to recover it. 00:33:30.028 [2024-10-08 18:43:58.438866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.028 [2024-10-08 18:43:58.438893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.028 qpair failed and we were unable to recover it. 00:33:30.028 [2024-10-08 18:43:58.439081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.028 [2024-10-08 18:43:58.439131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.028 qpair failed and we were unable to recover it. 00:33:30.028 [2024-10-08 18:43:58.439389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.028 [2024-10-08 18:43:58.439439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.028 qpair failed and we were unable to recover it. 00:33:30.028 [2024-10-08 18:43:58.439675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.028 [2024-10-08 18:43:58.439701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.028 qpair failed and we were unable to recover it. 00:33:30.028 [2024-10-08 18:43:58.439907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.028 [2024-10-08 18:43:58.439932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.028 qpair failed and we were unable to recover it. 00:33:30.028 [2024-10-08 18:43:58.440176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.028 [2024-10-08 18:43:58.440225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.028 qpair failed and we were unable to recover it. 00:33:30.028 [2024-10-08 18:43:58.440417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.028 [2024-10-08 18:43:58.440477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.028 qpair failed and we were unable to recover it. 
00:33:30.028 [2024-10-08 18:43:58.440684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.028 [2024-10-08 18:43:58.440710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.028 qpair failed and we were unable to recover it. 00:33:30.028 [2024-10-08 18:43:58.440896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.028 [2024-10-08 18:43:58.440921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.028 qpair failed and we were unable to recover it. 00:33:30.028 [2024-10-08 18:43:58.441088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.028 [2024-10-08 18:43:58.441140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.028 qpair failed and we were unable to recover it. 00:33:30.028 [2024-10-08 18:43:58.441327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.028 [2024-10-08 18:43:58.441375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.028 qpair failed and we were unable to recover it. 00:33:30.028 [2024-10-08 18:43:58.441536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.028 [2024-10-08 18:43:58.441560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.028 qpair failed and we were unable to recover it. 00:33:30.028 [2024-10-08 18:43:58.441744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.028 [2024-10-08 18:43:58.441785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.028 qpair failed and we were unable to recover it. 00:33:30.028 [2024-10-08 18:43:58.441987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.029 [2024-10-08 18:43:58.442038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.029 qpair failed and we were unable to recover it. 00:33:30.029 [2024-10-08 18:43:58.442293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.029 [2024-10-08 18:43:58.442343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.029 qpair failed and we were unable to recover it. 00:33:30.029 [2024-10-08 18:43:58.442487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.029 [2024-10-08 18:43:58.442511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.029 qpair failed and we were unable to recover it. 00:33:30.029 [2024-10-08 18:43:58.442645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.029 [2024-10-08 18:43:58.442677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.029 qpair failed and we were unable to recover it. 
00:33:30.029 [2024-10-08 18:43:58.442833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.029 [2024-10-08 18:43:58.442890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.029 qpair failed and we were unable to recover it. 00:33:30.029 [2024-10-08 18:43:58.443056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.029 [2024-10-08 18:43:58.443106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.029 qpair failed and we were unable to recover it. 00:33:30.029 [2024-10-08 18:43:58.443268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.029 [2024-10-08 18:43:58.443317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.029 qpair failed and we were unable to recover it. 00:33:30.029 [2024-10-08 18:43:58.443482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.029 [2024-10-08 18:43:58.443506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.029 qpair failed and we were unable to recover it. 00:33:30.029 [2024-10-08 18:43:58.443639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.029 [2024-10-08 18:43:58.443671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.029 qpair failed and we were unable to recover it. 00:33:30.029 [2024-10-08 18:43:58.443802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.029 [2024-10-08 18:43:58.443828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.029 qpair failed and we were unable to recover it. 00:33:30.029 [2024-10-08 18:43:58.444018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.029 [2024-10-08 18:43:58.444043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.029 qpair failed and we were unable to recover it. 00:33:30.029 [2024-10-08 18:43:58.444232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.029 [2024-10-08 18:43:58.444256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.029 qpair failed and we were unable to recover it. 00:33:30.029 [2024-10-08 18:43:58.444400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.029 [2024-10-08 18:43:58.444424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.029 qpair failed and we were unable to recover it. 00:33:30.029 [2024-10-08 18:43:58.444604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.029 [2024-10-08 18:43:58.444628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.029 qpair failed and we were unable to recover it. 
00:33:30.029 [2024-10-08 18:43:58.444839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.029 [2024-10-08 18:43:58.444890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.029 qpair failed and we were unable to recover it. 00:33:30.029 [2024-10-08 18:43:58.445115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.029 [2024-10-08 18:43:58.445162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.029 qpair failed and we were unable to recover it. 00:33:30.029 [2024-10-08 18:43:58.445302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.029 [2024-10-08 18:43:58.445351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.029 qpair failed and we were unable to recover it. 00:33:30.029 [2024-10-08 18:43:58.445553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.029 [2024-10-08 18:43:58.445577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.029 qpair failed and we were unable to recover it. 00:33:30.029 [2024-10-08 18:43:58.445836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.029 [2024-10-08 18:43:58.445886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.029 qpair failed and we were unable to recover it. 00:33:30.029 [2024-10-08 18:43:58.446105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.029 [2024-10-08 18:43:58.446153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.029 qpair failed and we were unable to recover it. 00:33:30.029 [2024-10-08 18:43:58.446363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.029 [2024-10-08 18:43:58.446411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.029 qpair failed and we were unable to recover it. 00:33:30.029 [2024-10-08 18:43:58.446598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.029 [2024-10-08 18:43:58.446623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.029 qpair failed and we were unable to recover it. 00:33:30.029 [2024-10-08 18:43:58.446903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.029 [2024-10-08 18:43:58.446964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.029 qpair failed and we were unable to recover it. 00:33:30.029 [2024-10-08 18:43:58.447174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.029 [2024-10-08 18:43:58.447224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.029 qpair failed and we were unable to recover it. 
00:33:30.029 [2024-10-08 18:43:58.447371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.029 [2024-10-08 18:43:58.447423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.029 qpair failed and we were unable to recover it. 00:33:30.029 [2024-10-08 18:43:58.447559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.029 [2024-10-08 18:43:58.447598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.029 qpair failed and we were unable to recover it. 00:33:30.029 [2024-10-08 18:43:58.447838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.029 [2024-10-08 18:43:58.447889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.029 qpair failed and we were unable to recover it. 00:33:30.029 [2024-10-08 18:43:58.448108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.029 [2024-10-08 18:43:58.448155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.029 qpair failed and we were unable to recover it. 00:33:30.029 [2024-10-08 18:43:58.448328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.029 [2024-10-08 18:43:58.448379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.029 qpair failed and we were unable to recover it. 00:33:30.029 [2024-10-08 18:43:58.448540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.029 [2024-10-08 18:43:58.448563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.029 qpair failed and we were unable to recover it. 00:33:30.029 [2024-10-08 18:43:58.448797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.029 [2024-10-08 18:43:58.448848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.029 qpair failed and we were unable to recover it. 00:33:30.029 [2024-10-08 18:43:58.449124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.029 [2024-10-08 18:43:58.449174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.029 qpair failed and we were unable to recover it. 00:33:30.029 [2024-10-08 18:43:58.449368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.029 [2024-10-08 18:43:58.449420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.029 qpair failed and we were unable to recover it. 00:33:30.029 [2024-10-08 18:43:58.449571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.029 [2024-10-08 18:43:58.449599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.029 qpair failed and we were unable to recover it. 
00:33:30.029 [2024-10-08 18:43:58.449797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.029 [2024-10-08 18:43:58.449848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.029 qpair failed and we were unable to recover it. 00:33:30.029 [2024-10-08 18:43:58.449979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.029 [2024-10-08 18:43:58.450027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.029 qpair failed and we were unable to recover it. 00:33:30.029 [2024-10-08 18:43:58.450199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.029 [2024-10-08 18:43:58.450246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.029 qpair failed and we were unable to recover it. 00:33:30.029 [2024-10-08 18:43:58.450350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.029 [2024-10-08 18:43:58.450392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.029 qpair failed and we were unable to recover it. 00:33:30.029 [2024-10-08 18:43:58.450536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.029 [2024-10-08 18:43:58.450561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.029 qpair failed and we were unable to recover it. 00:33:30.029 [2024-10-08 18:43:58.450754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.030 [2024-10-08 18:43:58.450810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.030 qpair failed and we were unable to recover it. 00:33:30.030 [2024-10-08 18:43:58.451005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.030 [2024-10-08 18:43:58.451029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.030 qpair failed and we were unable to recover it. 00:33:30.030 [2024-10-08 18:43:58.451184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.030 [2024-10-08 18:43:58.451208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.030 qpair failed and we were unable to recover it. 00:33:30.030 [2024-10-08 18:43:58.451382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.030 [2024-10-08 18:43:58.451406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.030 qpair failed and we were unable to recover it. 00:33:30.030 [2024-10-08 18:43:58.451612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.030 [2024-10-08 18:43:58.451657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.030 qpair failed and we were unable to recover it. 
00:33:30.030 [2024-10-08 18:43:58.451819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.030 [2024-10-08 18:43:58.451867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.030 qpair failed and we were unable to recover it. 00:33:30.030 [2024-10-08 18:43:58.452020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.030 [2024-10-08 18:43:58.452079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.030 qpair failed and we were unable to recover it. 00:33:30.030 [2024-10-08 18:43:58.452273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.030 [2024-10-08 18:43:58.452325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.030 qpair failed and we were unable to recover it. 00:33:30.030 [2024-10-08 18:43:58.452541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.030 [2024-10-08 18:43:58.452565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.030 qpair failed and we were unable to recover it. 00:33:30.030 [2024-10-08 18:43:58.452717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.030 [2024-10-08 18:43:58.452758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.030 qpair failed and we were unable to recover it. 00:33:30.030 [2024-10-08 18:43:58.452911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.030 [2024-10-08 18:43:58.452960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.030 qpair failed and we were unable to recover it. 00:33:30.030 [2024-10-08 18:43:58.453184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.030 [2024-10-08 18:43:58.453232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.030 qpair failed and we were unable to recover it. 00:33:30.030 [2024-10-08 18:43:58.453377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.030 [2024-10-08 18:43:58.453401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.030 qpair failed and we were unable to recover it. 00:33:30.030 [2024-10-08 18:43:58.453533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.030 [2024-10-08 18:43:58.453558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.030 qpair failed and we were unable to recover it. 00:33:30.030 [2024-10-08 18:43:58.453702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.030 [2024-10-08 18:43:58.453727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.030 qpair failed and we were unable to recover it. 
00:33:30.030 [2024-10-08 18:43:58.453928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.030 [2024-10-08 18:43:58.453966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.030 qpair failed and we were unable to recover it. 00:33:30.030 [2024-10-08 18:43:58.454176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.030 [2024-10-08 18:43:58.454200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.030 qpair failed and we were unable to recover it. 00:33:30.030 [2024-10-08 18:43:58.454298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.030 [2024-10-08 18:43:58.454338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.030 qpair failed and we were unable to recover it. 00:33:30.030 [2024-10-08 18:43:58.454496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.030 [2024-10-08 18:43:58.454520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.030 qpair failed and we were unable to recover it. 00:33:30.030 [2024-10-08 18:43:58.454742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.030 [2024-10-08 18:43:58.454792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.030 qpair failed and we were unable to recover it. 00:33:30.030 [2024-10-08 18:43:58.454990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.030 [2024-10-08 18:43:58.455038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.030 qpair failed and we were unable to recover it. 00:33:30.030 [2024-10-08 18:43:58.455250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.030 [2024-10-08 18:43:58.455300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.030 qpair failed and we were unable to recover it. 00:33:30.030 [2024-10-08 18:43:58.455551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.030 [2024-10-08 18:43:58.455575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.030 qpair failed and we were unable to recover it. 00:33:30.030 [2024-10-08 18:43:58.455828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.030 [2024-10-08 18:43:58.455876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.030 qpair failed and we were unable to recover it. 00:33:30.030 [2024-10-08 18:43:58.456177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.030 [2024-10-08 18:43:58.456226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.030 qpair failed and we were unable to recover it. 
00:33:30.030 [2024-10-08 18:43:58.456476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.030 [2024-10-08 18:43:58.456524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.030 qpair failed and we were unable to recover it. 00:33:30.030 [2024-10-08 18:43:58.456765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.030 [2024-10-08 18:43:58.456791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.030 qpair failed and we were unable to recover it. 00:33:30.030 [2024-10-08 18:43:58.457001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.030 [2024-10-08 18:43:58.457059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.030 qpair failed and we were unable to recover it. 00:33:30.030 [2024-10-08 18:43:58.457239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.030 [2024-10-08 18:43:58.457292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.030 qpair failed and we were unable to recover it. 00:33:30.030 [2024-10-08 18:43:58.457405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.030 [2024-10-08 18:43:58.457429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.030 qpair failed and we were unable to recover it. 00:33:30.030 [2024-10-08 18:43:58.457590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.030 [2024-10-08 18:43:58.457615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.030 qpair failed and we were unable to recover it. 00:33:30.030 [2024-10-08 18:43:58.457827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.030 [2024-10-08 18:43:58.457879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.030 qpair failed and we were unable to recover it. 00:33:30.030 [2024-10-08 18:43:58.458120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.030 [2024-10-08 18:43:58.458169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.030 qpair failed and we were unable to recover it. 00:33:30.030 [2024-10-08 18:43:58.458306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.030 [2024-10-08 18:43:58.458354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.030 qpair failed and we were unable to recover it. 00:33:30.030 [2024-10-08 18:43:58.458514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.030 [2024-10-08 18:43:58.458538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 
00:33:30.031 [2024-10-08 18:43:58.458679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-10-08 18:43:58.458719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-10-08 18:43:58.458895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-10-08 18:43:58.458946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-10-08 18:43:58.459105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-10-08 18:43:58.459155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-10-08 18:43:58.459361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-10-08 18:43:58.459414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-10-08 18:43:58.459722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-10-08 18:43:58.459786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-10-08 18:43:58.459984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-10-08 18:43:58.460036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-10-08 18:43:58.460296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-10-08 18:43:58.460343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-10-08 18:43:58.460505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-10-08 18:43:58.460529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-10-08 18:43:58.460724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-10-08 18:43:58.460788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-10-08 18:43:58.461028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-10-08 18:43:58.461077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 
00:33:30.031 [2024-10-08 18:43:58.461344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-10-08 18:43:58.461392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-10-08 18:43:58.461551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-10-08 18:43:58.461575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-10-08 18:43:58.461767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-10-08 18:43:58.461818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-10-08 18:43:58.461951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-10-08 18:43:58.462009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-10-08 18:43:58.462238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-10-08 18:43:58.462288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-10-08 18:43:58.462535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-10-08 18:43:58.462559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-10-08 18:43:58.462787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-10-08 18:43:58.462837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-10-08 18:43:58.463092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-10-08 18:43:58.463142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-10-08 18:43:58.463355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-10-08 18:43:58.463399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-10-08 18:43:58.463592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-10-08 18:43:58.463617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 
00:33:30.031 [2024-10-08 18:43:58.463794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-10-08 18:43:58.463853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-10-08 18:43:58.464100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-10-08 18:43:58.464150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-10-08 18:43:58.464383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-10-08 18:43:58.464430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-10-08 18:43:58.464583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-10-08 18:43:58.464607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-10-08 18:43:58.464824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-10-08 18:43:58.464851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-10-08 18:43:58.464993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-10-08 18:43:58.465041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-10-08 18:43:58.465285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-10-08 18:43:58.465333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-10-08 18:43:58.465467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-10-08 18:43:58.465490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-10-08 18:43:58.465686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-10-08 18:43:58.465728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-10-08 18:43:58.465892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-10-08 18:43:58.465940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 
00:33:30.031 [2024-10-08 18:43:58.466171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-10-08 18:43:58.466227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-10-08 18:43:58.466473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-10-08 18:43:58.466497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-10-08 18:43:58.466708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-10-08 18:43:58.466733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-10-08 18:43:58.466978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-10-08 18:43:58.467028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-10-08 18:43:58.467174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-10-08 18:43:58.467225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-10-08 18:43:58.467465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-10-08 18:43:58.467489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-10-08 18:43:58.467681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-10-08 18:43:58.467733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-10-08 18:43:58.467879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-10-08 18:43:58.467929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-10-08 18:43:58.468107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-10-08 18:43:58.468157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-10-08 18:43:58.468295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-10-08 18:43:58.468351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 
00:33:30.032 [2024-10-08 18:43:58.468584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-10-08 18:43:58.468607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-10-08 18:43:58.468872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-10-08 18:43:58.468922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-10-08 18:43:58.469124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-10-08 18:43:58.469174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-10-08 18:43:58.469333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-10-08 18:43:58.469383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-10-08 18:43:58.469555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-10-08 18:43:58.469580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-10-08 18:43:58.469777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-10-08 18:43:58.469826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-10-08 18:43:58.470108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-10-08 18:43:58.470151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-10-08 18:43:58.470288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-10-08 18:43:58.470337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-10-08 18:43:58.470512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-10-08 18:43:58.470537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-10-08 18:43:58.470795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-10-08 18:43:58.470845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 
00:33:30.032 [2024-10-08 18:43:58.470997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-10-08 18:43:58.471049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-10-08 18:43:58.471294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-10-08 18:43:58.471342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-10-08 18:43:58.471545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-10-08 18:43:58.471569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-10-08 18:43:58.471742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-10-08 18:43:58.471783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-10-08 18:43:58.471974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-10-08 18:43:58.472026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-10-08 18:43:58.472218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-10-08 18:43:58.472266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-10-08 18:43:58.472517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-10-08 18:43:58.472541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-10-08 18:43:58.472781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-10-08 18:43:58.472831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-10-08 18:43:58.472985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-10-08 18:43:58.473028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-10-08 18:43:58.473285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-10-08 18:43:58.473344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 
00:33:30.032 [2024-10-08 18:43:58.473492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-10-08 18:43:58.473515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-10-08 18:43:58.473646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-10-08 18:43:58.473677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-10-08 18:43:58.473825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-10-08 18:43:58.473879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-10-08 18:43:58.474041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-10-08 18:43:58.474065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-10-08 18:43:58.474302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-10-08 18:43:58.474326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-10-08 18:43:58.474473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-10-08 18:43:58.474497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-10-08 18:43:58.474631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-10-08 18:43:58.474677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-10-08 18:43:58.474904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-10-08 18:43:58.474930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-10-08 18:43:58.475092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-10-08 18:43:58.475116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-10-08 18:43:58.475224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-10-08 18:43:58.475263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 
00:33:30.032 [2024-10-08 18:43:58.475436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-10-08 18:43:58.475479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-10-08 18:43:58.475664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-10-08 18:43:58.475691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-10-08 18:43:58.475894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-10-08 18:43:58.475951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-10-08 18:43:58.476100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-10-08 18:43:58.476150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-10-08 18:43:58.476394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-10-08 18:43:58.476445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-10-08 18:43:58.476656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-10-08 18:43:58.476683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-10-08 18:43:58.476784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-10-08 18:43:58.476810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-10-08 18:43:58.477070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-10-08 18:43:58.477127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-10-08 18:43:58.477301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-10-08 18:43:58.477349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-10-08 18:43:58.477537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-10-08 18:43:58.477563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 
00:33:30.032 [2024-10-08 18:43:58.477741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-10-08 18:43:58.477765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-10-08 18:43:58.477941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-10-08 18:43:58.477989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-10-08 18:43:58.478178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-10-08 18:43:58.478228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-10-08 18:43:58.478431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-10-08 18:43:58.478481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-10-08 18:43:58.478750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-10-08 18:43:58.478775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-10-08 18:43:58.479036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-10-08 18:43:58.479087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-10-08 18:43:58.479293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-10-08 18:43:58.479341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-10-08 18:43:58.479490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-10-08 18:43:58.479514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-10-08 18:43:58.479625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-10-08 18:43:58.479654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-10-08 18:43:58.479820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-10-08 18:43:58.479883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 
00:33:30.033 [2024-10-08 18:43:58.480139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-10-08 18:43:58.480189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-10-08 18:43:58.480349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-10-08 18:43:58.480436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-10-08 18:43:58.480577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-10-08 18:43:58.480616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-10-08 18:43:58.480841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-10-08 18:43:58.480893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-10-08 18:43:58.481062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-10-08 18:43:58.481110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-10-08 18:43:58.481233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-10-08 18:43:58.481272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-10-08 18:43:58.481467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-10-08 18:43:58.481491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-10-08 18:43:58.481631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-10-08 18:43:58.481687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-10-08 18:43:58.481888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-10-08 18:43:58.481913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-10-08 18:43:58.482128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-10-08 18:43:58.482152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 
00:33:30.033 [2024-10-08 18:43:58.482288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-10-08 18:43:58.482340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-10-08 18:43:58.482583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-10-08 18:43:58.482606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-10-08 18:43:58.482816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-10-08 18:43:58.482864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-10-08 18:43:58.483039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-10-08 18:43:58.483090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-10-08 18:43:58.483336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-10-08 18:43:58.483384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-10-08 18:43:58.483554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-10-08 18:43:58.483577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-10-08 18:43:58.483731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-10-08 18:43:58.483784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-10-08 18:43:58.483992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-10-08 18:43:58.484041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-10-08 18:43:58.484190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-10-08 18:43:58.484242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-10-08 18:43:58.484373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-10-08 18:43:58.484412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 
00:33:30.033 [2024-10-08 18:43:58.484571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-10-08 18:43:58.484620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-10-08 18:43:58.484774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-10-08 18:43:58.484827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-10-08 18:43:58.485115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-10-08 18:43:58.485162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-10-08 18:43:58.485388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-10-08 18:43:58.485438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-10-08 18:43:58.485582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-10-08 18:43:58.485605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-10-08 18:43:58.485753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-10-08 18:43:58.485840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-10-08 18:43:58.486071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-10-08 18:43:58.486095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-10-08 18:43:58.486288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-10-08 18:43:58.486339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-10-08 18:43:58.486554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-10-08 18:43:58.486578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-10-08 18:43:58.486729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-10-08 18:43:58.486779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 
00:33:30.033 [2024-10-08 18:43:58.487005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-10-08 18:43:58.487055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-10-08 18:43:58.487267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-10-08 18:43:58.487317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.034 [2024-10-08 18:43:58.487455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-10-08 18:43:58.487479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-10-08 18:43:58.487655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-10-08 18:43:58.487694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-10-08 18:43:58.487923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-10-08 18:43:58.487982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-10-08 18:43:58.488195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-10-08 18:43:58.488241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-10-08 18:43:58.488451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-10-08 18:43:58.488499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-10-08 18:43:58.488710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-10-08 18:43:58.488783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-10-08 18:43:58.489013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-10-08 18:43:58.489063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-10-08 18:43:58.489305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-10-08 18:43:58.489356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 
00:33:30.034 [2024-10-08 18:43:58.489603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-10-08 18:43:58.489627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-10-08 18:43:58.489795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-10-08 18:43:58.489821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-10-08 18:43:58.489952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-10-08 18:43:58.490007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-10-08 18:43:58.490174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-10-08 18:43:58.490214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-10-08 18:43:58.490423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-10-08 18:43:58.490472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-10-08 18:43:58.490680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-10-08 18:43:58.490705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-10-08 18:43:58.490882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-10-08 18:43:58.490907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-10-08 18:43:58.491093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-10-08 18:43:58.491142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-10-08 18:43:58.491301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-10-08 18:43:58.491351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-10-08 18:43:58.491558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-10-08 18:43:58.491582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 
00:33:30.034 [2024-10-08 18:43:58.491780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-10-08 18:43:58.491830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-10-08 18:43:58.492090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-10-08 18:43:58.492139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-10-08 18:43:58.492338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-10-08 18:43:58.492387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-10-08 18:43:58.492557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-10-08 18:43:58.492581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-10-08 18:43:58.492904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-10-08 18:43:58.492967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-10-08 18:43:58.493169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-10-08 18:43:58.493216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-10-08 18:43:58.493433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-10-08 18:43:58.493485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-10-08 18:43:58.493690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-10-08 18:43:58.493715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-10-08 18:43:58.493941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-10-08 18:43:58.493980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-10-08 18:43:58.494194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-10-08 18:43:58.494243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 
00:33:30.034 [2024-10-08 18:43:58.494457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-10-08 18:43:58.494511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-10-08 18:43:58.494724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-10-08 18:43:58.494749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-10-08 18:43:58.495002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-10-08 18:43:58.495051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-10-08 18:43:58.495265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-10-08 18:43:58.495313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-10-08 18:43:58.495513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-10-08 18:43:58.495537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-10-08 18:43:58.495755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-10-08 18:43:58.495813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-10-08 18:43:58.496048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-10-08 18:43:58.496092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-10-08 18:43:58.496264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-10-08 18:43:58.496314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.035 [2024-10-08 18:43:58.496494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-10-08 18:43:58.496518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-10-08 18:43:58.496723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-10-08 18:43:58.496786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 
00:33:30.035 [2024-10-08 18:43:58.497046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-10-08 18:43:58.497101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-10-08 18:43:58.497240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-10-08 18:43:58.497288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-10-08 18:43:58.497463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-10-08 18:43:58.497487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-10-08 18:43:58.497672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-10-08 18:43:58.497712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-10-08 18:43:58.497960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-10-08 18:43:58.498010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-10-08 18:43:58.498196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-10-08 18:43:58.498247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-10-08 18:43:58.498453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-10-08 18:43:58.498503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-10-08 18:43:58.498729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-10-08 18:43:58.498778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-10-08 18:43:58.498963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-10-08 18:43:58.499013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-10-08 18:43:58.499224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-10-08 18:43:58.499273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 
00:33:30.035 [2024-10-08 18:43:58.499439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-10-08 18:43:58.499463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-10-08 18:43:58.499697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-10-08 18:43:58.499722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-10-08 18:43:58.499886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-10-08 18:43:58.499934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-10-08 18:43:58.500144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-10-08 18:43:58.500192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-10-08 18:43:58.500440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-10-08 18:43:58.500489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-10-08 18:43:58.500716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-10-08 18:43:58.500765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-10-08 18:43:58.500994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-10-08 18:43:58.501049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-10-08 18:43:58.501300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-10-08 18:43:58.501348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-10-08 18:43:58.501585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-10-08 18:43:58.501609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-10-08 18:43:58.501845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-10-08 18:43:58.501893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 
00:33:30.035 [2024-10-08 18:43:58.502142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-10-08 18:43:58.502191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-10-08 18:43:58.502339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-10-08 18:43:58.502382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-10-08 18:43:58.502620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-10-08 18:43:58.502644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-10-08 18:43:58.502892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-10-08 18:43:58.502916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-10-08 18:43:58.503162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-10-08 18:43:58.503212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-10-08 18:43:58.503483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-10-08 18:43:58.503535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-10-08 18:43:58.503765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-10-08 18:43:58.503791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-10-08 18:43:58.504017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-10-08 18:43:58.504062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-10-08 18:43:58.504171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-10-08 18:43:58.504221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-10-08 18:43:58.504413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-10-08 18:43:58.504459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 
00:33:30.035 [2024-10-08 18:43:58.504697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-10-08 18:43:58.504727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-10-08 18:43:58.504964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-10-08 18:43:58.505017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-10-08 18:43:58.505249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-10-08 18:43:58.505298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-10-08 18:43:58.505495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-10-08 18:43:58.505551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-10-08 18:43:58.505706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-10-08 18:43:58.505766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-10-08 18:43:58.506003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-10-08 18:43:58.506054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-10-08 18:43:58.506266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-10-08 18:43:58.506315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-10-08 18:43:58.506497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-10-08 18:43:58.506521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.036 [2024-10-08 18:43:58.506689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-10-08 18:43:58.506714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-10-08 18:43:58.506872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-10-08 18:43:58.506922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 
00:33:30.036 [2024-10-08 18:43:58.507118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-10-08 18:43:58.507168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-10-08 18:43:58.507349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-10-08 18:43:58.507400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-10-08 18:43:58.507566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-10-08 18:43:58.507590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-10-08 18:43:58.507806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-10-08 18:43:58.507856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-10-08 18:43:58.508059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-10-08 18:43:58.508110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-10-08 18:43:58.508333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-10-08 18:43:58.508383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-10-08 18:43:58.508586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-10-08 18:43:58.508610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-10-08 18:43:58.508807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-10-08 18:43:58.508859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-10-08 18:43:58.509004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-10-08 18:43:58.509074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-10-08 18:43:58.509274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-10-08 18:43:58.509323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 
00:33:30.036 [2024-10-08 18:43:58.509480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-10-08 18:43:58.509504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-10-08 18:43:58.509637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-10-08 18:43:58.509677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-10-08 18:43:58.509871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-10-08 18:43:58.509920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-10-08 18:43:58.510078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-10-08 18:43:58.510131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-10-08 18:43:58.510240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-10-08 18:43:58.510279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-10-08 18:43:58.510447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-10-08 18:43:58.510472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-10-08 18:43:58.510605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-10-08 18:43:58.510629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-10-08 18:43:58.510842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-10-08 18:43:58.510882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-10-08 18:43:58.511105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-10-08 18:43:58.511129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-10-08 18:43:58.511344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-10-08 18:43:58.511393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 
00:33:30.036 [2024-10-08 18:43:58.511559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-10-08 18:43:58.511583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-10-08 18:43:58.511827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-10-08 18:43:58.511877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-10-08 18:43:58.512097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-10-08 18:43:58.512145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-10-08 18:43:58.512369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-10-08 18:43:58.512418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-10-08 18:43:58.512557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-10-08 18:43:58.512581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-10-08 18:43:58.512801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-10-08 18:43:58.512851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-10-08 18:43:58.513059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-10-08 18:43:58.513109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-10-08 18:43:58.513298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-10-08 18:43:58.513348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-10-08 18:43:58.513533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-10-08 18:43:58.513557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-10-08 18:43:58.513757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-10-08 18:43:58.513806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 
00:33:30.036 [2024-10-08 18:43:58.514057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-10-08 18:43:58.514112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-10-08 18:43:58.514359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-10-08 18:43:58.514409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-10-08 18:43:58.514549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-10-08 18:43:58.514573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-10-08 18:43:58.514772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-10-08 18:43:58.514824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-10-08 18:43:58.515082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-10-08 18:43:58.515131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-10-08 18:43:58.515333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-10-08 18:43:58.515383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-10-08 18:43:58.515534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-10-08 18:43:58.515558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-10-08 18:43:58.515746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-10-08 18:43:58.515796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-10-08 18:43:58.516025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-10-08 18:43:58.516072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-10-08 18:43:58.516332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-10-08 18:43:58.516381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 
00:33:30.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1341695 Killed "${NVMF_APP[@]}" "$@" 00:33:30.036 [2024-10-08 18:43:58.516540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-10-08 18:43:58.516563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.037 [2024-10-08 18:43:58.516738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-10-08 18:43:58.516806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 00:33:30.037 [2024-10-08 18:43:58.516995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-10-08 18:43:58.517047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 00:33:30.037 18:43:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:33:30.037 [2024-10-08 18:43:58.517267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-10-08 18:43:58.517317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 00:33:30.037 18:43:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:33:30.037 18:43:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:33:30.037 [2024-10-08 18:43:58.517517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-10-08 18:43:58.517542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 00:33:30.037 18:43:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:30.037 [2024-10-08 18:43:58.517795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-10-08 18:43:58.517844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 00:33:30.037 18:43:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:30.037 [2024-10-08 18:43:58.518058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-10-08 18:43:58.518106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 
00:33:30.037 [2024-10-08 18:43:58.518364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-10-08 18:43:58.518413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 00:33:30.037 [2024-10-08 18:43:58.518534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-10-08 18:43:58.518559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 00:33:30.037 [2024-10-08 18:43:58.518723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-10-08 18:43:58.518776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 00:33:30.037 [2024-10-08 18:43:58.518993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-10-08 18:43:58.519043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 00:33:30.037 [2024-10-08 18:43:58.519207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-10-08 18:43:58.519259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 00:33:30.037 [2024-10-08 18:43:58.519444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-10-08 18:43:58.519468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 00:33:30.037 [2024-10-08 18:43:58.519648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-10-08 18:43:58.519704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 00:33:30.037 [2024-10-08 18:43:58.519848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-10-08 18:43:58.519901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 00:33:30.037 [2024-10-08 18:43:58.520070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-10-08 18:43:58.520121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 00:33:30.037 [2024-10-08 18:43:58.520309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-10-08 18:43:58.520358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 
00:33:30.037 [2024-10-08 18:43:58.520533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-10-08 18:43:58.520558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 00:33:30.037 [2024-10-08 18:43:58.520694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-10-08 18:43:58.520721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 00:33:30.037 [2024-10-08 18:43:58.520838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-10-08 18:43:58.520901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 00:33:30.037 [2024-10-08 18:43:58.521083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-10-08 18:43:58.521133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 00:33:30.037 [2024-10-08 18:43:58.521274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-10-08 18:43:58.521324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 00:33:30.037 [2024-10-08 18:43:58.521524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-10-08 18:43:58.521549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 00:33:30.037 [2024-10-08 18:43:58.521753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-10-08 18:43:58.521805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 00:33:30.037 [2024-10-08 18:43:58.521960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-10-08 18:43:58.521985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 00:33:30.037 [2024-10-08 18:43:58.522241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-10-08 18:43:58.522289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 00:33:30.037 [2024-10-08 18:43:58.522437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-10-08 18:43:58.522462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 
00:33:30.037 [2024-10-08 18:43:58.522619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.037 [2024-10-08 18:43:58.522665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420
00:33:30.037 qpair failed and we were unable to recover it.
00:33:30.037 18:43:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=1342255
00:33:30.037 [2024-10-08 18:43:58.522878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.037 18:43:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 1342255
00:33:30.037 [2024-10-08 18:43:58.522904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420
00:33:30.037 qpair failed and we were unable to recover it.
00:33:30.037 18:43:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:33:30.037 [2024-10-08 18:43:58.523063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.037 [2024-10-08 18:43:58.523117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420
00:33:30.037 qpair failed and we were unable to recover it.
00:33:30.037 18:43:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1342255 ']'
00:33:30.037 [2024-10-08 18:43:58.523315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.037 [2024-10-08 18:43:58.523367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420
00:33:30.037 qpair failed and we were unable to recover it.
00:33:30.037 18:43:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:30.037 18:43:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100
00:33:30.037 [2024-10-08 18:43:58.523557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.037 [2024-10-08 18:43:58.523581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420
00:33:30.037 qpair failed and we were unable to recover it.
00:33:30.037 18:43:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:33:30.037 [2024-10-08 18:43:58.523721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:33:30.037 [2024-10-08 18:43:58.523772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420
00:33:30.037 qpair failed and we were unable to recover it.
00:33:30.037 18:43:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable
00:33:30.037 [2024-10-08 18:43:58.523965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.037 [2024-10-08 18:43:58.524014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420
00:33:30.037 qpair failed and we were unable to recover it.
00:33:30.037 18:43:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:30.037 [2024-10-08 18:43:58.524161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.037 [2024-10-08 18:43:58.524212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420
00:33:30.037 qpair failed and we were unable to recover it.
00:33:30.037 [2024-10-08 18:43:58.524392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.037 [2024-10-08 18:43:58.524428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420
00:33:30.037 qpair failed and we were unable to recover it.
00:33:30.037 [2024-10-08 18:43:58.524609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.037 [2024-10-08 18:43:58.524665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420
00:33:30.037 qpair failed and we were unable to recover it.
00:33:30.037 [2024-10-08 18:43:58.524795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.037 [2024-10-08 18:43:58.524844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420
00:33:30.037 qpair failed and we were unable to recover it.
00:33:30.037 [2024-10-08 18:43:58.524963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.038 [2024-10-08 18:43:58.525013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420
00:33:30.038 qpair failed and we were unable to recover it.
00:33:30.038 [2024-10-08 18:43:58.525154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.038 [2024-10-08 18:43:58.525197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420
00:33:30.038 qpair failed and we were unable to recover it.
00:33:30.038 [2024-10-08 18:43:58.525361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.038 [2024-10-08 18:43:58.525387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420
00:33:30.038 qpair failed and we were unable to recover it.
00:33:30.038 [2024-10-08 18:43:58.525576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.038 [2024-10-08 18:43:58.525602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420
00:33:30.038 qpair failed and we were unable to recover it.
00:33:30.038 [2024-10-08 18:43:58.525785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.038 [2024-10-08 18:43:58.525837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.038 qpair failed and we were unable to recover it. 00:33:30.038 [2024-10-08 18:43:58.526026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.038 [2024-10-08 18:43:58.526070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.038 qpair failed and we were unable to recover it. 00:33:30.038 [2024-10-08 18:43:58.526195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.038 [2024-10-08 18:43:58.526241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.038 qpair failed and we were unable to recover it. 00:33:30.038 [2024-10-08 18:43:58.526400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.038 [2024-10-08 18:43:58.526428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.038 qpair failed and we were unable to recover it. 00:33:30.038 [2024-10-08 18:43:58.526597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.305 [2024-10-08 18:43:58.526622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.305 qpair failed and we were unable to recover it. 00:33:30.305 [2024-10-08 18:43:58.526763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.305 [2024-10-08 18:43:58.526814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.305 qpair failed and we were unable to recover it. 00:33:30.305 [2024-10-08 18:43:58.526932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.305 [2024-10-08 18:43:58.526982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.305 qpair failed and we were unable to recover it. 00:33:30.305 [2024-10-08 18:43:58.527140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.305 [2024-10-08 18:43:58.527194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.305 qpair failed and we were unable to recover it. 00:33:30.305 [2024-10-08 18:43:58.527373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.305 [2024-10-08 18:43:58.527399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.305 qpair failed and we were unable to recover it. 00:33:30.305 [2024-10-08 18:43:58.527505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.305 [2024-10-08 18:43:58.527531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.305 qpair failed and we were unable to recover it. 
00:33:30.305 [2024-10-08 18:43:58.527630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.305 [2024-10-08 18:43:58.527665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.305 qpair failed and we were unable to recover it. 00:33:30.305 [2024-10-08 18:43:58.527785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.305 [2024-10-08 18:43:58.527812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.305 qpair failed and we were unable to recover it. 00:33:30.305 [2024-10-08 18:43:58.527944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.305 [2024-10-08 18:43:58.527974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.305 qpair failed and we were unable to recover it. 00:33:30.305 [2024-10-08 18:43:58.528137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.305 [2024-10-08 18:43:58.528163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.305 qpair failed and we were unable to recover it. 00:33:30.305 [2024-10-08 18:43:58.528297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.305 [2024-10-08 18:43:58.528325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.305 qpair failed and we were unable to recover it. 00:33:30.305 [2024-10-08 18:43:58.528455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.305 [2024-10-08 18:43:58.528482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.305 qpair failed and we were unable to recover it. 00:33:30.305 [2024-10-08 18:43:58.528608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.305 [2024-10-08 18:43:58.528636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.305 qpair failed and we were unable to recover it. 00:33:30.305 [2024-10-08 18:43:58.528789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.305 [2024-10-08 18:43:58.528816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.305 qpair failed and we were unable to recover it. 00:33:30.305 [2024-10-08 18:43:58.528962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.305 [2024-10-08 18:43:58.528989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.305 qpair failed and we were unable to recover it. 00:33:30.305 [2024-10-08 18:43:58.529118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.305 [2024-10-08 18:43:58.529144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.305 qpair failed and we were unable to recover it. 
00:33:30.305 [2024-10-08 18:43:58.529300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.305 [2024-10-08 18:43:58.529327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.305 qpair failed and we were unable to recover it. 00:33:30.305 [2024-10-08 18:43:58.529458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.305 [2024-10-08 18:43:58.529486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.305 qpair failed and we were unable to recover it. 00:33:30.305 [2024-10-08 18:43:58.529621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.305 [2024-10-08 18:43:58.529678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.305 qpair failed and we were unable to recover it. 00:33:30.305 [2024-10-08 18:43:58.529816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.305 [2024-10-08 18:43:58.529843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.305 qpair failed and we were unable to recover it. 00:33:30.305 [2024-10-08 18:43:58.530007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.305 [2024-10-08 18:43:58.530037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.305 qpair failed and we were unable to recover it. 00:33:30.305 [2024-10-08 18:43:58.530133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.305 [2024-10-08 18:43:58.530160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.305 qpair failed and we were unable to recover it. 00:33:30.305 [2024-10-08 18:43:58.530301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.305 [2024-10-08 18:43:58.530328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.305 qpair failed and we were unable to recover it. 00:33:30.305 [2024-10-08 18:43:58.530484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.305 [2024-10-08 18:43:58.530511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.305 qpair failed and we were unable to recover it. 00:33:30.305 [2024-10-08 18:43:58.530643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.305 [2024-10-08 18:43:58.530680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.305 qpair failed and we were unable to recover it. 00:33:30.305 [2024-10-08 18:43:58.530808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.305 [2024-10-08 18:43:58.530835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.305 qpair failed and we were unable to recover it. 
00:33:30.305 [2024-10-08 18:43:58.530963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.305 [2024-10-08 18:43:58.530990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.305 qpair failed and we were unable to recover it. 00:33:30.305 [2024-10-08 18:43:58.531104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.305 [2024-10-08 18:43:58.531131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.305 qpair failed and we were unable to recover it. 00:33:30.305 [2024-10-08 18:43:58.531279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.305 [2024-10-08 18:43:58.531306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.305 qpair failed and we were unable to recover it. 00:33:30.305 [2024-10-08 18:43:58.531408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.305 [2024-10-08 18:43:58.531434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.305 qpair failed and we were unable to recover it. 00:33:30.305 [2024-10-08 18:43:58.531565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.305 [2024-10-08 18:43:58.531596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.305 qpair failed and we were unable to recover it. 00:33:30.306 [2024-10-08 18:43:58.531680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.306 [2024-10-08 18:43:58.531708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.306 qpair failed and we were unable to recover it. 00:33:30.306 [2024-10-08 18:43:58.531864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.306 [2024-10-08 18:43:58.531891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.306 qpair failed and we were unable to recover it. 00:33:30.306 [2024-10-08 18:43:58.532017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.306 [2024-10-08 18:43:58.532054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.306 qpair failed and we were unable to recover it. 00:33:30.306 [2024-10-08 18:43:58.532205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.306 [2024-10-08 18:43:58.532232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.306 qpair failed and we were unable to recover it. 00:33:30.306 [2024-10-08 18:43:58.532344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.306 [2024-10-08 18:43:58.532370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.306 qpair failed and we were unable to recover it. 
00:33:30.306 [2024-10-08 18:43:58.532526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.306 [2024-10-08 18:43:58.532552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.306 qpair failed and we were unable to recover it. 00:33:30.306 [2024-10-08 18:43:58.532673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.306 [2024-10-08 18:43:58.532701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.306 qpair failed and we were unable to recover it. 00:33:30.306 [2024-10-08 18:43:58.532862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.306 [2024-10-08 18:43:58.532913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.306 qpair failed and we were unable to recover it. 00:33:30.306 [2024-10-08 18:43:58.533044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.306 [2024-10-08 18:43:58.533092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.306 qpair failed and we were unable to recover it. 00:33:30.306 [2024-10-08 18:43:58.533245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.306 [2024-10-08 18:43:58.533271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.306 qpair failed and we were unable to recover it. 00:33:30.306 [2024-10-08 18:43:58.533370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.306 [2024-10-08 18:43:58.533396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.306 qpair failed and we were unable to recover it. 00:33:30.306 [2024-10-08 18:43:58.533528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.306 [2024-10-08 18:43:58.533555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.306 qpair failed and we were unable to recover it. 00:33:30.306 [2024-10-08 18:43:58.533673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.306 [2024-10-08 18:43:58.533700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.306 qpair failed and we were unable to recover it. 00:33:30.306 [2024-10-08 18:43:58.533867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.306 [2024-10-08 18:43:58.533914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.306 qpair failed and we were unable to recover it. 00:33:30.306 [2024-10-08 18:43:58.534041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.306 [2024-10-08 18:43:58.534088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.306 qpair failed and we were unable to recover it. 
00:33:30.306 [2024-10-08 18:43:58.534236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.306 [2024-10-08 18:43:58.534262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.306 qpair failed and we were unable to recover it. 00:33:30.306 [2024-10-08 18:43:58.534389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.306 [2024-10-08 18:43:58.534416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.306 qpair failed and we were unable to recover it. 00:33:30.306 [2024-10-08 18:43:58.534532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.306 [2024-10-08 18:43:58.534559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.306 qpair failed and we were unable to recover it. 00:33:30.306 [2024-10-08 18:43:58.534708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.306 [2024-10-08 18:43:58.534736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.306 qpair failed and we were unable to recover it. 00:33:30.306 [2024-10-08 18:43:58.534861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.306 [2024-10-08 18:43:58.534887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.306 qpair failed and we were unable to recover it. 00:33:30.306 [2024-10-08 18:43:58.535012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.306 [2024-10-08 18:43:58.535039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.306 qpair failed and we were unable to recover it. 00:33:30.306 [2024-10-08 18:43:58.535173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.306 [2024-10-08 18:43:58.535200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.306 qpair failed and we were unable to recover it. 00:33:30.306 [2024-10-08 18:43:58.535324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.306 [2024-10-08 18:43:58.535351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.306 qpair failed and we were unable to recover it. 00:33:30.306 [2024-10-08 18:43:58.535515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.306 [2024-10-08 18:43:58.535542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.306 qpair failed and we were unable to recover it. 00:33:30.306 [2024-10-08 18:43:58.535632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.306 [2024-10-08 18:43:58.535668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.306 qpair failed and we were unable to recover it. 
00:33:30.306 [2024-10-08 18:43:58.535822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.306 [2024-10-08 18:43:58.535848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.306 qpair failed and we were unable to recover it. 00:33:30.306 [2024-10-08 18:43:58.535987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.306 [2024-10-08 18:43:58.536013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.306 qpair failed and we were unable to recover it. 00:33:30.306 [2024-10-08 18:43:58.536117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.306 [2024-10-08 18:43:58.536143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.306 qpair failed and we were unable to recover it. 00:33:30.306 [2024-10-08 18:43:58.536256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.306 [2024-10-08 18:43:58.536283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.306 qpair failed and we were unable to recover it. 00:33:30.306 [2024-10-08 18:43:58.536409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.306 [2024-10-08 18:43:58.536435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.306 qpair failed and we were unable to recover it. 00:33:30.306 [2024-10-08 18:43:58.536600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.306 [2024-10-08 18:43:58.536627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.306 qpair failed and we were unable to recover it. 00:33:30.306 [2024-10-08 18:43:58.536778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.306 [2024-10-08 18:43:58.536826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.306 qpair failed and we were unable to recover it. 00:33:30.306 [2024-10-08 18:43:58.536955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.306 [2024-10-08 18:43:58.536981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.306 qpair failed and we were unable to recover it. 00:33:30.306 [2024-10-08 18:43:58.537131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.306 [2024-10-08 18:43:58.537158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.306 qpair failed and we were unable to recover it. 00:33:30.306 [2024-10-08 18:43:58.537258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.306 [2024-10-08 18:43:58.537285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.306 qpair failed and we were unable to recover it. 
00:33:30.306 [2024-10-08 18:43:58.537409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.306 [2024-10-08 18:43:58.537435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.306 qpair failed and we were unable to recover it. 00:33:30.306 [2024-10-08 18:43:58.537586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.306 [2024-10-08 18:43:58.537612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.306 qpair failed and we were unable to recover it. 00:33:30.306 [2024-10-08 18:43:58.537747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.306 [2024-10-08 18:43:58.537774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.306 qpair failed and we were unable to recover it. 00:33:30.306 [2024-10-08 18:43:58.537870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.306 [2024-10-08 18:43:58.537896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.306 qpair failed and we were unable to recover it. 00:33:30.306 [2024-10-08 18:43:58.538022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.306 [2024-10-08 18:43:58.538054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.306 qpair failed and we were unable to recover it. 00:33:30.306 [2024-10-08 18:43:58.538182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.306 [2024-10-08 18:43:58.538209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.306 qpair failed and we were unable to recover it. 00:33:30.306 [2024-10-08 18:43:58.538326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.306 [2024-10-08 18:43:58.538352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.306 qpair failed and we were unable to recover it. 00:33:30.306 [2024-10-08 18:43:58.538502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.306 [2024-10-08 18:43:58.538530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.306 qpair failed and we were unable to recover it. 00:33:30.306 [2024-10-08 18:43:58.538645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.306 [2024-10-08 18:43:58.538679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.306 qpair failed and we were unable to recover it. 00:33:30.306 [2024-10-08 18:43:58.538830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.306 [2024-10-08 18:43:58.538856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.306 qpair failed and we were unable to recover it. 
00:33:30.307 [2024-10-08 18:43:58.538980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.307 [2024-10-08 18:43:58.539006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.307 qpair failed and we were unable to recover it. 00:33:30.307 [2024-10-08 18:43:58.539155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.307 [2024-10-08 18:43:58.539182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.307 qpair failed and we were unable to recover it. 00:33:30.307 [2024-10-08 18:43:58.539268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.307 [2024-10-08 18:43:58.539294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.307 qpair failed and we were unable to recover it. 00:33:30.307 [2024-10-08 18:43:58.539400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.307 [2024-10-08 18:43:58.539426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.307 qpair failed and we were unable to recover it. 00:33:30.307 [2024-10-08 18:43:58.539576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.307 [2024-10-08 18:43:58.539603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.307 qpair failed and we were unable to recover it. 00:33:30.307 [2024-10-08 18:43:58.539752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.307 [2024-10-08 18:43:58.539800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.307 qpair failed and we were unable to recover it. 00:33:30.307 [2024-10-08 18:43:58.539928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.307 [2024-10-08 18:43:58.539955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.307 qpair failed and we were unable to recover it. 00:33:30.307 [2024-10-08 18:43:58.540043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.307 [2024-10-08 18:43:58.540070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.307 qpair failed and we were unable to recover it. 00:33:30.307 [2024-10-08 18:43:58.540201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.307 [2024-10-08 18:43:58.540228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.307 qpair failed and we were unable to recover it. 00:33:30.307 [2024-10-08 18:43:58.540359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.307 [2024-10-08 18:43:58.540385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.307 qpair failed and we were unable to recover it. 
00:33:30.307 [2024-10-08 18:43:58.540497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.307 [2024-10-08 18:43:58.540524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.307 qpair failed and we were unable to recover it. 00:33:30.307 [2024-10-08 18:43:58.540657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.307 [2024-10-08 18:43:58.540684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.307 qpair failed and we were unable to recover it. 00:33:30.307 [2024-10-08 18:43:58.540834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.307 [2024-10-08 18:43:58.540860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.307 qpair failed and we were unable to recover it. 00:33:30.307 [2024-10-08 18:43:58.540955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.307 [2024-10-08 18:43:58.540982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.307 qpair failed and we were unable to recover it. 00:33:30.307 [2024-10-08 18:43:58.541106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.307 [2024-10-08 18:43:58.541132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.307 qpair failed and we were unable to recover it. 00:33:30.307 [2024-10-08 18:43:58.541253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.307 [2024-10-08 18:43:58.541279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.307 qpair failed and we were unable to recover it. 00:33:30.307 [2024-10-08 18:43:58.541373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.307 [2024-10-08 18:43:58.541399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.307 qpair failed and we were unable to recover it. 00:33:30.307 [2024-10-08 18:43:58.541547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.307 [2024-10-08 18:43:58.541573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.307 qpair failed and we were unable to recover it. 00:33:30.307 [2024-10-08 18:43:58.541698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.307 [2024-10-08 18:43:58.541725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.307 qpair failed and we were unable to recover it. 00:33:30.307 [2024-10-08 18:43:58.541845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.307 [2024-10-08 18:43:58.541871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.307 qpair failed and we were unable to recover it. 
00:33:30.307 [2024-10-08 18:43:58.542017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.307 [2024-10-08 18:43:58.542043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.307 qpair failed and we were unable to recover it. 00:33:30.307 [2024-10-08 18:43:58.542199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.307 [2024-10-08 18:43:58.542226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.307 qpair failed and we were unable to recover it. 00:33:30.307 [2024-10-08 18:43:58.542386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.307 [2024-10-08 18:43:58.542412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.307 qpair failed and we were unable to recover it. 00:33:30.307 [2024-10-08 18:43:58.542543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.307 [2024-10-08 18:43:58.542569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.307 qpair failed and we were unable to recover it. 00:33:30.307 [2024-10-08 18:43:58.542687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.307 [2024-10-08 18:43:58.542713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.307 qpair failed and we were unable to recover it. 00:33:30.307 [2024-10-08 18:43:58.542841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.307 [2024-10-08 18:43:58.542889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.307 qpair failed and we were unable to recover it. 00:33:30.307 [2024-10-08 18:43:58.543064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.307 [2024-10-08 18:43:58.543110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.307 qpair failed and we were unable to recover it. 00:33:30.307 [2024-10-08 18:43:58.543259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.307 [2024-10-08 18:43:58.543286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.307 qpair failed and we were unable to recover it. 00:33:30.307 [2024-10-08 18:43:58.543405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.307 [2024-10-08 18:43:58.543431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.307 qpair failed and we were unable to recover it. 00:33:30.307 [2024-10-08 18:43:58.543528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.307 [2024-10-08 18:43:58.543554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.307 qpair failed and we were unable to recover it. 
00:33:30.307 [2024-10-08 18:43:58.543670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.307 [2024-10-08 18:43:58.543697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.307 qpair failed and we were unable to recover it. 00:33:30.307 [2024-10-08 18:43:58.543825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.307 [2024-10-08 18:43:58.543851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.307 qpair failed and we were unable to recover it. 00:33:30.307 [2024-10-08 18:43:58.543954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.307 [2024-10-08 18:43:58.543980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.307 qpair failed and we were unable to recover it. 00:33:30.307 [2024-10-08 18:43:58.544099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.307 [2024-10-08 18:43:58.544125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.307 qpair failed and we were unable to recover it. 00:33:30.307 [2024-10-08 18:43:58.544278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.307 [2024-10-08 18:43:58.544309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.307 qpair failed and we were unable to recover it. 00:33:30.307 [2024-10-08 18:43:58.544438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.307 [2024-10-08 18:43:58.544464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.307 qpair failed and we were unable to recover it. 00:33:30.307 [2024-10-08 18:43:58.544559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.307 [2024-10-08 18:43:58.544586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.307 qpair failed and we were unable to recover it. 00:33:30.307 [2024-10-08 18:43:58.544725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.307 [2024-10-08 18:43:58.544774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.307 qpair failed and we were unable to recover it. 00:33:30.307 [2024-10-08 18:43:58.544933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.307 [2024-10-08 18:43:58.544959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.307 qpair failed and we were unable to recover it. 00:33:30.307 [2024-10-08 18:43:58.545077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.307 [2024-10-08 18:43:58.545103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.307 qpair failed and we were unable to recover it. 
00:33:30.307 [2024-10-08 18:43:58.545233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.307 [2024-10-08 18:43:58.545259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.307 qpair failed and we were unable to recover it. 00:33:30.307 [2024-10-08 18:43:58.545386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.307 [2024-10-08 18:43:58.545412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.307 qpair failed and we were unable to recover it. 00:33:30.307 [2024-10-08 18:43:58.545501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.307 [2024-10-08 18:43:58.545528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.307 qpair failed and we were unable to recover it. 00:33:30.307 [2024-10-08 18:43:58.545665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.307 [2024-10-08 18:43:58.545692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.307 qpair failed and we were unable to recover it. 00:33:30.307 [2024-10-08 18:43:58.545853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.307 [2024-10-08 18:43:58.545901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.307 qpair failed and we were unable to recover it. 00:33:30.307 [2024-10-08 18:43:58.546054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.307 [2024-10-08 18:43:58.546080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.308 qpair failed and we were unable to recover it. 00:33:30.308 [2024-10-08 18:43:58.546231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.308 [2024-10-08 18:43:58.546257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.308 qpair failed and we were unable to recover it. 00:33:30.308 [2024-10-08 18:43:58.546351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.308 [2024-10-08 18:43:58.546377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.308 qpair failed and we were unable to recover it. 00:33:30.308 [2024-10-08 18:43:58.546501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.308 [2024-10-08 18:43:58.546528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.308 qpair failed and we were unable to recover it. 00:33:30.308 [2024-10-08 18:43:58.546660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.308 [2024-10-08 18:43:58.546687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.308 qpair failed and we were unable to recover it. 
00:33:30.308 [2024-10-08 18:43:58.546839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.308 [2024-10-08 18:43:58.546865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420
00:33:30.308 qpair failed and we were unable to recover it.
00:33:30.308 [2024-10-08 18:43:58.547016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.308 [2024-10-08 18:43:58.547042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420
00:33:30.308 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every reconnection attempt logged between 18:43:58.547 and 18:43:58.580 ...]
00:33:30.312 [2024-10-08 18:43:58.580521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.312 [2024-10-08 18:43:58.580547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420
00:33:30.312 qpair failed and we were unable to recover it.
00:33:30.312 [2024-10-08 18:43:58.580676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.312 [2024-10-08 18:43:58.580702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.312 qpair failed and we were unable to recover it. 00:33:30.312 [2024-10-08 18:43:58.580850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.312 [2024-10-08 18:43:58.580892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.312 qpair failed and we were unable to recover it. 00:33:30.312 [2024-10-08 18:43:58.581023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.312 [2024-10-08 18:43:58.581051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.312 qpair failed and we were unable to recover it. 00:33:30.312 [2024-10-08 18:43:58.581205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.312 [2024-10-08 18:43:58.581231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.312 qpair failed and we were unable to recover it. 00:33:30.312 [2024-10-08 18:43:58.581361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.312 [2024-10-08 18:43:58.581387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.312 qpair failed and we were unable to recover it. 00:33:30.312 [2024-10-08 18:43:58.581478] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:33:30.312 [2024-10-08 18:43:58.581514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.312 [2024-10-08 18:43:58.581541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.312 qpair failed and we were unable to recover it. 00:33:30.312 [2024-10-08 18:43:58.581566] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:30.312 [2024-10-08 18:43:58.581667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.312 [2024-10-08 18:43:58.581715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.312 qpair failed and we were unable to recover it. 00:33:30.312 [2024-10-08 18:43:58.581952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.312 [2024-10-08 18:43:58.581999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.312 qpair failed and we were unable to recover it. 00:33:30.312 [2024-10-08 18:43:58.582168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.312 [2024-10-08 18:43:58.582212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.312 qpair failed and we were unable to recover it. 
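The "[ DPDK EAL parameters: nvmf -c 0xF0 ... ]" line above is the nvmf target's startup configuration as recorded in the log. As a small worked example of one of those values (illustrative only, not SPDK or DPDK code): "-c 0xF0" is a hexadecimal coremask, and 0xF0 has bits 4-7 set, so the target application is pinned to CPU cores 4, 5, 6 and 7. A minimal C sketch that decodes such a mask:

/* Illustrative sketch only: decode a DPDK-style hex coremask such as the
 * "-c 0xF0" seen in the EAL parameters above (0xF0 -> cores 4..7). */
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    /* Default to the mask from the log; any hex mask may be passed as argv[1]. */
    unsigned long mask = (argc > 1) ? strtoul(argv[1], NULL, 16) : 0xF0;

    printf("coremask 0x%lX -> cores:", mask);
    for (unsigned int core = 0; core < 8 * sizeof(mask); core++) {
        if (mask & (1UL << core))
            printf(" %u", core);
    }
    printf("\n");
    return 0;
}

Run without arguments it prints "coremask 0xF0 -> cores: 4 5 6 7", matching the core assignment implied by the EAL parameters captured above.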
00:33:30.312 [2024-10-08 18:43:58.582374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.312 [2024-10-08 18:43:58.582421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.312 qpair failed and we were unable to recover it. 00:33:30.312 [2024-10-08 18:43:58.582570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.312 [2024-10-08 18:43:58.582597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.312 qpair failed and we were unable to recover it. 00:33:30.312 [2024-10-08 18:43:58.582749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.312 [2024-10-08 18:43:58.582801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.312 qpair failed and we were unable to recover it. 00:33:30.312 [2024-10-08 18:43:58.582956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.312 [2024-10-08 18:43:58.583005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.312 qpair failed and we were unable to recover it. 00:33:30.312 [2024-10-08 18:43:58.583130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.312 [2024-10-08 18:43:58.583160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.312 qpair failed and we were unable to recover it. 00:33:30.312 [2024-10-08 18:43:58.583322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.313 [2024-10-08 18:43:58.583348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.313 qpair failed and we were unable to recover it. 00:33:30.313 [2024-10-08 18:43:58.583471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.313 [2024-10-08 18:43:58.583512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.313 qpair failed and we were unable to recover it. 00:33:30.313 [2024-10-08 18:43:58.583662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.313 [2024-10-08 18:43:58.583690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.313 qpair failed and we were unable to recover it. 00:33:30.313 [2024-10-08 18:43:58.583842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.313 [2024-10-08 18:43:58.583869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.313 qpair failed and we were unable to recover it. 00:33:30.313 [2024-10-08 18:43:58.583960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.313 [2024-10-08 18:43:58.583987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.313 qpair failed and we were unable to recover it. 
00:33:30.313 [2024-10-08 18:43:58.584093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.313 [2024-10-08 18:43:58.584133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.313 qpair failed and we were unable to recover it. 00:33:30.313 [2024-10-08 18:43:58.584290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.313 [2024-10-08 18:43:58.584317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.313 qpair failed and we were unable to recover it. 00:33:30.313 [2024-10-08 18:43:58.584479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.313 [2024-10-08 18:43:58.584504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.313 qpair failed and we were unable to recover it. 00:33:30.313 [2024-10-08 18:43:58.584591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.313 [2024-10-08 18:43:58.584617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.313 qpair failed and we were unable to recover it. 00:33:30.313 [2024-10-08 18:43:58.584784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.313 [2024-10-08 18:43:58.584833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.313 qpair failed and we were unable to recover it. 00:33:30.313 [2024-10-08 18:43:58.585007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.313 [2024-10-08 18:43:58.585054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.313 qpair failed and we were unable to recover it. 00:33:30.313 [2024-10-08 18:43:58.585219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.313 [2024-10-08 18:43:58.585255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.313 qpair failed and we were unable to recover it. 00:33:30.313 [2024-10-08 18:43:58.585424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.313 [2024-10-08 18:43:58.585450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.313 qpair failed and we were unable to recover it. 00:33:30.313 [2024-10-08 18:43:58.585605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.313 [2024-10-08 18:43:58.585632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.313 qpair failed and we were unable to recover it. 00:33:30.313 [2024-10-08 18:43:58.585770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.313 [2024-10-08 18:43:58.585822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.313 qpair failed and we were unable to recover it. 
00:33:30.313 [2024-10-08 18:43:58.585969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.313 [2024-10-08 18:43:58.586023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.313 qpair failed and we were unable to recover it. 00:33:30.313 [2024-10-08 18:43:58.586164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.313 [2024-10-08 18:43:58.586219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.313 qpair failed and we were unable to recover it. 00:33:30.313 [2024-10-08 18:43:58.586361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.313 [2024-10-08 18:43:58.586388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.313 qpair failed and we were unable to recover it. 00:33:30.313 [2024-10-08 18:43:58.586503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.313 [2024-10-08 18:43:58.586529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.313 qpair failed and we were unable to recover it. 00:33:30.313 [2024-10-08 18:43:58.586664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.313 [2024-10-08 18:43:58.586692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.313 qpair failed and we were unable to recover it. 00:33:30.313 [2024-10-08 18:43:58.586786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.313 [2024-10-08 18:43:58.586814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.313 qpair failed and we were unable to recover it. 00:33:30.313 [2024-10-08 18:43:58.586906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.313 [2024-10-08 18:43:58.586932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.313 qpair failed and we were unable to recover it. 00:33:30.313 [2024-10-08 18:43:58.587021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.313 [2024-10-08 18:43:58.587047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.313 qpair failed and we were unable to recover it. 00:33:30.313 [2024-10-08 18:43:58.587170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.313 [2024-10-08 18:43:58.587197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.313 qpair failed and we were unable to recover it. 00:33:30.313 [2024-10-08 18:43:58.587285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.313 [2024-10-08 18:43:58.587312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.313 qpair failed and we were unable to recover it. 
00:33:30.313 [2024-10-08 18:43:58.587454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.313 [2024-10-08 18:43:58.587503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.313 qpair failed and we were unable to recover it. 00:33:30.313 [2024-10-08 18:43:58.587664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.313 [2024-10-08 18:43:58.587703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.313 qpair failed and we were unable to recover it. 00:33:30.313 [2024-10-08 18:43:58.587834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.313 [2024-10-08 18:43:58.587862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.313 qpair failed and we were unable to recover it. 00:33:30.313 [2024-10-08 18:43:58.588021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.313 [2024-10-08 18:43:58.588047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.313 qpair failed and we were unable to recover it. 00:33:30.313 [2024-10-08 18:43:58.588197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.313 [2024-10-08 18:43:58.588224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.313 qpair failed and we were unable to recover it. 00:33:30.313 [2024-10-08 18:43:58.588373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.313 [2024-10-08 18:43:58.588399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.313 qpair failed and we were unable to recover it. 00:33:30.313 [2024-10-08 18:43:58.588527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.313 [2024-10-08 18:43:58.588554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.313 qpair failed and we were unable to recover it. 00:33:30.313 [2024-10-08 18:43:58.588711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.313 [2024-10-08 18:43:58.588769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.313 qpair failed and we were unable to recover it. 00:33:30.313 [2024-10-08 18:43:58.588959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.313 [2024-10-08 18:43:58.589054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.313 qpair failed and we were unable to recover it. 00:33:30.313 [2024-10-08 18:43:58.589322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.313 [2024-10-08 18:43:58.589415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.313 qpair failed and we were unable to recover it. 
00:33:30.313 [2024-10-08 18:43:58.589640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.313 [2024-10-08 18:43:58.589673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.313 qpair failed and we were unable to recover it. 00:33:30.313 [2024-10-08 18:43:58.589767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.313 [2024-10-08 18:43:58.589793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.313 qpair failed and we were unable to recover it. 00:33:30.313 [2024-10-08 18:43:58.589972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.313 [2024-10-08 18:43:58.590017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.313 qpair failed and we were unable to recover it. 00:33:30.313 [2024-10-08 18:43:58.590125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.313 [2024-10-08 18:43:58.590161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.313 qpair failed and we were unable to recover it. 00:33:30.313 [2024-10-08 18:43:58.590333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.313 [2024-10-08 18:43:58.590380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.313 qpair failed and we were unable to recover it. 00:33:30.313 [2024-10-08 18:43:58.590524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.313 [2024-10-08 18:43:58.590550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.313 qpair failed and we were unable to recover it. 00:33:30.313 [2024-10-08 18:43:58.590677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.313 [2024-10-08 18:43:58.590745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.313 qpair failed and we were unable to recover it. 00:33:30.313 [2024-10-08 18:43:58.590946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.313 [2024-10-08 18:43:58.591038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.313 qpair failed and we were unable to recover it. 00:33:30.313 [2024-10-08 18:43:58.591334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.313 [2024-10-08 18:43:58.591423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.313 qpair failed and we were unable to recover it. 00:33:30.313 [2024-10-08 18:43:58.591640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.313 [2024-10-08 18:43:58.591673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.313 qpair failed and we were unable to recover it. 
00:33:30.313 [2024-10-08 18:43:58.591794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.313 [2024-10-08 18:43:58.591820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.313 qpair failed and we were unable to recover it. 00:33:30.313 [2024-10-08 18:43:58.591961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.313 [2024-10-08 18:43:58.591987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.313 qpair failed and we were unable to recover it. 00:33:30.313 [2024-10-08 18:43:58.592104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.313 [2024-10-08 18:43:58.592130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.313 qpair failed and we were unable to recover it. 00:33:30.313 [2024-10-08 18:43:58.592247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.313 [2024-10-08 18:43:58.592273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.313 qpair failed and we were unable to recover it. 00:33:30.314 [2024-10-08 18:43:58.592397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.314 [2024-10-08 18:43:58.592423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.314 qpair failed and we were unable to recover it. 00:33:30.314 [2024-10-08 18:43:58.592540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.314 [2024-10-08 18:43:58.592566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.314 qpair failed and we were unable to recover it. 00:33:30.314 [2024-10-08 18:43:58.592689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.314 [2024-10-08 18:43:58.592716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.314 qpair failed and we were unable to recover it. 00:33:30.314 [2024-10-08 18:43:58.592806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.314 [2024-10-08 18:43:58.592832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.314 qpair failed and we were unable to recover it. 00:33:30.314 [2024-10-08 18:43:58.592991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.314 [2024-10-08 18:43:58.593017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.314 qpair failed and we were unable to recover it. 00:33:30.314 [2024-10-08 18:43:58.593157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.314 [2024-10-08 18:43:58.593183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.314 qpair failed and we were unable to recover it. 
00:33:30.314 [2024-10-08 18:43:58.593308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.314 [2024-10-08 18:43:58.593334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.314 qpair failed and we were unable to recover it. 00:33:30.314 [2024-10-08 18:43:58.593438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.314 [2024-10-08 18:43:58.593464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.314 qpair failed and we were unable to recover it. 00:33:30.314 [2024-10-08 18:43:58.593588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.314 [2024-10-08 18:43:58.593615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.314 qpair failed and we were unable to recover it. 00:33:30.314 [2024-10-08 18:43:58.593738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.314 [2024-10-08 18:43:58.593765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.314 qpair failed and we were unable to recover it. 00:33:30.314 [2024-10-08 18:43:58.593893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.314 [2024-10-08 18:43:58.593919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.314 qpair failed and we were unable to recover it. 00:33:30.314 [2024-10-08 18:43:58.594058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.314 [2024-10-08 18:43:58.594084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.314 qpair failed and we were unable to recover it. 00:33:30.314 [2024-10-08 18:43:58.594192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.314 [2024-10-08 18:43:58.594218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.314 qpair failed and we were unable to recover it. 00:33:30.314 [2024-10-08 18:43:58.594334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.314 [2024-10-08 18:43:58.594360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.314 qpair failed and we were unable to recover it. 00:33:30.314 [2024-10-08 18:43:58.594456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.314 [2024-10-08 18:43:58.594483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.314 qpair failed and we were unable to recover it. 00:33:30.314 [2024-10-08 18:43:58.594633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.314 [2024-10-08 18:43:58.594673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.314 qpair failed and we were unable to recover it. 
00:33:30.314 [2024-10-08 18:43:58.594798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.314 [2024-10-08 18:43:58.594824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.314 qpair failed and we were unable to recover it. 00:33:30.314 [2024-10-08 18:43:58.594946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.314 [2024-10-08 18:43:58.594976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.314 qpair failed and we were unable to recover it. 00:33:30.314 [2024-10-08 18:43:58.595070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.314 [2024-10-08 18:43:58.595096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.314 qpair failed and we were unable to recover it. 00:33:30.314 [2024-10-08 18:43:58.595236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.314 [2024-10-08 18:43:58.595274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.314 qpair failed and we were unable to recover it. 00:33:30.314 [2024-10-08 18:43:58.595426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.314 [2024-10-08 18:43:58.595453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.314 qpair failed and we were unable to recover it. 00:33:30.314 [2024-10-08 18:43:58.595590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.314 [2024-10-08 18:43:58.595616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.314 qpair failed and we were unable to recover it. 00:33:30.314 [2024-10-08 18:43:58.595744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.314 [2024-10-08 18:43:58.595771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.314 qpair failed and we were unable to recover it. 00:33:30.314 [2024-10-08 18:43:58.595860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.314 [2024-10-08 18:43:58.595885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.314 qpair failed and we were unable to recover it. 00:33:30.314 [2024-10-08 18:43:58.596032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.314 [2024-10-08 18:43:58.596057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.314 qpair failed and we were unable to recover it. 00:33:30.314 [2024-10-08 18:43:58.596206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.314 [2024-10-08 18:43:58.596271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.314 qpair failed and we were unable to recover it. 
00:33:30.314 [2024-10-08 18:43:58.596472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.314 [2024-10-08 18:43:58.596537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.314 qpair failed and we were unable to recover it. 00:33:30.314 [2024-10-08 18:43:58.596779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.314 [2024-10-08 18:43:58.596806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.314 qpair failed and we were unable to recover it. 00:33:30.314 [2024-10-08 18:43:58.596957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.314 [2024-10-08 18:43:58.597022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.314 qpair failed and we were unable to recover it. 00:33:30.314 [2024-10-08 18:43:58.597252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.314 [2024-10-08 18:43:58.597318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.314 qpair failed and we were unable to recover it. 00:33:30.314 [2024-10-08 18:43:58.597497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.314 [2024-10-08 18:43:58.597562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.314 qpair failed and we were unable to recover it. 00:33:30.314 [2024-10-08 18:43:58.597781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.314 [2024-10-08 18:43:58.597809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.314 qpair failed and we were unable to recover it. 00:33:30.314 [2024-10-08 18:43:58.597942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.314 [2024-10-08 18:43:58.597990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.314 qpair failed and we were unable to recover it. 00:33:30.314 [2024-10-08 18:43:58.598163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.314 [2024-10-08 18:43:58.598212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.314 qpair failed and we were unable to recover it. 00:33:30.314 [2024-10-08 18:43:58.598328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.314 [2024-10-08 18:43:58.598387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.314 qpair failed and we were unable to recover it. 00:33:30.314 [2024-10-08 18:43:58.598521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.314 [2024-10-08 18:43:58.598546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.314 qpair failed and we were unable to recover it. 
00:33:30.314 [2024-10-08 18:43:58.598661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.314 [2024-10-08 18:43:58.598687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.314 qpair failed and we were unable to recover it. 00:33:30.314 [2024-10-08 18:43:58.598831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.314 [2024-10-08 18:43:58.598857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.314 qpair failed and we were unable to recover it. 00:33:30.314 [2024-10-08 18:43:58.598976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.314 [2024-10-08 18:43:58.599001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.314 qpair failed and we were unable to recover it. 00:33:30.314 [2024-10-08 18:43:58.599138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.314 [2024-10-08 18:43:58.599178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.314 qpair failed and we were unable to recover it. 00:33:30.314 [2024-10-08 18:43:58.599293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.314 [2024-10-08 18:43:58.599319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.314 qpair failed and we were unable to recover it. 00:33:30.314 [2024-10-08 18:43:58.599482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.314 [2024-10-08 18:43:58.599507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.314 qpair failed and we were unable to recover it. 00:33:30.314 [2024-10-08 18:43:58.599646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.314 [2024-10-08 18:43:58.599679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.314 qpair failed and we were unable to recover it. 00:33:30.314 [2024-10-08 18:43:58.599830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.314 [2024-10-08 18:43:58.599855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.314 qpair failed and we were unable to recover it. 00:33:30.314 [2024-10-08 18:43:58.600017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.314 [2024-10-08 18:43:58.600042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.314 qpair failed and we were unable to recover it. 00:33:30.314 [2024-10-08 18:43:58.600148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.314 [2024-10-08 18:43:58.600187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.314 qpair failed and we were unable to recover it. 
00:33:30.314 [2024-10-08 18:43:58.600310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.314 [2024-10-08 18:43:58.600335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.314 qpair failed and we were unable to recover it. 00:33:30.314 [2024-10-08 18:43:58.600483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.314 [2024-10-08 18:43:58.600508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.314 qpair failed and we were unable to recover it. 00:33:30.314 [2024-10-08 18:43:58.600664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.314 [2024-10-08 18:43:58.600690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.314 qpair failed and we were unable to recover it. 00:33:30.314 [2024-10-08 18:43:58.600815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.314 [2024-10-08 18:43:58.600841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.314 qpair failed and we were unable to recover it. 00:33:30.314 [2024-10-08 18:43:58.600970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.314 [2024-10-08 18:43:58.601010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.314 qpair failed and we were unable to recover it. 00:33:30.314 [2024-10-08 18:43:58.601154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.314 [2024-10-08 18:43:58.601178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.314 qpair failed and we were unable to recover it. 00:33:30.314 [2024-10-08 18:43:58.601304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.315 [2024-10-08 18:43:58.601330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.315 qpair failed and we were unable to recover it. 00:33:30.315 [2024-10-08 18:43:58.601450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.315 [2024-10-08 18:43:58.601475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.315 qpair failed and we were unable to recover it. 00:33:30.315 [2024-10-08 18:43:58.601637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.315 [2024-10-08 18:43:58.601686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.315 qpair failed and we were unable to recover it. 00:33:30.315 [2024-10-08 18:43:58.601820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.315 [2024-10-08 18:43:58.601845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.315 qpair failed and we were unable to recover it. 
00:33:30.315 [2024-10-08 18:43:58.601975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.315 [2024-10-08 18:43:58.601999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.315 qpair failed and we were unable to recover it. 00:33:30.315 [2024-10-08 18:43:58.602172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.315 [2024-10-08 18:43:58.602200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.315 qpair failed and we were unable to recover it. 00:33:30.315 [2024-10-08 18:43:58.602343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.315 [2024-10-08 18:43:58.602367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.315 qpair failed and we were unable to recover it. 00:33:30.315 [2024-10-08 18:43:58.602530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.315 [2024-10-08 18:43:58.602555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.315 qpair failed and we were unable to recover it. 00:33:30.315 [2024-10-08 18:43:58.602726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.315 [2024-10-08 18:43:58.602752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.315 qpair failed and we were unable to recover it. 00:33:30.315 [2024-10-08 18:43:58.602864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.315 [2024-10-08 18:43:58.602918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.315 qpair failed and we were unable to recover it. 00:33:30.315 [2024-10-08 18:43:58.603076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.315 [2024-10-08 18:43:58.603126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.315 qpair failed and we were unable to recover it. 00:33:30.315 [2024-10-08 18:43:58.603294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.315 [2024-10-08 18:43:58.603319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.315 qpair failed and we were unable to recover it. 00:33:30.315 [2024-10-08 18:43:58.603433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.315 [2024-10-08 18:43:58.603458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.315 qpair failed and we were unable to recover it. 00:33:30.315 [2024-10-08 18:43:58.603566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.315 [2024-10-08 18:43:58.603591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.315 qpair failed and we were unable to recover it. 
00:33:30.315 [2024-10-08 18:43:58.603753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.315 [2024-10-08 18:43:58.603791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.315 qpair failed and we were unable to recover it. 00:33:30.315 [2024-10-08 18:43:58.603954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.315 [2024-10-08 18:43:58.604022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.315 qpair failed and we were unable to recover it. 00:33:30.315 [2024-10-08 18:43:58.604254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.315 [2024-10-08 18:43:58.604319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.315 qpair failed and we were unable to recover it. 00:33:30.315 [2024-10-08 18:43:58.604495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.315 [2024-10-08 18:43:58.604561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.315 qpair failed and we were unable to recover it. 00:33:30.315 [2024-10-08 18:43:58.604776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.315 [2024-10-08 18:43:58.604804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.315 qpair failed and we were unable to recover it. 00:33:30.315 [2024-10-08 18:43:58.604962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.315 [2024-10-08 18:43:58.605018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.315 qpair failed and we were unable to recover it. 00:33:30.315 [2024-10-08 18:43:58.605202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.315 [2024-10-08 18:43:58.605251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.315 qpair failed and we were unable to recover it. 00:33:30.315 [2024-10-08 18:43:58.605403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.315 [2024-10-08 18:43:58.605427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.315 qpair failed and we were unable to recover it. 00:33:30.315 [2024-10-08 18:43:58.605564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.315 [2024-10-08 18:43:58.605589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d0000b90 with addr=10.0.0.2, port=4420 00:33:30.315 qpair failed and we were unable to recover it. 00:33:30.315 [2024-10-08 18:43:58.605821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.315 [2024-10-08 18:43:58.605921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.315 qpair failed and we were unable to recover it. 
00:33:30.315 [2024-10-08 18:43:58.606144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.315 [2024-10-08 18:43:58.606212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.315 qpair failed and we were unable to recover it. 00:33:30.315 [2024-10-08 18:43:58.606447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.315 [2024-10-08 18:43:58.606513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.315 qpair failed and we were unable to recover it. 00:33:30.315 [2024-10-08 18:43:58.606745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.315 [2024-10-08 18:43:58.606771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.315 qpair failed and we were unable to recover it. 00:33:30.315 [2024-10-08 18:43:58.606983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.315 [2024-10-08 18:43:58.607048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.315 qpair failed and we were unable to recover it. 00:33:30.315 [2024-10-08 18:43:58.607250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.315 [2024-10-08 18:43:58.607314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.315 qpair failed and we were unable to recover it. 00:33:30.315 [2024-10-08 18:43:58.607525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.315 [2024-10-08 18:43:58.607548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.315 qpair failed and we were unable to recover it. 00:33:30.315 [2024-10-08 18:43:58.607693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.315 [2024-10-08 18:43:58.607718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.315 qpair failed and we were unable to recover it. 00:33:30.315 [2024-10-08 18:43:58.607813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.315 [2024-10-08 18:43:58.607838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.315 qpair failed and we were unable to recover it. 00:33:30.315 [2024-10-08 18:43:58.608051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.315 [2024-10-08 18:43:58.608075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.315 qpair failed and we were unable to recover it. 00:33:30.315 [2024-10-08 18:43:58.608258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.315 [2024-10-08 18:43:58.608323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.315 qpair failed and we were unable to recover it. 
00:33:30.315 [2024-10-08 18:43:58.608525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.315 [2024-10-08 18:43:58.608589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.315 qpair failed and we were unable to recover it. 00:33:30.315 [2024-10-08 18:43:58.608824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.315 [2024-10-08 18:43:58.608851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.315 qpair failed and we were unable to recover it. 00:33:30.315 [2024-10-08 18:43:58.609068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.315 [2024-10-08 18:43:58.609133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.315 qpair failed and we were unable to recover it. 00:33:30.315 [2024-10-08 18:43:58.609335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.315 [2024-10-08 18:43:58.609400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.315 qpair failed and we were unable to recover it. 00:33:30.315 [2024-10-08 18:43:58.609576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.315 [2024-10-08 18:43:58.609614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.315 qpair failed and we were unable to recover it. 00:33:30.315 [2024-10-08 18:43:58.609763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.315 [2024-10-08 18:43:58.609789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.315 qpair failed and we were unable to recover it. 00:33:30.315 [2024-10-08 18:43:58.609913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.315 [2024-10-08 18:43:58.609975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.315 qpair failed and we were unable to recover it. 00:33:30.315 [2024-10-08 18:43:58.610215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.315 [2024-10-08 18:43:58.610280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.315 qpair failed and we were unable to recover it. 00:33:30.315 [2024-10-08 18:43:58.610485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.315 [2024-10-08 18:43:58.610550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.315 qpair failed and we were unable to recover it. 00:33:30.315 [2024-10-08 18:43:58.610778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.315 [2024-10-08 18:43:58.610804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.315 qpair failed and we were unable to recover it. 
00:33:30.315 [2024-10-08 18:43:58.610900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.315 [2024-10-08 18:43:58.610926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.315 qpair failed and we were unable to recover it. 00:33:30.315 [2024-10-08 18:43:58.611115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.315 [2024-10-08 18:43:58.611192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.315 qpair failed and we were unable to recover it. 00:33:30.315 [2024-10-08 18:43:58.611399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.315 [2024-10-08 18:43:58.611464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.315 qpair failed and we were unable to recover it. 00:33:30.315 [2024-10-08 18:43:58.611683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.315 [2024-10-08 18:43:58.611709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.315 qpair failed and we were unable to recover it. 00:33:30.315 [2024-10-08 18:43:58.611834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.315 [2024-10-08 18:43:58.611859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.315 qpair failed and we were unable to recover it. 00:33:30.315 [2024-10-08 18:43:58.612001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.315 [2024-10-08 18:43:58.612065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.315 qpair failed and we were unable to recover it. 00:33:30.315 [2024-10-08 18:43:58.612291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.315 [2024-10-08 18:43:58.612355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.315 qpair failed and we were unable to recover it. 00:33:30.315 [2024-10-08 18:43:58.612589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.315 [2024-10-08 18:43:58.612667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.315 qpair failed and we were unable to recover it. 00:33:30.315 [2024-10-08 18:43:58.612854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.316 [2024-10-08 18:43:58.612880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.316 qpair failed and we were unable to recover it. 00:33:30.316 [2024-10-08 18:43:58.612989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.316 [2024-10-08 18:43:58.613028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.316 qpair failed and we were unable to recover it. 
00:33:30.316 [2024-10-08 18:43:58.613163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.316 [2024-10-08 18:43:58.613201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.316 qpair failed and we were unable to recover it. 00:33:30.316 [2024-10-08 18:43:58.613335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.316 [2024-10-08 18:43:58.613400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.316 qpair failed and we were unable to recover it. 00:33:30.316 [2024-10-08 18:43:58.613632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.316 [2024-10-08 18:43:58.613676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.316 qpair failed and we were unable to recover it. 00:33:30.316 [2024-10-08 18:43:58.613778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.316 [2024-10-08 18:43:58.613804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.316 qpair failed and we were unable to recover it. 00:33:30.316 [2024-10-08 18:43:58.613932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.316 [2024-10-08 18:43:58.613972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.316 qpair failed and we were unable to recover it. 00:33:30.316 [2024-10-08 18:43:58.614172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.316 [2024-10-08 18:43:58.614196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.316 qpair failed and we were unable to recover it. 00:33:30.316 [2024-10-08 18:43:58.614379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.316 [2024-10-08 18:43:58.614443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.316 qpair failed and we were unable to recover it. 00:33:30.316 [2024-10-08 18:43:58.614677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.316 [2024-10-08 18:43:58.614721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.316 qpair failed and we were unable to recover it. 00:33:30.316 [2024-10-08 18:43:58.614818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.316 [2024-10-08 18:43:58.614843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.316 qpair failed and we were unable to recover it. 00:33:30.316 [2024-10-08 18:43:58.614988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.316 [2024-10-08 18:43:58.615012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.316 qpair failed and we were unable to recover it. 
00:33:30.316 [2024-10-08 18:43:58.615169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.316 [2024-10-08 18:43:58.615234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.316 qpair failed and we were unable to recover it. 00:33:30.316 [2024-10-08 18:43:58.615444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.316 [2024-10-08 18:43:58.615482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.316 qpair failed and we were unable to recover it. 00:33:30.316 [2024-10-08 18:43:58.615619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.316 [2024-10-08 18:43:58.615707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.316 qpair failed and we were unable to recover it. 00:33:30.316 [2024-10-08 18:43:58.615918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.316 [2024-10-08 18:43:58.615982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.316 qpair failed and we were unable to recover it. 00:33:30.316 [2024-10-08 18:43:58.616238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.316 [2024-10-08 18:43:58.616262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.316 qpair failed and we were unable to recover it. 00:33:30.316 [2024-10-08 18:43:58.616423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.316 [2024-10-08 18:43:58.616486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.316 qpair failed and we were unable to recover it. 00:33:30.316 [2024-10-08 18:43:58.616726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.316 [2024-10-08 18:43:58.616792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.316 qpair failed and we were unable to recover it. 00:33:30.316 [2024-10-08 18:43:58.617035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.316 [2024-10-08 18:43:58.617059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.316 qpair failed and we were unable to recover it. 00:33:30.316 [2024-10-08 18:43:58.617222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.316 [2024-10-08 18:43:58.617288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.316 qpair failed and we were unable to recover it. 00:33:30.316 [2024-10-08 18:43:58.617520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.316 [2024-10-08 18:43:58.617585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.316 qpair failed and we were unable to recover it. 
00:33:30.316 [2024-10-08 18:43:58.617836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.316 [2024-10-08 18:43:58.617862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.316 qpair failed and we were unable to recover it. 00:33:30.316 [2024-10-08 18:43:58.617986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.316 [2024-10-08 18:43:58.618025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.316 qpair failed and we were unable to recover it. 00:33:30.316 [2024-10-08 18:43:58.618190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.316 [2024-10-08 18:43:58.618255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.316 qpair failed and we were unable to recover it. 00:33:30.316 [2024-10-08 18:43:58.618462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.316 [2024-10-08 18:43:58.618485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.316 qpair failed and we were unable to recover it. 00:33:30.316 [2024-10-08 18:43:58.618646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.316 [2024-10-08 18:43:58.618737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.316 qpair failed and we were unable to recover it. 00:33:30.316 [2024-10-08 18:43:58.618955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.316 [2024-10-08 18:43:58.619020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.316 qpair failed and we were unable to recover it. 00:33:30.316 [2024-10-08 18:43:58.619225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.316 [2024-10-08 18:43:58.619248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.316 qpair failed and we were unable to recover it. 00:33:30.316 [2024-10-08 18:43:58.619409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.316 [2024-10-08 18:43:58.619473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.316 qpair failed and we were unable to recover it. 00:33:30.316 [2024-10-08 18:43:58.619648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.316 [2024-10-08 18:43:58.619726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.316 qpair failed and we were unable to recover it. 00:33:30.316 [2024-10-08 18:43:58.619971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.316 [2024-10-08 18:43:58.620009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.316 qpair failed and we were unable to recover it. 
00:33:30.316 [2024-10-08 18:43:58.620112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.316 [2024-10-08 18:43:58.620170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.316 qpair failed and we were unable to recover it. 00:33:30.316 [2024-10-08 18:43:58.620372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.316 [2024-10-08 18:43:58.620448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.316 qpair failed and we were unable to recover it. 00:33:30.316 [2024-10-08 18:43:58.620668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.316 [2024-10-08 18:43:58.620707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.316 qpair failed and we were unable to recover it. 00:33:30.316 [2024-10-08 18:43:58.620841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.316 [2024-10-08 18:43:58.620868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.316 qpair failed and we were unable to recover it. 00:33:30.316 [2024-10-08 18:43:58.621028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.316 [2024-10-08 18:43:58.621094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.316 qpair failed and we were unable to recover it. 00:33:30.316 [2024-10-08 18:43:58.621321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.316 [2024-10-08 18:43:58.621344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.316 qpair failed and we were unable to recover it. 00:33:30.316 [2024-10-08 18:43:58.621523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.316 [2024-10-08 18:43:58.621589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.316 qpair failed and we were unable to recover it. 00:33:30.316 [2024-10-08 18:43:58.621805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.316 [2024-10-08 18:43:58.621831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.316 qpair failed and we were unable to recover it. 00:33:30.316 [2024-10-08 18:43:58.621980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.316 [2024-10-08 18:43:58.622005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.316 qpair failed and we were unable to recover it. 00:33:30.316 [2024-10-08 18:43:58.622119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.316 [2024-10-08 18:43:58.622170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.316 qpair failed and we were unable to recover it. 
00:33:30.316 [2024-10-08 18:43:58.622370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.316 [2024-10-08 18:43:58.622435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.316 qpair failed and we were unable to recover it. 00:33:30.316 [2024-10-08 18:43:58.622661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.316 [2024-10-08 18:43:58.622700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.316 qpair failed and we were unable to recover it. 00:33:30.316 [2024-10-08 18:43:58.622878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.316 [2024-10-08 18:43:58.622943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.316 qpair failed and we were unable to recover it. 00:33:30.316 [2024-10-08 18:43:58.623157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.316 [2024-10-08 18:43:58.623236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.316 qpair failed and we were unable to recover it. 00:33:30.316 [2024-10-08 18:43:58.623486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.316 [2024-10-08 18:43:58.623509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.316 qpair failed and we were unable to recover it. 00:33:30.316 [2024-10-08 18:43:58.623646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.316 [2024-10-08 18:43:58.623731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.316 qpair failed and we were unable to recover it. 00:33:30.316 [2024-10-08 18:43:58.623950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.316 [2024-10-08 18:43:58.624014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.316 qpair failed and we were unable to recover it. 00:33:30.316 [2024-10-08 18:43:58.624241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.316 [2024-10-08 18:43:58.624264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.316 qpair failed and we were unable to recover it. 00:33:30.317 [2024-10-08 18:43:58.624373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.317 [2024-10-08 18:43:58.624397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.317 qpair failed and we were unable to recover it. 00:33:30.317 [2024-10-08 18:43:58.624556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.317 [2024-10-08 18:43:58.624621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.317 qpair failed and we were unable to recover it. 
00:33:30.317 [2024-10-08 18:43:58.624858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.317 [2024-10-08 18:43:58.624884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.317 qpair failed and we were unable to recover it. 00:33:30.317 [2024-10-08 18:43:58.625009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.317 [2024-10-08 18:43:58.625033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.317 qpair failed and we were unable to recover it. 00:33:30.317 [2024-10-08 18:43:58.625244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.317 [2024-10-08 18:43:58.625308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.317 qpair failed and we were unable to recover it. 00:33:30.317 [2024-10-08 18:43:58.625523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.317 [2024-10-08 18:43:58.625546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.317 qpair failed and we were unable to recover it. 00:33:30.317 [2024-10-08 18:43:58.625680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.317 [2024-10-08 18:43:58.625705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.317 qpair failed and we were unable to recover it. 00:33:30.317 [2024-10-08 18:43:58.625914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.317 [2024-10-08 18:43:58.625979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.317 qpair failed and we were unable to recover it. 00:33:30.317 [2024-10-08 18:43:58.626202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.317 [2024-10-08 18:43:58.626226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.317 qpair failed and we were unable to recover it. 00:33:30.317 [2024-10-08 18:43:58.626412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.317 [2024-10-08 18:43:58.626475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.317 qpair failed and we were unable to recover it. 00:33:30.317 [2024-10-08 18:43:58.626717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.317 [2024-10-08 18:43:58.626783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.317 qpair failed and we were unable to recover it. 00:33:30.317 [2024-10-08 18:43:58.627030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.317 [2024-10-08 18:43:58.627054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.317 qpair failed and we were unable to recover it. 
00:33:30.317 [2024-10-08 18:43:58.627203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.317 [2024-10-08 18:43:58.627267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.317 qpair failed and we were unable to recover it. 00:33:30.317 [2024-10-08 18:43:58.627476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.317 [2024-10-08 18:43:58.627541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.317 qpair failed and we were unable to recover it. 00:33:30.317 [2024-10-08 18:43:58.627775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.317 [2024-10-08 18:43:58.627800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.317 qpair failed and we were unable to recover it. 00:33:30.317 [2024-10-08 18:43:58.627971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.317 [2024-10-08 18:43:58.628035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.317 qpair failed and we were unable to recover it. 00:33:30.317 [2024-10-08 18:43:58.628273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.317 [2024-10-08 18:43:58.628338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.317 qpair failed and we were unable to recover it. 00:33:30.317 [2024-10-08 18:43:58.628562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.317 [2024-10-08 18:43:58.628627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.317 qpair failed and we were unable to recover it. 00:33:30.317 [2024-10-08 18:43:58.628820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.317 [2024-10-08 18:43:58.628845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.317 qpair failed and we were unable to recover it. 00:33:30.317 [2024-10-08 18:43:58.628996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.317 [2024-10-08 18:43:58.629061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.317 qpair failed and we were unable to recover it. 00:33:30.317 [2024-10-08 18:43:58.629275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.317 [2024-10-08 18:43:58.629298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.317 qpair failed and we were unable to recover it. 00:33:30.317 [2024-10-08 18:43:58.629455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.317 [2024-10-08 18:43:58.629517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.317 qpair failed and we were unable to recover it. 
00:33:30.317 [2024-10-08 18:43:58.629750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.317 [2024-10-08 18:43:58.629816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.317 qpair failed and we were unable to recover it. 00:33:30.317 [2024-10-08 18:43:58.630058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.317 [2024-10-08 18:43:58.630102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.317 qpair failed and we were unable to recover it. 00:33:30.317 [2024-10-08 18:43:58.630195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.317 [2024-10-08 18:43:58.630221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.317 qpair failed and we were unable to recover it. 00:33:30.317 [2024-10-08 18:43:58.630372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.317 [2024-10-08 18:43:58.630407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.317 qpair failed and we were unable to recover it. 00:33:30.317 [2024-10-08 18:43:58.630554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.317 [2024-10-08 18:43:58.630594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.317 qpair failed and we were unable to recover it. 00:33:30.317 [2024-10-08 18:43:58.630742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.317 [2024-10-08 18:43:58.630780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.317 qpair failed and we were unable to recover it. 00:33:30.317 [2024-10-08 18:43:58.630973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.317 [2024-10-08 18:43:58.631037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.317 qpair failed and we were unable to recover it. 00:33:30.317 [2024-10-08 18:43:58.631316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.317 [2024-10-08 18:43:58.631340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.317 qpair failed and we were unable to recover it. 00:33:30.317 [2024-10-08 18:43:58.631486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.317 [2024-10-08 18:43:58.631534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.317 qpair failed and we were unable to recover it. 00:33:30.317 [2024-10-08 18:43:58.631767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.317 [2024-10-08 18:43:58.631833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.317 qpair failed and we were unable to recover it. 
00:33:30.317 [2024-10-08 18:43:58.632059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.317 [2024-10-08 18:43:58.632084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.317 qpair failed and we were unable to recover it. 00:33:30.317 [2024-10-08 18:43:58.632226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.317 [2024-10-08 18:43:58.632266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.317 qpair failed and we were unable to recover it. 00:33:30.317 [2024-10-08 18:43:58.632480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.317 [2024-10-08 18:43:58.632545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.317 qpair failed and we were unable to recover it. 00:33:30.317 [2024-10-08 18:43:58.632766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.317 [2024-10-08 18:43:58.632793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.317 qpair failed and we were unable to recover it. 00:33:30.317 [2024-10-08 18:43:58.632940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.317 [2024-10-08 18:43:58.632992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.317 qpair failed and we were unable to recover it. 00:33:30.317 [2024-10-08 18:43:58.633260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.317 [2024-10-08 18:43:58.633325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.317 qpair failed and we were unable to recover it. 00:33:30.317 [2024-10-08 18:43:58.633545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.317 [2024-10-08 18:43:58.633569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.317 qpair failed and we were unable to recover it. 00:33:30.317 [2024-10-08 18:43:58.633758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.317 [2024-10-08 18:43:58.633823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.317 qpair failed and we were unable to recover it. 00:33:30.317 [2024-10-08 18:43:58.634062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.317 [2024-10-08 18:43:58.634126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.317 qpair failed and we were unable to recover it. 00:33:30.317 [2024-10-08 18:43:58.634374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.317 [2024-10-08 18:43:58.634398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.317 qpair failed and we were unable to recover it. 
00:33:30.317 [2024-10-08 18:43:58.634580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.317 [2024-10-08 18:43:58.634644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.317 qpair failed and we were unable to recover it. 00:33:30.317 [2024-10-08 18:43:58.634901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.317 [2024-10-08 18:43:58.634966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.317 qpair failed and we were unable to recover it. 00:33:30.317 [2024-10-08 18:43:58.635234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.317 [2024-10-08 18:43:58.635257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.317 qpair failed and we were unable to recover it. 00:33:30.317 [2024-10-08 18:43:58.635412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.317 [2024-10-08 18:43:58.635478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.317 qpair failed and we were unable to recover it. 00:33:30.317 [2024-10-08 18:43:58.635737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.317 [2024-10-08 18:43:58.635763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.317 qpair failed and we were unable to recover it. 00:33:30.317 [2024-10-08 18:43:58.635903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.317 [2024-10-08 18:43:58.635927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.317 qpair failed and we were unable to recover it. 00:33:30.317 [2024-10-08 18:43:58.636044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.317 [2024-10-08 18:43:58.636093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.317 qpair failed and we were unable to recover it. 00:33:30.317 [2024-10-08 18:43:58.636295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.317 [2024-10-08 18:43:58.636360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.317 qpair failed and we were unable to recover it. 00:33:30.317 [2024-10-08 18:43:58.636589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.317 [2024-10-08 18:43:58.636613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.317 qpair failed and we were unable to recover it. 00:33:30.317 [2024-10-08 18:43:58.636811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.317 [2024-10-08 18:43:58.636876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.317 qpair failed and we were unable to recover it. 
00:33:30.317 [2024-10-08 18:43:58.637124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.317 [2024-10-08 18:43:58.637189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.317 qpair failed and we were unable to recover it. 00:33:30.317 [2024-10-08 18:43:58.637430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.317 [2024-10-08 18:43:58.637454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.317 qpair failed and we were unable to recover it. 00:33:30.317 [2024-10-08 18:43:58.637618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.317 [2024-10-08 18:43:58.637716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.317 qpair failed and we were unable to recover it. 00:33:30.317 [2024-10-08 18:43:58.637932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.317 [2024-10-08 18:43:58.637997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.317 qpair failed and we were unable to recover it. 00:33:30.318 [2024-10-08 18:43:58.638266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.318 [2024-10-08 18:43:58.638290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.318 qpair failed and we were unable to recover it. 00:33:30.318 [2024-10-08 18:43:58.638449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.318 [2024-10-08 18:43:58.638512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.318 qpair failed and we were unable to recover it. 00:33:30.318 [2024-10-08 18:43:58.638767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.318 [2024-10-08 18:43:58.638834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.318 qpair failed and we were unable to recover it. 00:33:30.318 [2024-10-08 18:43:58.639088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.318 [2024-10-08 18:43:58.639112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.318 qpair failed and we were unable to recover it. 00:33:30.318 [2024-10-08 18:43:58.639305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.318 [2024-10-08 18:43:58.639369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.318 qpair failed and we were unable to recover it. 00:33:30.318 [2024-10-08 18:43:58.639617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.318 [2024-10-08 18:43:58.639701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.318 qpair failed and we were unable to recover it. 
00:33:30.318 [2024-10-08 18:43:58.639920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.318 [2024-10-08 18:43:58.639944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.318 qpair failed and we were unable to recover it. 00:33:30.318 [2024-10-08 18:43:58.640121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.318 [2024-10-08 18:43:58.640203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.318 qpair failed and we were unable to recover it. 00:33:30.318 [2024-10-08 18:43:58.640418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.318 [2024-10-08 18:43:58.640484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.318 qpair failed and we were unable to recover it. 00:33:30.318 [2024-10-08 18:43:58.640741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.318 [2024-10-08 18:43:58.640767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.318 qpair failed and we were unable to recover it. 00:33:30.318 [2024-10-08 18:43:58.640914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.318 [2024-10-08 18:43:58.640979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.318 qpair failed and we were unable to recover it. 00:33:30.318 [2024-10-08 18:43:58.641205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.318 [2024-10-08 18:43:58.641278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.318 qpair failed and we were unable to recover it. 00:33:30.318 [2024-10-08 18:43:58.641490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.318 [2024-10-08 18:43:58.641514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.318 qpair failed and we were unable to recover it. 00:33:30.318 [2024-10-08 18:43:58.641713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.318 [2024-10-08 18:43:58.641779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.318 qpair failed and we were unable to recover it. 00:33:30.318 [2024-10-08 18:43:58.641985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.318 [2024-10-08 18:43:58.642051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.318 qpair failed and we were unable to recover it. 00:33:30.318 [2024-10-08 18:43:58.642305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.318 [2024-10-08 18:43:58.642329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.318 qpair failed and we were unable to recover it. 
00:33:30.318 [2024-10-08 18:43:58.642476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.318 [2024-10-08 18:43:58.642541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.318 qpair failed and we were unable to recover it. 00:33:30.318 [2024-10-08 18:43:58.642781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.318 [2024-10-08 18:43:58.642808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.318 qpair failed and we were unable to recover it. 00:33:30.318 [2024-10-08 18:43:58.643003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.318 [2024-10-08 18:43:58.643027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.318 qpair failed and we were unable to recover it. 00:33:30.318 [2024-10-08 18:43:58.643189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.318 [2024-10-08 18:43:58.643258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.318 qpair failed and we were unable to recover it. 00:33:30.318 [2024-10-08 18:43:58.643513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.318 [2024-10-08 18:43:58.643577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.318 qpair failed and we were unable to recover it. 00:33:30.318 [2024-10-08 18:43:58.643807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.318 [2024-10-08 18:43:58.643834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.318 qpair failed and we were unable to recover it. 00:33:30.318 [2024-10-08 18:43:58.643995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.318 [2024-10-08 18:43:58.644043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.318 qpair failed and we were unable to recover it. 00:33:30.318 [2024-10-08 18:43:58.644227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.318 [2024-10-08 18:43:58.644292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.318 qpair failed and we were unable to recover it. 00:33:30.318 [2024-10-08 18:43:58.644549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.318 [2024-10-08 18:43:58.644574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.318 qpair failed and we were unable to recover it. 00:33:30.318 [2024-10-08 18:43:58.644728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.318 [2024-10-08 18:43:58.644768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.318 qpair failed and we were unable to recover it. 
00:33:30.318 [2024-10-08 18:43:58.644982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.318 [2024-10-08 18:43:58.645046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.318 qpair failed and we were unable to recover it. 00:33:30.318 [2024-10-08 18:43:58.645304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.318 [2024-10-08 18:43:58.645328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.318 qpair failed and we were unable to recover it. 00:33:30.318 [2024-10-08 18:43:58.645492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.318 [2024-10-08 18:43:58.645556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.318 qpair failed and we were unable to recover it. 00:33:30.318 [2024-10-08 18:43:58.645837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.318 [2024-10-08 18:43:58.645902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.318 qpair failed and we were unable to recover it. 00:33:30.318 [2024-10-08 18:43:58.646137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.318 [2024-10-08 18:43:58.646161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.318 qpair failed and we were unable to recover it. 00:33:30.318 [2024-10-08 18:43:58.646332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.318 [2024-10-08 18:43:58.646396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.318 qpair failed and we were unable to recover it. 00:33:30.318 [2024-10-08 18:43:58.646604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.318 [2024-10-08 18:43:58.646683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.318 qpair failed and we were unable to recover it. 00:33:30.318 [2024-10-08 18:43:58.646925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.318 [2024-10-08 18:43:58.646950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.318 qpair failed and we were unable to recover it. 00:33:30.318 [2024-10-08 18:43:58.647142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.318 [2024-10-08 18:43:58.647207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.318 qpair failed and we were unable to recover it. 00:33:30.318 [2024-10-08 18:43:58.647420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.318 [2024-10-08 18:43:58.647485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.318 qpair failed and we were unable to recover it. 
00:33:30.318 [2024-10-08 18:43:58.647746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.318 [2024-10-08 18:43:58.647772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.318 qpair failed and we were unable to recover it. 00:33:30.318 [2024-10-08 18:43:58.647944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.318 [2024-10-08 18:43:58.648008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.318 qpair failed and we were unable to recover it. 00:33:30.318 [2024-10-08 18:43:58.648247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.318 [2024-10-08 18:43:58.648312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.318 qpair failed and we were unable to recover it. 00:33:30.318 [2024-10-08 18:43:58.648493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.318 [2024-10-08 18:43:58.648517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.318 qpair failed and we were unable to recover it. 00:33:30.318 [2024-10-08 18:43:58.648618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.318 [2024-10-08 18:43:58.648665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.318 qpair failed and we were unable to recover it. 00:33:30.318 [2024-10-08 18:43:58.648824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.318 [2024-10-08 18:43:58.648889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.318 qpair failed and we were unable to recover it. 00:33:30.318 [2024-10-08 18:43:58.649126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.318 [2024-10-08 18:43:58.649151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.318 qpair failed and we were unable to recover it. 00:33:30.318 [2024-10-08 18:43:58.649335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.318 [2024-10-08 18:43:58.649399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.318 qpair failed and we were unable to recover it. 00:33:30.318 [2024-10-08 18:43:58.649637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.318 [2024-10-08 18:43:58.649727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.318 qpair failed and we were unable to recover it. 00:33:30.318 [2024-10-08 18:43:58.649880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.318 [2024-10-08 18:43:58.649904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.318 qpair failed and we were unable to recover it. 
00:33:30.318 [2024-10-08 18:43:58.650085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.318 [2024-10-08 18:43:58.650150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.318 qpair failed and we were unable to recover it. 00:33:30.318 [2024-10-08 18:43:58.650380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.318 [2024-10-08 18:43:58.650456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.318 qpair failed and we were unable to recover it. 00:33:30.318 [2024-10-08 18:43:58.650730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.318 [2024-10-08 18:43:58.650756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.318 qpair failed and we were unable to recover it. 00:33:30.318 [2024-10-08 18:43:58.650932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.318 [2024-10-08 18:43:58.650997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.318 qpair failed and we were unable to recover it. 00:33:30.318 [2024-10-08 18:43:58.651249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.318 [2024-10-08 18:43:58.651314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.318 qpair failed and we were unable to recover it. 00:33:30.318 [2024-10-08 18:43:58.651480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.318 [2024-10-08 18:43:58.651504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.318 qpair failed and we were unable to recover it. 00:33:30.318 [2024-10-08 18:43:58.651674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.318 [2024-10-08 18:43:58.651717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.318 qpair failed and we were unable to recover it. 00:33:30.318 [2024-10-08 18:43:58.651957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.318 [2024-10-08 18:43:58.652023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.318 qpair failed and we were unable to recover it. 00:33:30.318 [2024-10-08 18:43:58.652275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.318 [2024-10-08 18:43:58.652299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.318 qpair failed and we were unable to recover it. 00:33:30.318 [2024-10-08 18:43:58.652464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.318 [2024-10-08 18:43:58.652528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.318 qpair failed and we were unable to recover it. 
00:33:30.319 [2024-10-08 18:43:58.652746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.319 [2024-10-08 18:43:58.652812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.319 qpair failed and we were unable to recover it. 00:33:30.319 [2024-10-08 18:43:58.653066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.319 [2024-10-08 18:43:58.653090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.319 qpair failed and we were unable to recover it. 00:33:30.319 [2024-10-08 18:43:58.653245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.319 [2024-10-08 18:43:58.653309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.319 qpair failed and we were unable to recover it. 00:33:30.319 [2024-10-08 18:43:58.653540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.319 [2024-10-08 18:43:58.653606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.319 qpair failed and we were unable to recover it. 00:33:30.319 [2024-10-08 18:43:58.653874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.319 [2024-10-08 18:43:58.653898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.319 qpair failed and we were unable to recover it. 00:33:30.319 [2024-10-08 18:43:58.654050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.319 [2024-10-08 18:43:58.654115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.319 qpair failed and we were unable to recover it. 00:33:30.319 [2024-10-08 18:43:58.654356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.319 [2024-10-08 18:43:58.654420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.319 qpair failed and we were unable to recover it. 00:33:30.319 [2024-10-08 18:43:58.654634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.319 [2024-10-08 18:43:58.654679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.319 qpair failed and we were unable to recover it. 00:33:30.319 [2024-10-08 18:43:58.654853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.319 [2024-10-08 18:43:58.654877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.319 qpair failed and we were unable to recover it. 00:33:30.319 [2024-10-08 18:43:58.655095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.319 [2024-10-08 18:43:58.655158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.319 qpair failed and we were unable to recover it. 
00:33:30.319 [2024-10-08 18:43:58.655419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.319 [2024-10-08 18:43:58.655443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.319 qpair failed and we were unable to recover it. 00:33:30.319 [2024-10-08 18:43:58.655605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.319 [2024-10-08 18:43:58.655686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.319 qpair failed and we were unable to recover it. 00:33:30.319 [2024-10-08 18:43:58.655942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.319 [2024-10-08 18:43:58.656006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.319 qpair failed and we were unable to recover it. 00:33:30.319 [2024-10-08 18:43:58.656274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.319 [2024-10-08 18:43:58.656299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.319 qpair failed and we were unable to recover it. 00:33:30.319 [2024-10-08 18:43:58.656495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.319 [2024-10-08 18:43:58.656559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.319 qpair failed and we were unable to recover it. 00:33:30.319 [2024-10-08 18:43:58.656807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.319 [2024-10-08 18:43:58.656833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.319 qpair failed and we were unable to recover it. 00:33:30.319 [2024-10-08 18:43:58.656963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.319 [2024-10-08 18:43:58.656988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.319 qpair failed and we were unable to recover it. 00:33:30.319 [2024-10-08 18:43:58.657108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.319 [2024-10-08 18:43:58.657133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.319 qpair failed and we were unable to recover it. 00:33:30.319 [2024-10-08 18:43:58.657284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.319 [2024-10-08 18:43:58.657349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.319 qpair failed and we were unable to recover it. 00:33:30.319 [2024-10-08 18:43:58.657592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.319 [2024-10-08 18:43:58.657616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.319 qpair failed and we were unable to recover it. 
00:33:30.319 [2024-10-08 18:43:58.657823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.319 [2024-10-08 18:43:58.657889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.319 qpair failed and we were unable to recover it. 00:33:30.319 [2024-10-08 18:43:58.658135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.319 [2024-10-08 18:43:58.658200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.319 qpair failed and we were unable to recover it. 00:33:30.319 [2024-10-08 18:43:58.658432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.319 [2024-10-08 18:43:58.658455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.319 qpair failed and we were unable to recover it. 00:33:30.319 [2024-10-08 18:43:58.658634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.319 [2024-10-08 18:43:58.658719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.319 qpair failed and we were unable to recover it. 00:33:30.319 [2024-10-08 18:43:58.658968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.319 [2024-10-08 18:43:58.659032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.319 qpair failed and we were unable to recover it. 00:33:30.319 [2024-10-08 18:43:58.659249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.319 [2024-10-08 18:43:58.659273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.319 qpair failed and we were unable to recover it. 00:33:30.319 [2024-10-08 18:43:58.659467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.319 [2024-10-08 18:43:58.659531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.319 qpair failed and we were unable to recover it. 00:33:30.319 [2024-10-08 18:43:58.659738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.319 [2024-10-08 18:43:58.659804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.319 qpair failed and we were unable to recover it. 00:33:30.319 [2024-10-08 18:43:58.660059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.319 [2024-10-08 18:43:58.660083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.319 qpair failed and we were unable to recover it. 00:33:30.319 [2024-10-08 18:43:58.660235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.319 [2024-10-08 18:43:58.660299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.319 qpair failed and we were unable to recover it. 
00:33:30.319 [2024-10-08 18:43:58.660515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.319 [2024-10-08 18:43:58.660579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.319 qpair failed and we were unable to recover it. 00:33:30.319 [2024-10-08 18:43:58.660846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.319 [2024-10-08 18:43:58.660876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.319 qpair failed and we were unable to recover it. 00:33:30.319 [2024-10-08 18:43:58.661023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.319 [2024-10-08 18:43:58.661087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.319 qpair failed and we were unable to recover it. 00:33:30.319 [2024-10-08 18:43:58.661297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.319 [2024-10-08 18:43:58.661362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.319 qpair failed and we were unable to recover it. 00:33:30.319 [2024-10-08 18:43:58.661622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.319 [2024-10-08 18:43:58.661646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.319 qpair failed and we were unable to recover it. 00:33:30.319 [2024-10-08 18:43:58.661757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.319 [2024-10-08 18:43:58.661831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.319 qpair failed and we were unable to recover it. 00:33:30.319 [2024-10-08 18:43:58.662078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.319 [2024-10-08 18:43:58.662143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.319 qpair failed and we were unable to recover it. 00:33:30.319 [2024-10-08 18:43:58.662352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.319 [2024-10-08 18:43:58.662376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.319 qpair failed and we were unable to recover it. 00:33:30.319 [2024-10-08 18:43:58.662517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.319 [2024-10-08 18:43:58.662566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.319 qpair failed and we were unable to recover it. 00:33:30.319 [2024-10-08 18:43:58.662820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.319 [2024-10-08 18:43:58.662886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.319 qpair failed and we were unable to recover it. 
00:33:30.319 [2024-10-08 18:43:58.663135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.319 [2024-10-08 18:43:58.663159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.319 qpair failed and we were unable to recover it. 00:33:30.319 [2024-10-08 18:43:58.663269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.319 [2024-10-08 18:43:58.663308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.319 qpair failed and we were unable to recover it. 00:33:30.319 [2024-10-08 18:43:58.663430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.319 [2024-10-08 18:43:58.663495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.319 qpair failed and we were unable to recover it. 00:33:30.319 [2024-10-08 18:43:58.663757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.319 [2024-10-08 18:43:58.663783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.319 qpair failed and we were unable to recover it. 00:33:30.319 [2024-10-08 18:43:58.663901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.319 [2024-10-08 18:43:58.663949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.319 qpair failed and we were unable to recover it. 00:33:30.319 [2024-10-08 18:43:58.664177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.319 [2024-10-08 18:43:58.664248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.319 qpair failed and we were unable to recover it. 00:33:30.319 [2024-10-08 18:43:58.664519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.319 [2024-10-08 18:43:58.664584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.319 qpair failed and we were unable to recover it. 00:33:30.319 [2024-10-08 18:43:58.664819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.319 [2024-10-08 18:43:58.664845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.319 qpair failed and we were unable to recover it. 00:33:30.319 [2024-10-08 18:43:58.665044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.319 [2024-10-08 18:43:58.665109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.319 qpair failed and we were unable to recover it. 00:33:30.319 [2024-10-08 18:43:58.665339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.319 [2024-10-08 18:43:58.665364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.319 qpair failed and we were unable to recover it. 
00:33:30.319 [2024-10-08 18:43:58.665532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.319 [2024-10-08 18:43:58.665596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.319 qpair failed and we were unable to recover it. 00:33:30.319 [2024-10-08 18:43:58.665844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.320 [2024-10-08 18:43:58.665871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.320 qpair failed and we were unable to recover it. 00:33:30.320 [2024-10-08 18:43:58.666050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.320 [2024-10-08 18:43:58.666074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.320 qpair failed and we were unable to recover it. 00:33:30.320 [2024-10-08 18:43:58.666267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.320 [2024-10-08 18:43:58.666331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.320 qpair failed and we were unable to recover it. 00:33:30.320 [2024-10-08 18:43:58.666542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.320 [2024-10-08 18:43:58.666607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.320 qpair failed and we were unable to recover it. 00:33:30.320 [2024-10-08 18:43:58.666848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.320 [2024-10-08 18:43:58.666873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.320 qpair failed and we were unable to recover it. 00:33:30.320 [2024-10-08 18:43:58.666975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.320 [2024-10-08 18:43:58.667000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.320 qpair failed and we were unable to recover it. 00:33:30.320 [2024-10-08 18:43:58.667233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.320 [2024-10-08 18:43:58.667299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.320 qpair failed and we were unable to recover it. 00:33:30.320 [2024-10-08 18:43:58.667540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.320 [2024-10-08 18:43:58.667565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.320 qpair failed and we were unable to recover it. 00:33:30.320 [2024-10-08 18:43:58.667734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.320 [2024-10-08 18:43:58.667800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.320 qpair failed and we were unable to recover it. 
00:33:30.320 [2024-10-08 18:43:58.668058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.320 [2024-10-08 18:43:58.668122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.320 qpair failed and we were unable to recover it. 00:33:30.320 [2024-10-08 18:43:58.668346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.320 [2024-10-08 18:43:58.668371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.320 qpair failed and we were unable to recover it. 00:33:30.320 [2024-10-08 18:43:58.668565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.320 [2024-10-08 18:43:58.668630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.320 qpair failed and we were unable to recover it. 00:33:30.320 [2024-10-08 18:43:58.668838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.320 [2024-10-08 18:43:58.668904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.320 qpair failed and we were unable to recover it. 00:33:30.320 [2024-10-08 18:43:58.669150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.320 [2024-10-08 18:43:58.669174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.320 qpair failed and we were unable to recover it. 00:33:30.320 [2024-10-08 18:43:58.669358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.320 [2024-10-08 18:43:58.669422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.320 qpair failed and we were unable to recover it. 00:33:30.320 [2024-10-08 18:43:58.669706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.320 [2024-10-08 18:43:58.669773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.320 qpair failed and we were unable to recover it. 00:33:30.320 [2024-10-08 18:43:58.670033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.320 [2024-10-08 18:43:58.670058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.320 qpair failed and we were unable to recover it. 00:33:30.320 [2024-10-08 18:43:58.670193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.320 [2024-10-08 18:43:58.670258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.320 qpair failed and we were unable to recover it. 00:33:30.320 [2024-10-08 18:43:58.670496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.320 [2024-10-08 18:43:58.670560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.320 qpair failed and we were unable to recover it. 
00:33:30.320 [2024-10-08 18:43:58.670812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.320 [2024-10-08 18:43:58.670838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.320 qpair failed and we were unable to recover it. 00:33:30.320 [2024-10-08 18:43:58.670994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.320 [2024-10-08 18:43:58.671058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.320 qpair failed and we were unable to recover it. 00:33:30.320 [2024-10-08 18:43:58.671290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.320 [2024-10-08 18:43:58.671355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.320 qpair failed and we were unable to recover it. 00:33:30.320 [2024-10-08 18:43:58.671579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.320 [2024-10-08 18:43:58.671603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.320 qpair failed and we were unable to recover it. 00:33:30.320 [2024-10-08 18:43:58.671809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.320 [2024-10-08 18:43:58.671874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.320 qpair failed and we were unable to recover it. 00:33:30.320 [2024-10-08 18:43:58.672122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.320 [2024-10-08 18:43:58.672188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.320 qpair failed and we were unable to recover it. 00:33:30.320 [2024-10-08 18:43:58.672410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.320 [2024-10-08 18:43:58.672435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.320 qpair failed and we were unable to recover it. 00:33:30.320 [2024-10-08 18:43:58.672578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.320 [2024-10-08 18:43:58.672646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.320 qpair failed and we were unable to recover it. 00:33:30.320 [2024-10-08 18:43:58.672853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.320 [2024-10-08 18:43:58.672878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.320 qpair failed and we were unable to recover it. 00:33:30.320 [2024-10-08 18:43:58.673009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.320 [2024-10-08 18:43:58.673049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.320 qpair failed and we were unable to recover it. 
00:33:30.320 [2024-10-08 18:43:58.673206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.320 [2024-10-08 18:43:58.673269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.320 qpair failed and we were unable to recover it. 00:33:30.320 [2024-10-08 18:43:58.673529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.320 [2024-10-08 18:43:58.673593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.320 qpair failed and we were unable to recover it. 00:33:30.320 [2024-10-08 18:43:58.673858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.320 [2024-10-08 18:43:58.673884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.320 qpair failed and we were unable to recover it. 00:33:30.320 [2024-10-08 18:43:58.674049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.320 [2024-10-08 18:43:58.674113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.320 qpair failed and we were unable to recover it. 00:33:30.320 [2024-10-08 18:43:58.674361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.320 [2024-10-08 18:43:58.674426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.320 qpair failed and we were unable to recover it. 00:33:30.320 [2024-10-08 18:43:58.674645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.320 [2024-10-08 18:43:58.674691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.320 qpair failed and we were unable to recover it. 00:33:30.320 [2024-10-08 18:43:58.674886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.320 [2024-10-08 18:43:58.674951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.320 qpair failed and we were unable to recover it. 00:33:30.320 [2024-10-08 18:43:58.675159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.320 [2024-10-08 18:43:58.675223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.320 qpair failed and we were unable to recover it. 00:33:30.320 [2024-10-08 18:43:58.675467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.320 [2024-10-08 18:43:58.675491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.320 qpair failed and we were unable to recover it. 00:33:30.320 [2024-10-08 18:43:58.675669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.320 [2024-10-08 18:43:58.675735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.320 qpair failed and we were unable to recover it. 
00:33:30.320 [2024-10-08 18:43:58.675949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.320 [2024-10-08 18:43:58.676014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.320 qpair failed and we were unable to recover it. 00:33:30.320 [2024-10-08 18:43:58.676258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.320 [2024-10-08 18:43:58.676282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.320 qpair failed and we were unable to recover it. 00:33:30.320 [2024-10-08 18:43:58.676453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.320 [2024-10-08 18:43:58.676518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.320 qpair failed and we were unable to recover it. 00:33:30.320 [2024-10-08 18:43:58.676747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.320 [2024-10-08 18:43:58.676813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.320 qpair failed and we were unable to recover it. 00:33:30.320 [2024-10-08 18:43:58.677039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.320 [2024-10-08 18:43:58.677063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.320 qpair failed and we were unable to recover it. 00:33:30.320 [2024-10-08 18:43:58.677170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.320 [2024-10-08 18:43:58.677195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.320 qpair failed and we were unable to recover it. 00:33:30.320 [2024-10-08 18:43:58.677431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.320 [2024-10-08 18:43:58.677495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.320 qpair failed and we were unable to recover it. 00:33:30.320 [2024-10-08 18:43:58.677737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.320 [2024-10-08 18:43:58.677763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.320 qpair failed and we were unable to recover it. 00:33:30.320 [2024-10-08 18:43:58.677925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.320 [2024-10-08 18:43:58.677999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.320 qpair failed and we were unable to recover it. 00:33:30.320 [2024-10-08 18:43:58.678238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.320 [2024-10-08 18:43:58.678303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.320 qpair failed and we were unable to recover it. 
00:33:30.320 [2024-10-08 18:43:58.678507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.320 [2024-10-08 18:43:58.678531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.320 qpair failed and we were unable to recover it. 00:33:30.320 [2024-10-08 18:43:58.678744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.320 [2024-10-08 18:43:58.678809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.320 qpair failed and we were unable to recover it. 00:33:30.320 [2024-10-08 18:43:58.679058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.320 [2024-10-08 18:43:58.679123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.320 qpair failed and we were unable to recover it. 00:33:30.320 [2024-10-08 18:43:58.679382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.320 [2024-10-08 18:43:58.679406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.320 qpair failed and we were unable to recover it. 00:33:30.320 [2024-10-08 18:43:58.679574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.320 [2024-10-08 18:43:58.679638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.320 qpair failed and we were unable to recover it. 00:33:30.320 [2024-10-08 18:43:58.679818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.320 [2024-10-08 18:43:58.679844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.320 qpair failed and we were unable to recover it. 00:33:30.320 [2024-10-08 18:43:58.679968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.320 [2024-10-08 18:43:58.680008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.320 qpair failed and we were unable to recover it. 00:33:30.320 [2024-10-08 18:43:58.680236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.320 [2024-10-08 18:43:58.680300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.320 qpair failed and we were unable to recover it. 00:33:30.320 [2024-10-08 18:43:58.680556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.320 [2024-10-08 18:43:58.680620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.320 qpair failed and we were unable to recover it. 00:33:30.320 [2024-10-08 18:43:58.680865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.320 [2024-10-08 18:43:58.680891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.321 qpair failed and we were unable to recover it. 
00:33:30.321 [2024-10-08 18:43:58.681022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.321 [2024-10-08 18:43:58.681066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.321 qpair failed and we were unable to recover it. 00:33:30.321 [2024-10-08 18:43:58.681308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.321 [2024-10-08 18:43:58.681373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.321 qpair failed and we were unable to recover it. 00:33:30.321 [2024-10-08 18:43:58.681572] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:30.321 [2024-10-08 18:43:58.681645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.321 [2024-10-08 18:43:58.681691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.321 qpair failed and we were unable to recover it. 00:33:30.321 [2024-10-08 18:43:58.681781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.321 [2024-10-08 18:43:58.681857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.321 qpair failed and we were unable to recover it. 00:33:30.321 [2024-10-08 18:43:58.682075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.321 [2024-10-08 18:43:58.682139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.321 qpair failed and we were unable to recover it. 00:33:30.321 [2024-10-08 18:43:58.682386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.321 [2024-10-08 18:43:58.682410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.321 qpair failed and we were unable to recover it. 00:33:30.321 [2024-10-08 18:43:58.682581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.321 [2024-10-08 18:43:58.682644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.321 qpair failed and we were unable to recover it. 00:33:30.321 [2024-10-08 18:43:58.682914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.321 [2024-10-08 18:43:58.682979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.321 qpair failed and we were unable to recover it. 00:33:30.321 [2024-10-08 18:43:58.683228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.321 [2024-10-08 18:43:58.683253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.321 qpair failed and we were unable to recover it. 
00:33:30.321 [2024-10-08 18:43:58.683368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.321 [2024-10-08 18:43:58.683447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.321 qpair failed and we were unable to recover it. 00:33:30.321 [2024-10-08 18:43:58.683697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.321 [2024-10-08 18:43:58.683763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.321 qpair failed and we were unable to recover it. 00:33:30.321 [2024-10-08 18:43:58.684009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.321 [2024-10-08 18:43:58.684033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.321 qpair failed and we were unable to recover it. 00:33:30.321 [2024-10-08 18:43:58.684201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.321 [2024-10-08 18:43:58.684266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.321 qpair failed and we were unable to recover it. 00:33:30.321 [2024-10-08 18:43:58.684508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.321 [2024-10-08 18:43:58.684573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.321 qpair failed and we were unable to recover it. 00:33:30.321 [2024-10-08 18:43:58.684819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.321 [2024-10-08 18:43:58.684845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.321 qpair failed and we were unable to recover it. 00:33:30.321 [2024-10-08 18:43:58.685003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.321 [2024-10-08 18:43:58.685068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.321 qpair failed and we were unable to recover it. 00:33:30.321 [2024-10-08 18:43:58.685321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.321 [2024-10-08 18:43:58.685386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.321 qpair failed and we were unable to recover it. 00:33:30.321 [2024-10-08 18:43:58.685606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.321 [2024-10-08 18:43:58.685645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.321 qpair failed and we were unable to recover it. 00:33:30.321 [2024-10-08 18:43:58.685768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.321 [2024-10-08 18:43:58.685823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.321 qpair failed and we were unable to recover it. 
00:33:30.321 [2024-10-08 18:43:58.686072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.321 [2024-10-08 18:43:58.686136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.321 qpair failed and we were unable to recover it. 00:33:30.321 [2024-10-08 18:43:58.686385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.321 [2024-10-08 18:43:58.686409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.321 qpair failed and we were unable to recover it. 00:33:30.321 [2024-10-08 18:43:58.686592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.321 [2024-10-08 18:43:58.686669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.321 qpair failed and we were unable to recover it. 00:33:30.321 [2024-10-08 18:43:58.686863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.321 [2024-10-08 18:43:58.686889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.321 qpair failed and we were unable to recover it. 00:33:30.321 [2024-10-08 18:43:58.687016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.321 [2024-10-08 18:43:58.687041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.321 qpair failed and we were unable to recover it. 00:33:30.321 [2024-10-08 18:43:58.687216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.321 [2024-10-08 18:43:58.687281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.321 qpair failed and we were unable to recover it. 00:33:30.321 [2024-10-08 18:43:58.687493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.321 [2024-10-08 18:43:58.687557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.321 qpair failed and we were unable to recover it. 00:33:30.321 [2024-10-08 18:43:58.687817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.321 [2024-10-08 18:43:58.687843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.321 qpair failed and we were unable to recover it. 00:33:30.321 [2024-10-08 18:43:58.687999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.321 [2024-10-08 18:43:58.688063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.321 qpair failed and we were unable to recover it. 00:33:30.321 [2024-10-08 18:43:58.688317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.321 [2024-10-08 18:43:58.688381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.321 qpair failed and we were unable to recover it. 
00:33:30.321 [2024-10-08 18:43:58.688590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.321 [2024-10-08 18:43:58.688614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.321 qpair failed and we were unable to recover it. 00:33:30.321 [2024-10-08 18:43:58.688763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.321 [2024-10-08 18:43:58.688827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.321 qpair failed and we were unable to recover it. 00:33:30.321 [2024-10-08 18:43:58.689031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.321 [2024-10-08 18:43:58.689096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.321 qpair failed and we were unable to recover it. 00:33:30.321 [2024-10-08 18:43:58.689337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.321 [2024-10-08 18:43:58.689361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.321 qpair failed and we were unable to recover it. 00:33:30.321 [2024-10-08 18:43:58.689552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.321 [2024-10-08 18:43:58.689617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.321 qpair failed and we were unable to recover it. 00:33:30.321 [2024-10-08 18:43:58.689882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.321 [2024-10-08 18:43:58.689945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.321 qpair failed and we were unable to recover it. 00:33:30.321 [2024-10-08 18:43:58.690186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.321 [2024-10-08 18:43:58.690210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.321 qpair failed and we were unable to recover it. 00:33:30.321 [2024-10-08 18:43:58.690397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.321 [2024-10-08 18:43:58.690462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.321 qpair failed and we were unable to recover it. 00:33:30.321 [2024-10-08 18:43:58.690714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.321 [2024-10-08 18:43:58.690780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.321 qpair failed and we were unable to recover it. 00:33:30.321 [2024-10-08 18:43:58.691026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.321 [2024-10-08 18:43:58.691051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.321 qpair failed and we were unable to recover it. 
00:33:30.325 [2024-10-08 18:43:58.743291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.325 [2024-10-08 18:43:58.743316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.325 qpair failed and we were unable to recover it. 00:33:30.325 [2024-10-08 18:43:58.743471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.325 [2024-10-08 18:43:58.743542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.325 qpair failed and we were unable to recover it. 00:33:30.325 [2024-10-08 18:43:58.743810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.325 [2024-10-08 18:43:58.743896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.325 qpair failed and we were unable to recover it. 00:33:30.325 [2024-10-08 18:43:58.744135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.325 [2024-10-08 18:43:58.744161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.325 qpair failed and we were unable to recover it. 00:33:30.325 [2024-10-08 18:43:58.744353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.325 [2024-10-08 18:43:58.744418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.325 qpair failed and we were unable to recover it. 00:33:30.325 [2024-10-08 18:43:58.744683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.325 [2024-10-08 18:43:58.744749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.325 qpair failed and we were unable to recover it. 00:33:30.325 [2024-10-08 18:43:58.744963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.325 [2024-10-08 18:43:58.745002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.325 qpair failed and we were unable to recover it. 00:33:30.325 [2024-10-08 18:43:58.745174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.325 [2024-10-08 18:43:58.745239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.325 qpair failed and we were unable to recover it. 00:33:30.325 [2024-10-08 18:43:58.745476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.325 [2024-10-08 18:43:58.745541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.325 qpair failed and we were unable to recover it. 00:33:30.325 [2024-10-08 18:43:58.745762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.325 [2024-10-08 18:43:58.745788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.325 qpair failed and we were unable to recover it. 
00:33:30.325 [2024-10-08 18:43:58.745925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.325 [2024-10-08 18:43:58.745969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.325 qpair failed and we were unable to recover it. 00:33:30.325 [2024-10-08 18:43:58.746240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.325 [2024-10-08 18:43:58.746305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.325 qpair failed and we were unable to recover it. 00:33:30.325 [2024-10-08 18:43:58.746554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.325 [2024-10-08 18:43:58.746579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.325 qpair failed and we were unable to recover it. 00:33:30.325 [2024-10-08 18:43:58.746712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.325 [2024-10-08 18:43:58.746764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.325 qpair failed and we were unable to recover it. 00:33:30.325 [2024-10-08 18:43:58.747013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.325 [2024-10-08 18:43:58.747079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.325 qpair failed and we were unable to recover it. 00:33:30.325 [2024-10-08 18:43:58.747322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.325 [2024-10-08 18:43:58.747346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.325 qpair failed and we were unable to recover it. 00:33:30.325 [2024-10-08 18:43:58.747528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.325 [2024-10-08 18:43:58.747593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.325 qpair failed and we were unable to recover it. 00:33:30.325 [2024-10-08 18:43:58.747861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.325 [2024-10-08 18:43:58.747926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.325 qpair failed and we were unable to recover it. 00:33:30.325 [2024-10-08 18:43:58.748138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.325 [2024-10-08 18:43:58.748162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.325 qpair failed and we were unable to recover it. 00:33:30.325 [2024-10-08 18:43:58.748276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.325 [2024-10-08 18:43:58.748301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.325 qpair failed and we were unable to recover it. 
00:33:30.325 [2024-10-08 18:43:58.748499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.325 [2024-10-08 18:43:58.748565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.325 qpair failed and we were unable to recover it. 00:33:30.325 [2024-10-08 18:43:58.748797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.325 [2024-10-08 18:43:58.748822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.325 qpair failed and we were unable to recover it. 00:33:30.325 [2024-10-08 18:43:58.748921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.325 [2024-10-08 18:43:58.748947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.325 qpair failed and we were unable to recover it. 00:33:30.325 [2024-10-08 18:43:58.749183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.325 [2024-10-08 18:43:58.749250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.325 qpair failed and we were unable to recover it. 00:33:30.325 [2024-10-08 18:43:58.749484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.325 [2024-10-08 18:43:58.749549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.325 qpair failed and we were unable to recover it. 00:33:30.325 [2024-10-08 18:43:58.749819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.325 [2024-10-08 18:43:58.749846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.325 qpair failed and we were unable to recover it. 00:33:30.325 [2024-10-08 18:43:58.750007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.325 [2024-10-08 18:43:58.750074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.325 qpair failed and we were unable to recover it. 00:33:30.325 [2024-10-08 18:43:58.750331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.325 [2024-10-08 18:43:58.750356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.325 qpair failed and we were unable to recover it. 00:33:30.325 [2024-10-08 18:43:58.750518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.325 [2024-10-08 18:43:58.750583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.325 qpair failed and we were unable to recover it. 00:33:30.325 [2024-10-08 18:43:58.750888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.325 [2024-10-08 18:43:58.750955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.325 qpair failed and we were unable to recover it. 
00:33:30.325 [2024-10-08 18:43:58.751204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.325 [2024-10-08 18:43:58.751228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.325 qpair failed and we were unable to recover it. 00:33:30.325 [2024-10-08 18:43:58.751391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.325 [2024-10-08 18:43:58.751456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.325 qpair failed and we were unable to recover it. 00:33:30.325 [2024-10-08 18:43:58.751719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.325 [2024-10-08 18:43:58.751785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.325 qpair failed and we were unable to recover it. 00:33:30.325 [2024-10-08 18:43:58.752064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.325 [2024-10-08 18:43:58.752088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.325 qpair failed and we were unable to recover it. 00:33:30.325 [2024-10-08 18:43:58.752243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.325 [2024-10-08 18:43:58.752317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.325 qpair failed and we were unable to recover it. 00:33:30.325 [2024-10-08 18:43:58.752525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.325 [2024-10-08 18:43:58.752590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.325 qpair failed and we were unable to recover it. 00:33:30.325 [2024-10-08 18:43:58.752842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.325 [2024-10-08 18:43:58.752869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.325 qpair failed and we were unable to recover it. 00:33:30.325 [2024-10-08 18:43:58.752984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.325 [2024-10-08 18:43:58.753028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.325 qpair failed and we were unable to recover it. 00:33:30.325 [2024-10-08 18:43:58.753269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.325 [2024-10-08 18:43:58.753333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.325 qpair failed and we were unable to recover it. 00:33:30.325 [2024-10-08 18:43:58.753572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.325 [2024-10-08 18:43:58.753597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.325 qpair failed and we were unable to recover it. 
00:33:30.325 [2024-10-08 18:43:58.753728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.325 [2024-10-08 18:43:58.753774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.325 qpair failed and we were unable to recover it. 00:33:30.325 [2024-10-08 18:43:58.754026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.325 [2024-10-08 18:43:58.754091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.325 qpair failed and we were unable to recover it. 00:33:30.325 [2024-10-08 18:43:58.754323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.325 [2024-10-08 18:43:58.754352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.325 qpair failed and we were unable to recover it. 00:33:30.325 [2024-10-08 18:43:58.754526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.325 [2024-10-08 18:43:58.754592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.325 qpair failed and we were unable to recover it. 00:33:30.325 [2024-10-08 18:43:58.754856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.325 [2024-10-08 18:43:58.754922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.325 qpair failed and we were unable to recover it. 00:33:30.325 [2024-10-08 18:43:58.755167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.325 [2024-10-08 18:43:58.755192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.325 qpair failed and we were unable to recover it. 00:33:30.325 [2024-10-08 18:43:58.755376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.325 [2024-10-08 18:43:58.755442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.325 qpair failed and we were unable to recover it. 00:33:30.325 [2024-10-08 18:43:58.755669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.325 [2024-10-08 18:43:58.755735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.325 qpair failed and we were unable to recover it. 00:33:30.325 [2024-10-08 18:43:58.755978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.325 [2024-10-08 18:43:58.756018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.325 qpair failed and we were unable to recover it. 00:33:30.325 [2024-10-08 18:43:58.756213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.325 [2024-10-08 18:43:58.756279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.325 qpair failed and we were unable to recover it. 
00:33:30.325 [2024-10-08 18:43:58.756533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.325 [2024-10-08 18:43:58.756599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.325 qpair failed and we were unable to recover it. 00:33:30.325 [2024-10-08 18:43:58.756865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.325 [2024-10-08 18:43:58.756892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.325 qpair failed and we were unable to recover it. 00:33:30.325 [2024-10-08 18:43:58.757054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.325 [2024-10-08 18:43:58.757119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.325 qpair failed and we were unable to recover it. 00:33:30.325 [2024-10-08 18:43:58.757365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.326 [2024-10-08 18:43:58.757430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.326 qpair failed and we were unable to recover it. 00:33:30.326 [2024-10-08 18:43:58.757707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.326 [2024-10-08 18:43:58.757733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.326 qpair failed and we were unable to recover it. 00:33:30.326 [2024-10-08 18:43:58.757859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.326 [2024-10-08 18:43:58.757884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.326 qpair failed and we were unable to recover it. 00:33:30.326 [2024-10-08 18:43:58.758060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.326 [2024-10-08 18:43:58.758127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.326 qpair failed and we were unable to recover it. 00:33:30.326 [2024-10-08 18:43:58.758347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.326 [2024-10-08 18:43:58.758372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.326 qpair failed and we were unable to recover it. 00:33:30.326 [2024-10-08 18:43:58.758483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.326 [2024-10-08 18:43:58.758508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.326 qpair failed and we were unable to recover it. 00:33:30.326 [2024-10-08 18:43:58.758736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.326 [2024-10-08 18:43:58.758762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.326 qpair failed and we were unable to recover it. 
00:33:30.326 [2024-10-08 18:43:58.758923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.326 [2024-10-08 18:43:58.758963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.326 qpair failed and we were unable to recover it. 00:33:30.326 [2024-10-08 18:43:58.759097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.326 [2024-10-08 18:43:58.759162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.326 qpair failed and we were unable to recover it. 00:33:30.326 [2024-10-08 18:43:58.759414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.326 [2024-10-08 18:43:58.759480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.326 qpair failed and we were unable to recover it. 00:33:30.326 [2024-10-08 18:43:58.759658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.326 [2024-10-08 18:43:58.759699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.326 qpair failed and we were unable to recover it. 00:33:30.326 [2024-10-08 18:43:58.759838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.326 [2024-10-08 18:43:58.759893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.326 qpair failed and we were unable to recover it. 00:33:30.326 [2024-10-08 18:43:58.760140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.326 [2024-10-08 18:43:58.760205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.326 qpair failed and we were unable to recover it. 00:33:30.326 [2024-10-08 18:43:58.760452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.326 [2024-10-08 18:43:58.760476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.326 qpair failed and we were unable to recover it. 00:33:30.326 [2024-10-08 18:43:58.760637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.326 [2024-10-08 18:43:58.760715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.326 qpair failed and we were unable to recover it. 00:33:30.326 [2024-10-08 18:43:58.760996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.326 [2024-10-08 18:43:58.761060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.326 qpair failed and we were unable to recover it. 00:33:30.326 [2024-10-08 18:43:58.761302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.326 [2024-10-08 18:43:58.761327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.326 qpair failed and we were unable to recover it. 
00:33:30.326 [2024-10-08 18:43:58.761507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.326 [2024-10-08 18:43:58.761572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.326 qpair failed and we were unable to recover it. 00:33:30.326 [2024-10-08 18:43:58.761837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.326 [2024-10-08 18:43:58.761903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.326 qpair failed and we were unable to recover it. 00:33:30.326 [2024-10-08 18:43:58.762150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.326 [2024-10-08 18:43:58.762174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.326 qpair failed and we were unable to recover it. 00:33:30.326 [2024-10-08 18:43:58.762334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.326 [2024-10-08 18:43:58.762399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.326 qpair failed and we were unable to recover it. 00:33:30.326 [2024-10-08 18:43:58.762648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.326 [2024-10-08 18:43:58.762732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.326 qpair failed and we were unable to recover it. 00:33:30.326 [2024-10-08 18:43:58.762940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.326 [2024-10-08 18:43:58.762965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.326 qpair failed and we were unable to recover it. 00:33:30.326 [2024-10-08 18:43:58.763174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.326 [2024-10-08 18:43:58.763239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.326 qpair failed and we were unable to recover it. 00:33:30.326 [2024-10-08 18:43:58.763471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.326 [2024-10-08 18:43:58.763536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.326 qpair failed and we were unable to recover it. 00:33:30.326 [2024-10-08 18:43:58.763765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.326 [2024-10-08 18:43:58.763792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.326 qpair failed and we were unable to recover it. 00:33:30.326 [2024-10-08 18:43:58.763957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.326 [2024-10-08 18:43:58.764024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.326 qpair failed and we were unable to recover it. 
00:33:30.326 [2024-10-08 18:43:58.764247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.326 [2024-10-08 18:43:58.764311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.326 qpair failed and we were unable to recover it. 00:33:30.326 [2024-10-08 18:43:58.764557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.326 [2024-10-08 18:43:58.764582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.326 qpair failed and we were unable to recover it. 00:33:30.326 [2024-10-08 18:43:58.764700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.326 [2024-10-08 18:43:58.764745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.326 qpair failed and we were unable to recover it. 00:33:30.326 [2024-10-08 18:43:58.764953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.326 [2024-10-08 18:43:58.765017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.326 qpair failed and we were unable to recover it. 00:33:30.326 [2024-10-08 18:43:58.765268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.326 [2024-10-08 18:43:58.765305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.326 qpair failed and we were unable to recover it. 00:33:30.326 [2024-10-08 18:43:58.765479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.326 [2024-10-08 18:43:58.765550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.326 qpair failed and we were unable to recover it. 00:33:30.326 [2024-10-08 18:43:58.765810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.326 [2024-10-08 18:43:58.765837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.326 qpair failed and we were unable to recover it. 00:33:30.326 [2024-10-08 18:43:58.765966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.326 [2024-10-08 18:43:58.765992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.326 qpair failed and we were unable to recover it. 00:33:30.326 [2024-10-08 18:43:58.766111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.326 [2024-10-08 18:43:58.766136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.326 qpair failed and we were unable to recover it. 00:33:30.326 [2024-10-08 18:43:58.766387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.326 [2024-10-08 18:43:58.766451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.326 qpair failed and we were unable to recover it. 
00:33:30.326 [2024-10-08 18:43:58.766690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.326 [2024-10-08 18:43:58.766731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.326 qpair failed and we were unable to recover it. 00:33:30.326 [2024-10-08 18:43:58.766921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.326 [2024-10-08 18:43:58.766986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.326 qpair failed and we were unable to recover it. 00:33:30.326 [2024-10-08 18:43:58.767251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.326 [2024-10-08 18:43:58.767316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.326 qpair failed and we were unable to recover it. 00:33:30.326 [2024-10-08 18:43:58.767558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.326 [2024-10-08 18:43:58.767582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.326 qpair failed and we were unable to recover it. 00:33:30.326 [2024-10-08 18:43:58.767747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.326 [2024-10-08 18:43:58.767812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.326 qpair failed and we were unable to recover it. 00:33:30.326 [2024-10-08 18:43:58.768059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.326 [2024-10-08 18:43:58.768124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.326 qpair failed and we were unable to recover it. 00:33:30.326 [2024-10-08 18:43:58.768348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.326 [2024-10-08 18:43:58.768374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.326 qpair failed and we were unable to recover it. 00:33:30.326 [2024-10-08 18:43:58.768512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.326 [2024-10-08 18:43:58.768582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.326 qpair failed and we were unable to recover it. 00:33:30.326 [2024-10-08 18:43:58.768817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.326 [2024-10-08 18:43:58.768884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.326 qpair failed and we were unable to recover it. 00:33:30.326 [2024-10-08 18:43:58.769105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.326 [2024-10-08 18:43:58.769130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.326 qpair failed and we were unable to recover it. 
00:33:30.326 [2024-10-08 18:43:58.769323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.326 [2024-10-08 18:43:58.769388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.326 qpair failed and we were unable to recover it. 00:33:30.326 [2024-10-08 18:43:58.769616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.326 [2024-10-08 18:43:58.769708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.326 qpair failed and we were unable to recover it. 00:33:30.326 [2024-10-08 18:43:58.769933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.326 [2024-10-08 18:43:58.769958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.326 qpair failed and we were unable to recover it. 00:33:30.326 [2024-10-08 18:43:58.770097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.326 [2024-10-08 18:43:58.770163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.326 qpair failed and we were unable to recover it. 00:33:30.326 [2024-10-08 18:43:58.770376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.326 [2024-10-08 18:43:58.770440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.326 qpair failed and we were unable to recover it. 00:33:30.326 [2024-10-08 18:43:58.770698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.326 [2024-10-08 18:43:58.770725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.326 qpair failed and we were unable to recover it. 00:33:30.326 [2024-10-08 18:43:58.770874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.326 [2024-10-08 18:43:58.770938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.326 qpair failed and we were unable to recover it. 00:33:30.326 [2024-10-08 18:43:58.771173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.326 [2024-10-08 18:43:58.771239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.326 qpair failed and we were unable to recover it. 00:33:30.326 [2024-10-08 18:43:58.771480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.326 [2024-10-08 18:43:58.771505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.326 qpair failed and we were unable to recover it. 00:33:30.326 [2024-10-08 18:43:58.771707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.326 [2024-10-08 18:43:58.771774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.326 qpair failed and we were unable to recover it. 
00:33:30.326 [2024-10-08 18:43:58.772013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.326 [2024-10-08 18:43:58.772078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.326 qpair failed and we were unable to recover it. 00:33:30.326 [2024-10-08 18:43:58.772317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.326 [2024-10-08 18:43:58.772342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.326 qpair failed and we were unable to recover it. 00:33:30.326 [2024-10-08 18:43:58.772544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.326 [2024-10-08 18:43:58.772609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.326 qpair failed and we were unable to recover it. 00:33:30.326 [2024-10-08 18:43:58.772786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.326 [2024-10-08 18:43:58.772811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.326 qpair failed and we were unable to recover it. 00:33:30.326 [2024-10-08 18:43:58.772929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.326 [2024-10-08 18:43:58.772956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.326 qpair failed and we were unable to recover it. 00:33:30.326 [2024-10-08 18:43:58.773139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.326 [2024-10-08 18:43:58.773164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.326 qpair failed and we were unable to recover it. 00:33:30.326 [2024-10-08 18:43:58.773366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.326 [2024-10-08 18:43:58.773429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.326 qpair failed and we were unable to recover it. 00:33:30.326 [2024-10-08 18:43:58.773661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.326 [2024-10-08 18:43:58.773701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.326 qpair failed and we were unable to recover it. 00:33:30.327 [2024-10-08 18:43:58.773850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.327 [2024-10-08 18:43:58.773915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.327 qpair failed and we were unable to recover it. 00:33:30.327 [2024-10-08 18:43:58.774153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.327 [2024-10-08 18:43:58.774218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.327 qpair failed and we were unable to recover it. 
00:33:30.327 [2024-10-08 18:43:58.774422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.327 [2024-10-08 18:43:58.774446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.327 qpair failed and we were unable to recover it. 00:33:30.327 [2024-10-08 18:43:58.774595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.327 [2024-10-08 18:43:58.774637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.327 qpair failed and we were unable to recover it. 00:33:30.327 [2024-10-08 18:43:58.774822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.327 [2024-10-08 18:43:58.774863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.327 qpair failed and we were unable to recover it. 00:33:30.327 [2024-10-08 18:43:58.774995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.327 [2024-10-08 18:43:58.775035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.327 qpair failed and we were unable to recover it. 00:33:30.327 [2024-10-08 18:43:58.775251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.327 [2024-10-08 18:43:58.775292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.327 qpair failed and we were unable to recover it. 00:33:30.327 [2024-10-08 18:43:58.775469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.327 [2024-10-08 18:43:58.775509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.327 qpair failed and we were unable to recover it. 00:33:30.327 [2024-10-08 18:43:58.775687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.327 [2024-10-08 18:43:58.775713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.327 qpair failed and we were unable to recover it. 00:33:30.327 [2024-10-08 18:43:58.775841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.327 [2024-10-08 18:43:58.775911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.327 qpair failed and we were unable to recover it. 00:33:30.327 [2024-10-08 18:43:58.776136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.327 [2024-10-08 18:43:58.776210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.327 qpair failed and we were unable to recover it. 00:33:30.327 [2024-10-08 18:43:58.776520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.327 [2024-10-08 18:43:58.776555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.327 qpair failed and we were unable to recover it. 
00:33:30.327 [2024-10-08 18:43:58.776781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.327 [2024-10-08 18:43:58.776873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.327 qpair failed and we were unable to recover it. 00:33:30.327 [2024-10-08 18:43:58.777145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.327 [2024-10-08 18:43:58.777233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.327 qpair failed and we were unable to recover it. 00:33:30.327 [2024-10-08 18:43:58.777511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.327 [2024-10-08 18:43:58.777561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.327 qpair failed and we were unable to recover it. 00:33:30.327 [2024-10-08 18:43:58.777785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.327 [2024-10-08 18:43:58.777835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.327 qpair failed and we were unable to recover it. 00:33:30.327 [2024-10-08 18:43:58.778159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.327 [2024-10-08 18:43:58.778247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.327 qpair failed and we were unable to recover it. 00:33:30.327 [2024-10-08 18:43:58.778498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.327 [2024-10-08 18:43:58.778535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.327 qpair failed and we were unable to recover it. 00:33:30.327 [2024-10-08 18:43:58.778779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.327 [2024-10-08 18:43:58.778868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.327 qpair failed and we were unable to recover it. 00:33:30.327 [2024-10-08 18:43:58.779178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.327 [2024-10-08 18:43:58.779265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.327 qpair failed and we were unable to recover it. 00:33:30.327 [2024-10-08 18:43:58.779495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.327 [2024-10-08 18:43:58.779523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.327 qpair failed and we were unable to recover it. 00:33:30.327 [2024-10-08 18:43:58.779702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.327 [2024-10-08 18:43:58.779739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.327 qpair failed and we were unable to recover it. 
00:33:30.327 [2024-10-08 18:43:58.779898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.327 [2024-10-08 18:43:58.779934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.327 qpair failed and we were unable to recover it. 00:33:30.327 [2024-10-08 18:43:58.780109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.327 [2024-10-08 18:43:58.780150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.327 qpair failed and we were unable to recover it. 00:33:30.327 [2024-10-08 18:43:58.780312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.327 [2024-10-08 18:43:58.780397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.327 qpair failed and we were unable to recover it. 00:33:30.327 [2024-10-08 18:43:58.780676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.327 [2024-10-08 18:43:58.780768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.327 qpair failed and we were unable to recover it. 00:33:30.327 [2024-10-08 18:43:58.781054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.327 [2024-10-08 18:43:58.781087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.327 qpair failed and we were unable to recover it. 00:33:30.327 [2024-10-08 18:43:58.781284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.327 [2024-10-08 18:43:58.781375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.327 qpair failed and we were unable to recover it. 00:33:30.327 [2024-10-08 18:43:58.781613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.327 [2024-10-08 18:43:58.781658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.327 qpair failed and we were unable to recover it. 00:33:30.327 [2024-10-08 18:43:58.781794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.327 [2024-10-08 18:43:58.781824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.327 qpair failed and we were unable to recover it. 00:33:30.327 [2024-10-08 18:43:58.781994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.327 [2024-10-08 18:43:58.782043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.327 qpair failed and we were unable to recover it. 00:33:30.327 [2024-10-08 18:43:58.782234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.327 [2024-10-08 18:43:58.782270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.327 qpair failed and we were unable to recover it. 
00:33:30.327 [2024-10-08 18:43:58.782432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.327 [2024-10-08 18:43:58.782462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.327 qpair failed and we were unable to recover it. 00:33:30.327 [2024-10-08 18:43:58.782618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.327 [2024-10-08 18:43:58.782691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.327 qpair failed and we were unable to recover it. 00:33:30.327 [2024-10-08 18:43:58.782947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.327 [2024-10-08 18:43:58.783035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.327 qpair failed and we were unable to recover it. 00:33:30.327 [2024-10-08 18:43:58.783352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.327 [2024-10-08 18:43:58.783445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.327 qpair failed and we were unable to recover it. 00:33:30.327 [2024-10-08 18:43:58.783743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.327 [2024-10-08 18:43:58.783779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.327 qpair failed and we were unable to recover it. 00:33:30.327 [2024-10-08 18:43:58.783934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.327 [2024-10-08 18:43:58.784020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.327 qpair failed and we were unable to recover it. 00:33:30.327 [2024-10-08 18:43:58.784318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.327 [2024-10-08 18:43:58.784357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.327 qpair failed and we were unable to recover it. 00:33:30.327 [2024-10-08 18:43:58.784522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.327 [2024-10-08 18:43:58.784559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.327 qpair failed and we were unable to recover it. 00:33:30.327 [2024-10-08 18:43:58.784714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.327 [2024-10-08 18:43:58.784752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.327 qpair failed and we were unable to recover it. 00:33:30.327 [2024-10-08 18:43:58.784892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.327 [2024-10-08 18:43:58.784953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.327 qpair failed and we were unable to recover it. 
00:33:30.327 [2024-10-08 18:43:58.785141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.327 [2024-10-08 18:43:58.785191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.327 qpair failed and we were unable to recover it. 00:33:30.327 [2024-10-08 18:43:58.785431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.327 [2024-10-08 18:43:58.785521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.327 qpair failed and we were unable to recover it. 00:33:30.327 [2024-10-08 18:43:58.785812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.327 [2024-10-08 18:43:58.785848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.327 qpair failed and we were unable to recover it. 00:33:30.327 [2024-10-08 18:43:58.786052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.327 [2024-10-08 18:43:58.786144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.327 qpair failed and we were unable to recover it. 00:33:30.327 [2024-10-08 18:43:58.786417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.327 [2024-10-08 18:43:58.786504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.327 qpair failed and we were unable to recover it. 00:33:30.327 [2024-10-08 18:43:58.786757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.327 [2024-10-08 18:43:58.786797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.327 qpair failed and we were unable to recover it. 00:33:30.327 [2024-10-08 18:43:58.786936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.327 [2024-10-08 18:43:58.786963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.327 qpair failed and we were unable to recover it. 00:33:30.327 [2024-10-08 18:43:58.787090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.327 [2024-10-08 18:43:58.787116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.327 qpair failed and we were unable to recover it. 00:33:30.327 [2024-10-08 18:43:58.787265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.327 [2024-10-08 18:43:58.787300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.327 qpair failed and we were unable to recover it. 00:33:30.327 [2024-10-08 18:43:58.787430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.327 [2024-10-08 18:43:58.787479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.327 qpair failed and we were unable to recover it. 
00:33:30.327 [2024-10-08 18:43:58.787704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.327 [2024-10-08 18:43:58.787742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.327 qpair failed and we were unable to recover it. 00:33:30.327 [2024-10-08 18:43:58.787965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.327 [2024-10-08 18:43:58.788059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.327 qpair failed and we were unable to recover it. 00:33:30.327 [2024-10-08 18:43:58.788331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.327 [2024-10-08 18:43:58.788418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.327 qpair failed and we were unable to recover it. 00:33:30.327 [2024-10-08 18:43:58.788722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.327 [2024-10-08 18:43:58.788814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.327 qpair failed and we were unable to recover it. 00:33:30.327 [2024-10-08 18:43:58.789111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.327 [2024-10-08 18:43:58.789149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.327 qpair failed and we were unable to recover it. 00:33:30.327 [2024-10-08 18:43:58.789311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.327 [2024-10-08 18:43:58.789347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.327 qpair failed and we were unable to recover it. 00:33:30.327 [2024-10-08 18:43:58.789466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.327 [2024-10-08 18:43:58.789503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.327 qpair failed and we were unable to recover it. 00:33:30.327 [2024-10-08 18:43:58.789636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.327 [2024-10-08 18:43:58.789681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.328 qpair failed and we were unable to recover it. 00:33:30.328 [2024-10-08 18:43:58.789868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.328 [2024-10-08 18:43:58.789902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.328 qpair failed and we were unable to recover it. 00:33:30.328 [2024-10-08 18:43:58.790096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.328 [2024-10-08 18:43:58.790144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.328 qpair failed and we were unable to recover it. 
00:33:30.328 [2024-10-08 18:43:58.790453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.328 [2024-10-08 18:43:58.790541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.328 qpair failed and we were unable to recover it. 00:33:30.328 [2024-10-08 18:43:58.790833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.328 [2024-10-08 18:43:58.790923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.328 qpair failed and we were unable to recover it. 00:33:30.328 [2024-10-08 18:43:58.791188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.328 [2024-10-08 18:43:58.791222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.328 qpair failed and we were unable to recover it. 00:33:30.328 [2024-10-08 18:43:58.791447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.328 [2024-10-08 18:43:58.791498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.328 qpair failed and we were unable to recover it. 00:33:30.328 [2024-10-08 18:43:58.791687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.328 [2024-10-08 18:43:58.791725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.328 qpair failed and we were unable to recover it. 00:33:30.328 [2024-10-08 18:43:58.791914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.328 [2024-10-08 18:43:58.791950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.328 qpair failed and we were unable to recover it. 00:33:30.328 [2024-10-08 18:43:58.792142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.328 [2024-10-08 18:43:58.792167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.328 qpair failed and we were unable to recover it. 00:33:30.328 [2024-10-08 18:43:58.792355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.328 [2024-10-08 18:43:58.792404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.328 qpair failed and we were unable to recover it. 00:33:30.328 [2024-10-08 18:43:58.792613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.328 [2024-10-08 18:43:58.792722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.328 qpair failed and we were unable to recover it. 00:33:30.328 [2024-10-08 18:43:58.793044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.328 [2024-10-08 18:43:58.793152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.328 qpair failed and we were unable to recover it. 
00:33:30.328 [2024-10-08 18:43:58.793451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.328 [2024-10-08 18:43:58.793485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.328 qpair failed and we were unable to recover it. 00:33:30.328 [2024-10-08 18:43:58.793685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.328 [2024-10-08 18:43:58.793777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.328 qpair failed and we were unable to recover it. 00:33:30.328 [2024-10-08 18:43:58.794041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.328 [2024-10-08 18:43:58.794080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.328 qpair failed and we were unable to recover it. 00:33:30.328 [2024-10-08 18:43:58.794263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.328 [2024-10-08 18:43:58.794300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.328 qpair failed and we were unable to recover it. 00:33:30.328 [2024-10-08 18:43:58.794432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.328 [2024-10-08 18:43:58.794472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.328 qpair failed and we were unable to recover it. 00:33:30.328 [2024-10-08 18:43:58.794692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.328 [2024-10-08 18:43:58.794742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.328 qpair failed and we were unable to recover it. 00:33:30.328 [2024-10-08 18:43:58.794980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.328 [2024-10-08 18:43:58.795067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.328 qpair failed and we were unable to recover it. 00:33:30.328 [2024-10-08 18:43:58.795364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.328 [2024-10-08 18:43:58.795455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.328 qpair failed and we were unable to recover it. 00:33:30.328 [2024-10-08 18:43:58.795749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.328 [2024-10-08 18:43:58.795784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.328 qpair failed and we were unable to recover it. 00:33:30.328 [2024-10-08 18:43:58.796000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.328 [2024-10-08 18:43:58.796087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.328 qpair failed and we were unable to recover it. 
00:33:30.328 [2024-10-08 18:43:58.796379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.328 [2024-10-08 18:43:58.796418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.328 qpair failed and we were unable to recover it. 00:33:30.328 [2024-10-08 18:43:58.796570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.328 [2024-10-08 18:43:58.796605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.328 qpair failed and we were unable to recover it. 00:33:30.328 [2024-10-08 18:43:58.796780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.328 [2024-10-08 18:43:58.796807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.328 qpair failed and we were unable to recover it. 00:33:30.328 [2024-10-08 18:43:58.796943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.328 [2024-10-08 18:43:58.796991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.328 qpair failed and we were unable to recover it. 00:33:30.328 [2024-10-08 18:43:58.797177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.328 [2024-10-08 18:43:58.797226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.328 qpair failed and we were unable to recover it. 00:33:30.328 [2024-10-08 18:43:58.797435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.328 [2024-10-08 18:43:58.797523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.328 qpair failed and we were unable to recover it. 00:33:30.328 [2024-10-08 18:43:58.797780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.328 [2024-10-08 18:43:58.797822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.328 qpair failed and we were unable to recover it. 00:33:30.328 [2024-10-08 18:43:58.797994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.328 [2024-10-08 18:43:58.798080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.328 qpair failed and we were unable to recover it. 00:33:30.328 [2024-10-08 18:43:58.798376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.328 [2024-10-08 18:43:58.798463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.328 qpair failed and we were unable to recover it. 00:33:30.328 [2024-10-08 18:43:58.798751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.328 [2024-10-08 18:43:58.798790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.328 qpair failed and we were unable to recover it. 
00:33:30.328 [2024-10-08 18:43:58.798930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.328 [2024-10-08 18:43:58.798957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.328 qpair failed and we were unable to recover it. 00:33:30.328 [2024-10-08 18:43:58.799135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.328 [2024-10-08 18:43:58.799170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.328 qpair failed and we were unable to recover it. 00:33:30.328 [2024-10-08 18:43:58.799305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.328 [2024-10-08 18:43:58.799345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.328 qpair failed and we were unable to recover it. 00:33:30.328 [2024-10-08 18:43:58.799518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.328 [2024-10-08 18:43:58.799566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.328 qpair failed and we were unable to recover it. 00:33:30.328 [2024-10-08 18:43:58.799807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.328 [2024-10-08 18:43:58.799843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.328 qpair failed and we were unable to recover it. 00:33:30.328 [2024-10-08 18:43:58.800008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.328 [2024-10-08 18:43:58.800100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.328 qpair failed and we were unable to recover it. 00:33:30.328 [2024-10-08 18:43:58.800417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.328 [2024-10-08 18:43:58.800504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.328 qpair failed and we were unable to recover it. 00:33:30.328 [2024-10-08 18:43:58.800823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.328 [2024-10-08 18:43:58.800913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.328 qpair failed and we were unable to recover it. 00:33:30.328 [2024-10-08 18:43:58.801212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.328 [2024-10-08 18:43:58.801250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.328 qpair failed and we were unable to recover it. 00:33:30.328 [2024-10-08 18:43:58.801426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.328 [2024-10-08 18:43:58.801474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.328 qpair failed and we were unable to recover it. 
00:33:30.328 [2024-10-08 18:43:58.801793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.328 [2024-10-08 18:43:58.801881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.328 qpair failed and we were unable to recover it. 00:33:30.328 [2024-10-08 18:43:58.802177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.328 [2024-10-08 18:43:58.802265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.328 qpair failed and we were unable to recover it. 00:33:30.328 [2024-10-08 18:43:58.802542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.328 [2024-10-08 18:43:58.802577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.328 qpair failed and we were unable to recover it. 00:33:30.328 [2024-10-08 18:43:58.802731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.328 [2024-10-08 18:43:58.802782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.328 qpair failed and we were unable to recover it. 00:33:30.328 [2024-10-08 18:43:58.803049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.328 [2024-10-08 18:43:58.803087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.328 qpair failed and we were unable to recover it. 00:33:30.328 [2024-10-08 18:43:58.803278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.328 [2024-10-08 18:43:58.803314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.328 qpair failed and we were unable to recover it. 00:33:30.328 [2024-10-08 18:43:58.803467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.328 [2024-10-08 18:43:58.803494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.328 qpair failed and we were unable to recover it. 00:33:30.328 [2024-10-08 18:43:58.803664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.328 [2024-10-08 18:43:58.803721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.328 qpair failed and we were unable to recover it. 00:33:30.328 [2024-10-08 18:43:58.803957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.328 [2024-10-08 18:43:58.804044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.328 qpair failed and we were unable to recover it. 00:33:30.328 [2024-10-08 18:43:58.804340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.328 [2024-10-08 18:43:58.804444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.328 qpair failed and we were unable to recover it. 
00:33:30.328 [2024-10-08 18:43:58.804746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.328 [2024-10-08 18:43:58.804781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.328 qpair failed and we were unable to recover it. 00:33:30.328 [2024-10-08 18:43:58.804995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.328 [2024-10-08 18:43:58.805082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.328 qpair failed and we were unable to recover it. 00:33:30.328 [2024-10-08 18:43:58.805354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.328 [2024-10-08 18:43:58.805393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.328 qpair failed and we were unable to recover it. 00:33:30.328 [2024-10-08 18:43:58.805551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.328 [2024-10-08 18:43:58.805588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.328 qpair failed and we were unable to recover it. 00:33:30.328 [2024-10-08 18:43:58.805798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.328 [2024-10-08 18:43:58.805827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.328 qpair failed and we were unable to recover it. 00:33:30.328 [2024-10-08 18:43:58.805974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.328 [2024-10-08 18:43:58.806010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.328 qpair failed and we were unable to recover it. 00:33:30.328 [2024-10-08 18:43:58.806227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.328 [2024-10-08 18:43:58.806275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.328 qpair failed and we were unable to recover it. 00:33:30.328 [2024-10-08 18:43:58.806503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.328 [2024-10-08 18:43:58.806592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.328 qpair failed and we were unable to recover it. 00:33:30.328 [2024-10-08 18:43:58.806903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.328 [2024-10-08 18:43:58.806940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.328 qpair failed and we were unable to recover it. 00:33:30.328 [2024-10-08 18:43:58.807138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.328 [2024-10-08 18:43:58.807226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.328 qpair failed and we were unable to recover it. 
00:33:30.328 [2024-10-08 18:43:58.807523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.328 [2024-10-08 18:43:58.807611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.328 qpair failed and we were unable to recover it. 00:33:30.328 [2024-10-08 18:43:58.807932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.328 [2024-10-08 18:43:58.807970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.329 qpair failed and we were unable to recover it. 00:33:30.329 [2024-10-08 18:43:58.808154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.329 [2024-10-08 18:43:58.808180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.329 qpair failed and we were unable to recover it. 00:33:30.329 [2024-10-08 18:43:58.808358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.329 [2024-10-08 18:43:58.808394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.329 qpair failed and we were unable to recover it. 00:33:30.329 [2024-10-08 18:43:58.808581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.329 [2024-10-08 18:43:58.808631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.329 qpair failed and we were unable to recover it. 00:33:30.329 [2024-10-08 18:43:58.808841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.329 [2024-10-08 18:43:58.808877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.329 qpair failed and we were unable to recover it. 00:33:30.329 [2024-10-08 18:43:58.809070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.329 [2024-10-08 18:43:58.809106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.329 qpair failed and we were unable to recover it. 00:33:30.329 [2024-10-08 18:43:58.809320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.329 [2024-10-08 18:43:58.809411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.329 qpair failed and we were unable to recover it. 00:33:30.329 [2024-10-08 18:43:58.809697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.329 [2024-10-08 18:43:58.809788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.329 qpair failed and we were unable to recover it. 00:33:30.329 [2024-10-08 18:43:58.810100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.329 [2024-10-08 18:43:58.810149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.329 qpair failed and we were unable to recover it. 
00:33:30.329 [2024-10-08 18:43:58.810349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.329 [2024-10-08 18:43:58.810376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.329 qpair failed and we were unable to recover it. 00:33:30.329 [2024-10-08 18:43:58.810525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.329 [2024-10-08 18:43:58.810561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.329 qpair failed and we were unable to recover it. 00:33:30.329 [2024-10-08 18:43:58.810739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.329 [2024-10-08 18:43:58.810776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.329 qpair failed and we were unable to recover it. 00:33:30.329 [2024-10-08 18:43:58.810955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.329 [2024-10-08 18:43:58.811004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.329 qpair failed and we were unable to recover it. 00:33:30.329 [2024-10-08 18:43:58.811174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.329 [2024-10-08 18:43:58.811209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.329 qpair failed and we were unable to recover it. 00:33:30.329 [2024-10-08 18:43:58.811352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.329 [2024-10-08 18:43:58.811427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.329 qpair failed and we were unable to recover it. 00:33:30.329 [2024-10-08 18:43:58.811746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.329 [2024-10-08 18:43:58.811836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.329 qpair failed and we were unable to recover it. 00:33:30.329 [2024-10-08 18:43:58.812138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.329 [2024-10-08 18:43:58.812226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.329 qpair failed and we were unable to recover it. 00:33:30.329 [2024-10-08 18:43:58.812493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.329 [2024-10-08 18:43:58.812531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.329 qpair failed and we were unable to recover it. 00:33:30.329 [2024-10-08 18:43:58.812676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.329 [2024-10-08 18:43:58.812713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.329 qpair failed and we were unable to recover it. 
00:33:30.329 [2024-10-08 18:43:58.812883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.329 [2024-10-08 18:43:58.812918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.329 qpair failed and we were unable to recover it. 00:33:30.329 [2024-10-08 18:43:58.813032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.329 [2024-10-08 18:43:58.813067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.329 qpair failed and we were unable to recover it. 00:33:30.329 [2024-10-08 18:43:58.813221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.329 [2024-10-08 18:43:58.813249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.329 qpair failed and we were unable to recover it. 00:33:30.329 [2024-10-08 18:43:58.813371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.329 [2024-10-08 18:43:58.813407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.329 qpair failed and we were unable to recover it. 00:33:30.329 [2024-10-08 18:43:58.813617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.329 [2024-10-08 18:43:58.813728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.329 qpair failed and we were unable to recover it. 00:33:30.329 [2024-10-08 18:43:58.814000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.329 [2024-10-08 18:43:58.814089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.329 qpair failed and we were unable to recover it. 00:33:30.329 [2024-10-08 18:43:58.814383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.329 [2024-10-08 18:43:58.814417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.329 qpair failed and we were unable to recover it. 00:33:30.329 [2024-10-08 18:43:58.814569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.329 [2024-10-08 18:43:58.814675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.329 qpair failed and we were unable to recover it. 00:33:30.329 [2024-10-08 18:43:58.814957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.329 [2024-10-08 18:43:58.815038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.329 qpair failed and we were unable to recover it. 00:33:30.329 [2024-10-08 18:43:58.815244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.329 [2024-10-08 18:43:58.815289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.329 qpair failed and we were unable to recover it. 
00:33:30.329 [2024-10-08 18:43:58.815466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.329 [2024-10-08 18:43:58.815493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.329 qpair failed and we were unable to recover it. 00:33:30.329 [2024-10-08 18:43:58.815674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.329 [2024-10-08 18:43:58.815715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.329 qpair failed and we were unable to recover it. 00:33:30.329 [2024-10-08 18:43:58.815937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.329 [2024-10-08 18:43:58.815986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.329 qpair failed and we were unable to recover it. 00:33:30.329 [2024-10-08 18:43:58.816225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.329 [2024-10-08 18:43:58.816312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.329 qpair failed and we were unable to recover it. 00:33:30.329 [2024-10-08 18:43:58.816598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.329 [2024-10-08 18:43:58.816632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.329 qpair failed and we were unable to recover it. 00:33:30.329 [2024-10-08 18:43:58.816787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.329 [2024-10-08 18:43:58.816837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.329 qpair failed and we were unable to recover it. 00:33:30.329 [2024-10-08 18:43:58.817153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.329 [2024-10-08 18:43:58.817253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.329 qpair failed and we were unable to recover it. 00:33:30.329 [2024-10-08 18:43:58.817469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.329 [2024-10-08 18:43:58.817507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.329 qpair failed and we were unable to recover it. 00:33:30.329 [2024-10-08 18:43:58.817697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.329 [2024-10-08 18:43:58.817725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.329 qpair failed and we were unable to recover it. 00:33:30.329 [2024-10-08 18:43:58.817883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.329 [2024-10-08 18:43:58.817918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.329 qpair failed and we were unable to recover it. 
00:33:30.329 [2024-10-08 18:43:58.818084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.329 [2024-10-08 18:43:58.818139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.329 qpair failed and we were unable to recover it. 00:33:30.329 [2024-10-08 18:43:58.818372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.329 [2024-10-08 18:43:58.818460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.329 qpair failed and we were unable to recover it. 00:33:30.329 [2024-10-08 18:43:58.818742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.329 [2024-10-08 18:43:58.818779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.329 qpair failed and we were unable to recover it. 00:33:30.329 [2024-10-08 18:43:58.818987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.329 [2024-10-08 18:43:58.819075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.329 qpair failed and we were unable to recover it. 00:33:30.329 [2024-10-08 18:43:58.819379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.329 [2024-10-08 18:43:58.819467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.329 qpair failed and we were unable to recover it. 00:33:30.329 [2024-10-08 18:43:58.819763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.329 [2024-10-08 18:43:58.819792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.329 qpair failed and we were unable to recover it. 00:33:30.329 [2024-10-08 18:43:58.819931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.329 [2024-10-08 18:43:58.819957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.329 qpair failed and we were unable to recover it. 00:33:30.329 [2024-10-08 18:43:58.820094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.329 [2024-10-08 18:43:58.820143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.329 qpair failed and we were unable to recover it. 00:33:30.329 [2024-10-08 18:43:58.820276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.329 [2024-10-08 18:43:58.820312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.329 qpair failed and we were unable to recover it. 00:33:30.329 [2024-10-08 18:43:58.820486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.329 [2024-10-08 18:43:58.820534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.329 qpair failed and we were unable to recover it. 
00:33:30.329 [2024-10-08 18:43:58.820753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.329 [2024-10-08 18:43:58.820790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.329 qpair failed and we were unable to recover it. 00:33:30.329 [2024-10-08 18:43:58.820977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.329 [2024-10-08 18:43:58.821067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.329 qpair failed and we were unable to recover it. 00:33:30.329 [2024-10-08 18:43:58.821371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.329 [2024-10-08 18:43:58.821458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.329 qpair failed and we were unable to recover it. 00:33:30.329 [2024-10-08 18:43:58.821731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.329 [2024-10-08 18:43:58.821820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.329 qpair failed and we were unable to recover it. 00:33:30.329 [2024-10-08 18:43:58.822092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.329 [2024-10-08 18:43:58.822142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.329 qpair failed and we were unable to recover it. 00:33:30.329 [2024-10-08 18:43:58.822340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.329 [2024-10-08 18:43:58.822389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.329 qpair failed and we were unable to recover it. 00:33:30.329 [2024-10-08 18:43:58.822704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.329 [2024-10-08 18:43:58.822795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.329 qpair failed and we were unable to recover it. 00:33:30.329 [2024-10-08 18:43:58.823110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.329 [2024-10-08 18:43:58.823198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.329 qpair failed and we were unable to recover it. 00:33:30.329 [2024-10-08 18:43:58.823480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.329 [2024-10-08 18:43:58.823514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.329 qpair failed and we were unable to recover it. 00:33:30.329 [2024-10-08 18:43:58.823771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.329 [2024-10-08 18:43:58.823821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.329 qpair failed and we were unable to recover it. 
00:33:30.329 [2024-10-08 18:43:58.824020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.329 [2024-10-08 18:43:58.824093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.329 qpair failed and we were unable to recover it. 00:33:30.329 [2024-10-08 18:43:58.824366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.329 [2024-10-08 18:43:58.824456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.329 qpair failed and we were unable to recover it. 00:33:30.329 [2024-10-08 18:43:58.824764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.329 [2024-10-08 18:43:58.824801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.329 qpair failed and we were unable to recover it. 00:33:30.329 [2024-10-08 18:43:58.825043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.329 [2024-10-08 18:43:58.825131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.329 qpair failed and we were unable to recover it. 00:33:30.329 [2024-10-08 18:43:58.825430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.329 [2024-10-08 18:43:58.825521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.329 qpair failed and we were unable to recover it. 00:33:30.329 [2024-10-08 18:43:58.825726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.329 [2024-10-08 18:43:58.825764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.329 qpair failed and we were unable to recover it. 00:33:30.329 [2024-10-08 18:43:58.825945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.329 [2024-10-08 18:43:58.825971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.329 qpair failed and we were unable to recover it. 00:33:30.329 [2024-10-08 18:43:58.826148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.329 [2024-10-08 18:43:58.826183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.329 qpair failed and we were unable to recover it. 00:33:30.329 [2024-10-08 18:43:58.826377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.329 [2024-10-08 18:43:58.826425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.329 qpair failed and we were unable to recover it. 00:33:30.329 [2024-10-08 18:43:58.826692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.330 [2024-10-08 18:43:58.826795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.330 qpair failed and we were unable to recover it. 
00:33:30.330 [2024-10-08 18:43:58.827086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.330 [2024-10-08 18:43:58.827120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.330 qpair failed and we were unable to recover it. 00:33:30.330 [2024-10-08 18:43:58.827338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.330 [2024-10-08 18:43:58.827428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.330 qpair failed and we were unable to recover it. 00:33:30.330 [2024-10-08 18:43:58.827701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.330 [2024-10-08 18:43:58.827792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.330 qpair failed and we were unable to recover it. 00:33:30.330 [2024-10-08 18:43:58.828098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.330 [2024-10-08 18:43:58.828137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.330 qpair failed and we were unable to recover it. 00:33:30.330 [2024-10-08 18:43:58.828289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.330 [2024-10-08 18:43:58.828315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.330 qpair failed and we were unable to recover it. 00:33:30.330 [2024-10-08 18:43:58.828475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.330 [2024-10-08 18:43:58.828523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.330 qpair failed and we were unable to recover it. 00:33:30.330 [2024-10-08 18:43:58.828707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.330 [2024-10-08 18:43:58.828754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.330 qpair failed and we were unable to recover it. 00:33:30.330 [2024-10-08 18:43:58.828933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.330 [2024-10-08 18:43:58.828982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.330 qpair failed and we were unable to recover it. 00:33:30.330 [2024-10-08 18:43:58.829225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.330 [2024-10-08 18:43:58.829261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.330 qpair failed and we were unable to recover it. 00:33:30.330 [2024-10-08 18:43:58.829455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.330 [2024-10-08 18:43:58.829545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.330 qpair failed and we were unable to recover it. 
00:33:30.330 [2024-10-08 18:43:58.829868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.330 [2024-10-08 18:43:58.829955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.330 qpair failed and we were unable to recover it. 00:33:30.330 [2024-10-08 18:43:58.830259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.330 [2024-10-08 18:43:58.830349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.330 qpair failed and we were unable to recover it. 00:33:30.330 [2024-10-08 18:43:58.830618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.330 [2024-10-08 18:43:58.830646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.330 qpair failed and we were unable to recover it. 00:33:30.330 [2024-10-08 18:43:58.830818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.330 [2024-10-08 18:43:58.830855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.330 qpair failed and we were unable to recover it. 00:33:30.330 [2024-10-08 18:43:58.831041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.330 [2024-10-08 18:43:58.831077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.330 qpair failed and we were unable to recover it. 00:33:30.330 [2024-10-08 18:43:58.831265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.330 [2024-10-08 18:43:58.831313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.330 qpair failed and we were unable to recover it. 00:33:30.330 [2024-10-08 18:43:58.831492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.330 [2024-10-08 18:43:58.831527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.330 qpair failed and we were unable to recover it. 00:33:30.330 [2024-10-08 18:43:58.831683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.330 [2024-10-08 18:43:58.831719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.330 qpair failed and we were unable to recover it. 00:33:30.596 [2024-10-08 18:43:58.831938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.596 [2024-10-08 18:43:58.832030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.596 qpair failed and we were unable to recover it. 00:33:30.596 [2024-10-08 18:43:58.832294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.596 [2024-10-08 18:43:58.832380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.596 qpair failed and we were unable to recover it. 
00:33:30.596 [2024-10-08 18:43:58.832674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.596 [2024-10-08 18:43:58.832705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.596 qpair failed and we were unable to recover it. 00:33:30.596 [2024-10-08 18:43:58.832838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.597 [2024-10-08 18:43:58.832874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.597 qpair failed and we were unable to recover it. 00:33:30.597 [2024-10-08 18:43:58.833016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.597 [2024-10-08 18:43:58.833061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.597 qpair failed and we were unable to recover it. 00:33:30.597 [2024-10-08 18:43:58.833215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.597 [2024-10-08 18:43:58.833250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.597 qpair failed and we were unable to recover it. 00:33:30.597 [2024-10-08 18:43:58.833413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.597 [2024-10-08 18:43:58.833448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.597 qpair failed and we were unable to recover it. 00:33:30.597 [2024-10-08 18:43:58.833607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.597 [2024-10-08 18:43:58.833642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.597 qpair failed and we were unable to recover it. 00:33:30.597 [2024-10-08 18:43:58.833808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.597 [2024-10-08 18:43:58.833845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.597 qpair failed and we were unable to recover it. 00:33:30.597 [2024-10-08 18:43:58.834025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.597 [2024-10-08 18:43:58.834061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.597 qpair failed and we were unable to recover it. 00:33:30.597 [2024-10-08 18:43:58.834243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.597 [2024-10-08 18:43:58.834281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.597 qpair failed and we were unable to recover it. 00:33:30.597 [2024-10-08 18:43:58.834447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.597 [2024-10-08 18:43:58.834473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.597 qpair failed and we were unable to recover it. 
00:33:30.597 [2024-10-08 18:43:58.834572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.597 [2024-10-08 18:43:58.834598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.597 qpair failed and we were unable to recover it. 00:33:30.597 [2024-10-08 18:43:58.834737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.597 [2024-10-08 18:43:58.834764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.597 qpair failed and we were unable to recover it. 00:33:30.597 [2024-10-08 18:43:58.834938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.597 [2024-10-08 18:43:58.834974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.597 qpair failed and we were unable to recover it. 00:33:30.597 [2024-10-08 18:43:58.835124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.597 [2024-10-08 18:43:58.835159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.597 qpair failed and we were unable to recover it. 00:33:30.597 [2024-10-08 18:43:58.835285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.597 [2024-10-08 18:43:58.835321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.597 qpair failed and we were unable to recover it. 00:33:30.597 [2024-10-08 18:43:58.835481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.597 [2024-10-08 18:43:58.835564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.597 qpair failed and we were unable to recover it. 00:33:30.597 [2024-10-08 18:43:58.835846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.597 [2024-10-08 18:43:58.835882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.597 qpair failed and we were unable to recover it. 00:33:30.597 [2024-10-08 18:43:58.836030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.597 [2024-10-08 18:43:58.836056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.597 qpair failed and we were unable to recover it. 00:33:30.597 [2024-10-08 18:43:58.836174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.597 [2024-10-08 18:43:58.836200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.597 qpair failed and we were unable to recover it. 00:33:30.597 [2024-10-08 18:43:58.836315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.597 [2024-10-08 18:43:58.836346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.597 qpair failed and we were unable to recover it. 
00:33:30.597 [2024-10-08 18:43:58.836468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.597 [2024-10-08 18:43:58.836504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.597 qpair failed and we were unable to recover it. 00:33:30.597 [2024-10-08 18:43:58.836619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.597 [2024-10-08 18:43:58.836671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f18d4000b90 with addr=10.0.0.2, port=4420 00:33:30.597 qpair failed and we were unable to recover it. 00:33:30.597 [2024-10-08 18:43:58.836858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.597 [2024-10-08 18:43:58.836901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.597 qpair failed and we were unable to recover it. 00:33:30.597 [2024-10-08 18:43:58.837157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.597 [2024-10-08 18:43:58.837226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.597 qpair failed and we were unable to recover it. 00:33:30.597 [2024-10-08 18:43:58.837468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.597 [2024-10-08 18:43:58.837495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.597 qpair failed and we were unable to recover it. 00:33:30.597 [2024-10-08 18:43:58.837615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.597 [2024-10-08 18:43:58.837673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.597 qpair failed and we were unable to recover it. 00:33:30.597 [2024-10-08 18:43:58.837896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.597 [2024-10-08 18:43:58.837961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.597 qpair failed and we were unable to recover it. 00:33:30.597 [2024-10-08 18:43:58.838212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.597 [2024-10-08 18:43:58.838276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.597 qpair failed and we were unable to recover it. 00:33:30.597 [2024-10-08 18:43:58.838542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.597 [2024-10-08 18:43:58.838567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.597 qpair failed and we were unable to recover it. 00:33:30.597 [2024-10-08 18:43:58.838702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.597 [2024-10-08 18:43:58.838738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.597 qpair failed and we were unable to recover it. 
00:33:30.597 [2024-10-08 18:43:58.838881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.597 [2024-10-08 18:43:58.838916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.597 qpair failed and we were unable to recover it. 00:33:30.597 [2024-10-08 18:43:58.839056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.597 [2024-10-08 18:43:58.839091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.597 qpair failed and we were unable to recover it. 00:33:30.597 [2024-10-08 18:43:58.839259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.597 [2024-10-08 18:43:58.839284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.597 qpair failed and we were unable to recover it. 00:33:30.597 [2024-10-08 18:43:58.839415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.597 [2024-10-08 18:43:58.839441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.597 qpair failed and we were unable to recover it. 00:33:30.597 [2024-10-08 18:43:58.839669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.597 [2024-10-08 18:43:58.839735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.597 qpair failed and we were unable to recover it. 00:33:30.597 [2024-10-08 18:43:58.839967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.597 [2024-10-08 18:43:58.840031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.597 qpair failed and we were unable to recover it. 00:33:30.597 [2024-10-08 18:43:58.840250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.597 [2024-10-08 18:43:58.840276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.597 qpair failed and we were unable to recover it. 00:33:30.597 [2024-10-08 18:43:58.840366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.597 [2024-10-08 18:43:58.840392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.597 qpair failed and we were unable to recover it. 00:33:30.597 [2024-10-08 18:43:58.840553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.597 [2024-10-08 18:43:58.840617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.597 qpair failed and we were unable to recover it. 00:33:30.597 [2024-10-08 18:43:58.840881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.597 [2024-10-08 18:43:58.840946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.597 qpair failed and we were unable to recover it. 
00:33:30.597 [2024-10-08 18:43:58.841198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.598 [2024-10-08 18:43:58.841223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.598 qpair failed and we were unable to recover it. 00:33:30.598 [2024-10-08 18:43:58.841319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.598 [2024-10-08 18:43:58.841366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.598 qpair failed and we were unable to recover it. 00:33:30.598 [2024-10-08 18:43:58.841534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.598 [2024-10-08 18:43:58.841568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.598 qpair failed and we were unable to recover it. 00:33:30.598 [2024-10-08 18:43:58.841716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.598 [2024-10-08 18:43:58.841751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.598 qpair failed and we were unable to recover it. 00:33:30.598 [2024-10-08 18:43:58.841895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.598 [2024-10-08 18:43:58.841920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.598 qpair failed and we were unable to recover it. 00:33:30.598 [2024-10-08 18:43:58.842073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.598 [2024-10-08 18:43:58.842117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.598 qpair failed and we were unable to recover it. 00:33:30.598 [2024-10-08 18:43:58.842272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.598 [2024-10-08 18:43:58.842346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.598 qpair failed and we were unable to recover it. 00:33:30.598 [2024-10-08 18:43:58.842571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.598 [2024-10-08 18:43:58.842607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.598 qpair failed and we were unable to recover it. 00:33:30.598 [2024-10-08 18:43:58.842798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.598 [2024-10-08 18:43:58.842824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.598 qpair failed and we were unable to recover it. 00:33:30.598 [2024-10-08 18:43:58.842922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.598 [2024-10-08 18:43:58.842949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.598 qpair failed and we were unable to recover it. 
00:33:30.598 [2024-10-08 18:43:58.843193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.598 [2024-10-08 18:43:58.843256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.598 qpair failed and we were unable to recover it. 00:33:30.598 [2024-10-08 18:43:58.843497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.598 [2024-10-08 18:43:58.843532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.598 qpair failed and we were unable to recover it. 00:33:30.598 [2024-10-08 18:43:58.843668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.598 [2024-10-08 18:43:58.843695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.598 qpair failed and we were unable to recover it. 00:33:30.598 [2024-10-08 18:43:58.843815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.598 [2024-10-08 18:43:58.843840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.598 qpair failed and we were unable to recover it. 00:33:30.598 [2024-10-08 18:43:58.843972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.598 [2024-10-08 18:43:58.844007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.598 qpair failed and we were unable to recover it. 00:33:30.598 [2024-10-08 18:43:58.844175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.598 [2024-10-08 18:43:58.844210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.598 qpair failed and we were unable to recover it. 00:33:30.598 [2024-10-08 18:43:58.844359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.598 [2024-10-08 18:43:58.844385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.598 qpair failed and we were unable to recover it. 00:33:30.598 [2024-10-08 18:43:58.844550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.598 [2024-10-08 18:43:58.844628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.598 qpair failed and we were unable to recover it. 00:33:30.598 [2024-10-08 18:43:58.844863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.598 [2024-10-08 18:43:58.844889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.598 qpair failed and we were unable to recover it. 00:33:30.598 [2024-10-08 18:43:58.845019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.598 [2024-10-08 18:43:58.845053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.598 qpair failed and we were unable to recover it. 
00:33:30.598 [2024-10-08 18:43:58.845201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.598 [2024-10-08 18:43:58.845227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.598 qpair failed and we were unable to recover it. 00:33:30.598 [2024-10-08 18:43:58.845387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.598 [2024-10-08 18:43:58.845454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.598 qpair failed and we were unable to recover it. 00:33:30.598 [2024-10-08 18:43:58.845721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.598 [2024-10-08 18:43:58.845787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.598 qpair failed and we were unable to recover it. 00:33:30.598 [2024-10-08 18:43:58.846032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.598 [2024-10-08 18:43:58.846067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.598 qpair failed and we were unable to recover it. 00:33:30.598 [2024-10-08 18:43:58.846222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.598 [2024-10-08 18:43:58.846247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.598 qpair failed and we were unable to recover it. 00:33:30.598 [2024-10-08 18:43:58.846399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.598 [2024-10-08 18:43:58.846447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.598 qpair failed and we were unable to recover it. 00:33:30.598 [2024-10-08 18:43:58.846601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.598 [2024-10-08 18:43:58.846636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.598 qpair failed and we were unable to recover it. 00:33:30.598 [2024-10-08 18:43:58.846783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.598 [2024-10-08 18:43:58.846818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.598 qpair failed and we were unable to recover it. 00:33:30.598 [2024-10-08 18:43:58.846985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.598 [2024-10-08 18:43:58.847011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.598 qpair failed and we were unable to recover it. 00:33:30.598 [2024-10-08 18:43:58.847161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.598 [2024-10-08 18:43:58.847230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.598 qpair failed and we were unable to recover it. 
00:33:30.598 [2024-10-08 18:43:58.847479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.598 [2024-10-08 18:43:58.847542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.598 qpair failed and we were unable to recover it. 00:33:30.598 [2024-10-08 18:43:58.847783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.598 [2024-10-08 18:43:58.847809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.598 qpair failed and we were unable to recover it. 00:33:30.598 [2024-10-08 18:43:58.847959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.598 [2024-10-08 18:43:58.847984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.598 qpair failed and we were unable to recover it. 00:33:30.598 [2024-10-08 18:43:58.848105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.598 [2024-10-08 18:43:58.848153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.598 qpair failed and we were unable to recover it. 00:33:30.598 [2024-10-08 18:43:58.848268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.598 [2024-10-08 18:43:58.848304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.598 qpair failed and we were unable to recover it. 00:33:30.598 [2024-10-08 18:43:58.848462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.598 [2024-10-08 18:43:58.848526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.598 qpair failed and we were unable to recover it. 00:33:30.598 [2024-10-08 18:43:58.848765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.598 [2024-10-08 18:43:58.848791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.598 qpair failed and we were unable to recover it. 00:33:30.598 [2024-10-08 18:43:58.848957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.598 [2024-10-08 18:43:58.848991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.598 qpair failed and we were unable to recover it. 00:33:30.598 [2024-10-08 18:43:58.849262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.598 [2024-10-08 18:43:58.849327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.598 qpair failed and we were unable to recover it. 00:33:30.598 [2024-10-08 18:43:58.849582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.598 [2024-10-08 18:43:58.849616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.598 qpair failed and we were unable to recover it. 
00:33:30.598 [2024-10-08 18:43:58.849775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.598 [2024-10-08 18:43:58.849801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.598 qpair failed and we were unable to recover it. 00:33:30.598 [2024-10-08 18:43:58.849927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.599 [2024-10-08 18:43:58.849953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.599 qpair failed and we were unable to recover it. 00:33:30.599 [2024-10-08 18:43:58.850107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.599 [2024-10-08 18:43:58.850141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.599 qpair failed and we were unable to recover it. 00:33:30.599 [2024-10-08 18:43:58.850310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.599 [2024-10-08 18:43:58.850344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.599 qpair failed and we were unable to recover it. 00:33:30.599 [2024-10-08 18:43:58.850487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.599 [2024-10-08 18:43:58.850512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.599 qpair failed and we were unable to recover it. 00:33:30.599 [2024-10-08 18:43:58.850671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.599 [2024-10-08 18:43:58.850727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.599 qpair failed and we were unable to recover it. 00:33:30.599 [2024-10-08 18:43:58.850950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.599 [2024-10-08 18:43:58.851014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.599 qpair failed and we were unable to recover it. 00:33:30.599 [2024-10-08 18:43:58.851199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.599 [2024-10-08 18:43:58.851240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.599 qpair failed and we were unable to recover it. 00:33:30.599 [2024-10-08 18:43:58.851385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.599 [2024-10-08 18:43:58.851410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.599 qpair failed and we were unable to recover it. 00:33:30.599 [2024-10-08 18:43:58.851536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.599 [2024-10-08 18:43:58.851562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.599 qpair failed and we were unable to recover it. 
00:33:30.599 [2024-10-08 18:43:58.851676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.599 [2024-10-08 18:43:58.851725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.599 qpair failed and we were unable to recover it. 00:33:30.599 [2024-10-08 18:43:58.851847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.599 [2024-10-08 18:43:58.851875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.599 qpair failed and we were unable to recover it. 00:33:30.599 [2024-10-08 18:43:58.852009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.599 [2024-10-08 18:43:58.852034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.599 qpair failed and we were unable to recover it. 00:33:30.599 [2024-10-08 18:43:58.852182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.599 [2024-10-08 18:43:58.852208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.599 qpair failed and we were unable to recover it. 00:33:30.599 [2024-10-08 18:43:58.852326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.599 [2024-10-08 18:43:58.852351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.599 qpair failed and we were unable to recover it. 00:33:30.599 [2024-10-08 18:43:58.852474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.599 [2024-10-08 18:43:58.852500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.599 qpair failed and we were unable to recover it. 00:33:30.599 [2024-10-08 18:43:58.852602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.599 [2024-10-08 18:43:58.852628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.599 qpair failed and we were unable to recover it. 00:33:30.599 [2024-10-08 18:43:58.852752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.599 [2024-10-08 18:43:58.852778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.599 qpair failed and we were unable to recover it. 00:33:30.599 [2024-10-08 18:43:58.852936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.599 [2024-10-08 18:43:58.852962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.599 qpair failed and we were unable to recover it. 00:33:30.599 [2024-10-08 18:43:58.853083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.599 [2024-10-08 18:43:58.853108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.599 qpair failed and we were unable to recover it. 
00:33:30.599 [2024-10-08 18:43:58.853209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.599 [2024-10-08 18:43:58.853234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.599 qpair failed and we were unable to recover it. 00:33:30.599 [2024-10-08 18:43:58.853354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.599 [2024-10-08 18:43:58.853379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.599 qpair failed and we were unable to recover it. 00:33:30.599 [2024-10-08 18:43:58.853500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.599 [2024-10-08 18:43:58.853526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.599 qpair failed and we were unable to recover it. 00:33:30.599 [2024-10-08 18:43:58.853681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.599 [2024-10-08 18:43:58.853683] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:30.599 [2024-10-08 18:43:58.853709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.599 qpair failed and we were unable to recover it. 00:33:30.599 [2024-10-08 18:43:58.853723] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:30.599 [2024-10-08 18:43:58.853741] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:30.599 [2024-10-08 18:43:58.853755] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:30.599 [2024-10-08 18:43:58.853767] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:30.599 [2024-10-08 18:43:58.853850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.599 [2024-10-08 18:43:58.853874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.599 qpair failed and we were unable to recover it. 00:33:30.599 [2024-10-08 18:43:58.854024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.599 [2024-10-08 18:43:58.854065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.599 qpair failed and we were unable to recover it. 00:33:30.599 [2024-10-08 18:43:58.854193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.599 [2024-10-08 18:43:58.854222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.599 qpair failed and we were unable to recover it. 00:33:30.599 [2024-10-08 18:43:58.854327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.599 [2024-10-08 18:43:58.854355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.599 qpair failed and we were unable to recover it. 
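The app_setup_trace notices interleaved in the block above explain how to inspect the running nvmf target; the sketch below simply restates them as commands (the spdk_trace invocation and the /dev/shm/nvmf_trace.0 path are taken from the notices themselves, while the copy destination is an arbitrary choice):

    # Capture a snapshot of the nvmf target's tracepoints at runtime
    # (tracepoint group mask 0xFFFF was enabled above).
    spdk_trace -s nvmf -i 0
    # Or keep the shared-memory trace file for offline analysis/debug.
    cp /dev/shm/nvmf_trace.0 ./nvmf_trace.0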
00:33:30.599 [2024-10-08 18:43:58.854520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.599 [2024-10-08 18:43:58.854545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.599 qpair failed and we were unable to recover it. 00:33:30.599 [2024-10-08 18:43:58.854670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.599 [2024-10-08 18:43:58.854714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.599 qpair failed and we were unable to recover it. 00:33:30.599 [2024-10-08 18:43:58.854862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.599 [2024-10-08 18:43:58.854897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.599 qpair failed and we were unable to recover it. 00:33:30.599 [2024-10-08 18:43:58.855067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.599 [2024-10-08 18:43:58.855101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.599 qpair failed and we were unable to recover it. 00:33:30.599 [2024-10-08 18:43:58.855231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.599 [2024-10-08 18:43:58.855261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.599 qpair failed and we were unable to recover it. 00:33:30.599 [2024-10-08 18:43:58.855436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.599 [2024-10-08 18:43:58.855481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.599 qpair failed and we were unable to recover it. 00:33:30.599 [2024-10-08 18:43:58.855626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.599 [2024-10-08 18:43:58.855668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.599 qpair failed and we were unable to recover it. 00:33:30.599 [2024-10-08 18:43:58.855838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.599 [2024-10-08 18:43:58.855873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.599 qpair failed and we were unable to recover it. 00:33:30.599 [2024-10-08 18:43:58.855845] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:33:30.599 [2024-10-08 18:43:58.855910] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:33:30.599 [2024-10-08 18:43:58.856018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.599 [2024-10-08 18:43:58.856043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.599 [2024-10-08 18:43:58.855980] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 7 00:33:30.599 qpair failed and we were unable to recover it. 
00:33:30.599 [2024-10-08 18:43:58.855984] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:33:30.599 [2024-10-08 18:43:58.856170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.599 [2024-10-08 18:43:58.856194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.599 qpair failed and we were unable to recover it. 00:33:30.599 [2024-10-08 18:43:58.856344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.600 [2024-10-08 18:43:58.856379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.600 qpair failed and we were unable to recover it. 00:33:30.600 [2024-10-08 18:43:58.856522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.600 [2024-10-08 18:43:58.856556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.600 qpair failed and we were unable to recover it. 00:33:30.600 [2024-10-08 18:43:58.856696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.600 [2024-10-08 18:43:58.856722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.600 qpair failed and we were unable to recover it. 00:33:30.600 [2024-10-08 18:43:58.856820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.600 [2024-10-08 18:43:58.856846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.600 qpair failed and we were unable to recover it. 00:33:30.600 [2024-10-08 18:43:58.856975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.600 [2024-10-08 18:43:58.857010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.600 qpair failed and we were unable to recover it. 00:33:30.600 [2024-10-08 18:43:58.857178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.600 [2024-10-08 18:43:58.857212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.600 qpair failed and we were unable to recover it. 00:33:30.600 [2024-10-08 18:43:58.857376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.600 [2024-10-08 18:43:58.857402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.600 qpair failed and we were unable to recover it. 00:33:30.600 [2024-10-08 18:43:58.857501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.600 [2024-10-08 18:43:58.857527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.600 qpair failed and we were unable to recover it. 00:33:30.600 [2024-10-08 18:43:58.857643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.600 [2024-10-08 18:43:58.857685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.600 qpair failed and we were unable to recover it. 
00:33:30.600 [2024-10-08 18:43:58.857856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.600 [2024-10-08 18:43:58.857891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.600 qpair failed and we were unable to recover it. 00:33:30.600 [2024-10-08 18:43:58.858042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.600 [2024-10-08 18:43:58.858068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.600 qpair failed and we were unable to recover it. 00:33:30.600 [2024-10-08 18:43:58.858149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.600 [2024-10-08 18:43:58.858175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.600 qpair failed and we were unable to recover it. 00:33:30.600 [2024-10-08 18:43:58.858332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.600 [2024-10-08 18:43:58.858366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.600 qpair failed and we were unable to recover it. 00:33:30.600 [2024-10-08 18:43:58.858479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.600 [2024-10-08 18:43:58.858513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.600 qpair failed and we were unable to recover it. 00:33:30.600 [2024-10-08 18:43:58.858682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.600 [2024-10-08 18:43:58.858708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.600 qpair failed and we were unable to recover it. 00:33:30.600 [2024-10-08 18:43:58.858826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.600 [2024-10-08 18:43:58.858851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.600 qpair failed and we were unable to recover it. 00:33:30.600 [2024-10-08 18:43:58.859017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.600 [2024-10-08 18:43:58.859051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.600 qpair failed and we were unable to recover it. 00:33:30.600 [2024-10-08 18:43:58.859238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.600 [2024-10-08 18:43:58.859272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.600 qpair failed and we were unable to recover it. 00:33:30.600 [2024-10-08 18:43:58.859445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.600 [2024-10-08 18:43:58.859480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.600 qpair failed and we were unable to recover it. 
00:33:30.600 [2024-10-08 18:43:58.859667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.600 [2024-10-08 18:43:58.859703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420
00:33:30.600 qpair failed and we were unable to recover it.
00:33:30.605 [... the same three-line error block repeats ~210 times, with timestamps from 18:43:58.859667 through 18:43:58.897388 (build clock 00:33:30.600-00:33:30.605); every repetition reports the identical tqpair=0x912630, addr=10.0.0.2, port=4420, errno = 111 and ends with "qpair failed and we were unable to recover it." ...]
00:33:30.605 [2024-10-08 18:43:58.897496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.605 [2024-10-08 18:43:58.897530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.606 qpair failed and we were unable to recover it. 00:33:30.606 [2024-10-08 18:43:58.897697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.606 [2024-10-08 18:43:58.897724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.606 qpair failed and we were unable to recover it. 00:33:30.606 [2024-10-08 18:43:58.897852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.606 [2024-10-08 18:43:58.897897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.606 qpair failed and we were unable to recover it. 00:33:30.606 [2024-10-08 18:43:58.898032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.606 [2024-10-08 18:43:58.898066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.606 qpair failed and we were unable to recover it. 00:33:30.606 [2024-10-08 18:43:58.898208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.606 [2024-10-08 18:43:58.898243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.606 qpair failed and we were unable to recover it. 00:33:30.606 [2024-10-08 18:43:58.898391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.606 [2024-10-08 18:43:58.898417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.606 qpair failed and we were unable to recover it. 00:33:30.606 [2024-10-08 18:43:58.898569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.606 [2024-10-08 18:43:58.898615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.606 qpair failed and we were unable to recover it. 00:33:30.606 [2024-10-08 18:43:58.898796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.606 [2024-10-08 18:43:58.898831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.606 qpair failed and we were unable to recover it. 00:33:30.606 [2024-10-08 18:43:58.898976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.606 [2024-10-08 18:43:58.899010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.606 qpair failed and we were unable to recover it. 00:33:30.606 [2024-10-08 18:43:58.899180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.606 [2024-10-08 18:43:58.899206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.606 qpair failed and we were unable to recover it. 
00:33:30.606 [2024-10-08 18:43:58.899327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.606 [2024-10-08 18:43:58.899352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.606 qpair failed and we were unable to recover it. 00:33:30.606 [2024-10-08 18:43:58.899508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.606 [2024-10-08 18:43:58.899543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.606 qpair failed and we were unable to recover it. 00:33:30.606 [2024-10-08 18:43:58.899688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.606 [2024-10-08 18:43:58.899724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.606 qpair failed and we were unable to recover it. 00:33:30.606 [2024-10-08 18:43:58.899856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.606 [2024-10-08 18:43:58.899882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.606 qpair failed and we were unable to recover it. 00:33:30.606 [2024-10-08 18:43:58.900001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.606 [2024-10-08 18:43:58.900027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.606 qpair failed and we were unable to recover it. 00:33:30.606 [2024-10-08 18:43:58.900155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.606 [2024-10-08 18:43:58.900189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.606 qpair failed and we were unable to recover it. 00:33:30.606 [2024-10-08 18:43:58.900356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.606 [2024-10-08 18:43:58.900391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.606 qpair failed and we were unable to recover it. 00:33:30.606 [2024-10-08 18:43:58.900496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.606 [2024-10-08 18:43:58.900522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.606 qpair failed and we were unable to recover it. 00:33:30.606 [2024-10-08 18:43:58.900677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.606 [2024-10-08 18:43:58.900726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.606 qpair failed and we were unable to recover it. 00:33:30.606 [2024-10-08 18:43:58.900846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.606 [2024-10-08 18:43:58.900872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.606 qpair failed and we were unable to recover it. 
00:33:30.606 [2024-10-08 18:43:58.900998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.606 [2024-10-08 18:43:58.901033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.606 qpair failed and we were unable to recover it. 00:33:30.606 [2024-10-08 18:43:58.901202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.606 [2024-10-08 18:43:58.901228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.606 qpair failed and we were unable to recover it. 00:33:30.606 [2024-10-08 18:43:58.901320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.606 [2024-10-08 18:43:58.901346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.606 qpair failed and we were unable to recover it. 00:33:30.606 [2024-10-08 18:43:58.901493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.606 [2024-10-08 18:43:58.901528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.606 qpair failed and we were unable to recover it. 00:33:30.606 [2024-10-08 18:43:58.901665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.606 [2024-10-08 18:43:58.901701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.606 qpair failed and we were unable to recover it. 00:33:30.606 [2024-10-08 18:43:58.901869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.606 [2024-10-08 18:43:58.901894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.606 qpair failed and we were unable to recover it. 00:33:30.606 [2024-10-08 18:43:58.902022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.606 [2024-10-08 18:43:58.902048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.606 qpair failed and we were unable to recover it. 00:33:30.606 [2024-10-08 18:43:58.902229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.606 [2024-10-08 18:43:58.902265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.606 qpair failed and we were unable to recover it. 00:33:30.606 [2024-10-08 18:43:58.902399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.606 [2024-10-08 18:43:58.902434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.606 qpair failed and we were unable to recover it. 00:33:30.606 [2024-10-08 18:43:58.902577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.606 [2024-10-08 18:43:58.902602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.606 qpair failed and we were unable to recover it. 
00:33:30.606 [2024-10-08 18:43:58.902726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.606 [2024-10-08 18:43:58.902752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.606 qpair failed and we were unable to recover it. 00:33:30.606 [2024-10-08 18:43:58.902893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.606 [2024-10-08 18:43:58.902928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.606 qpair failed and we were unable to recover it. 00:33:30.606 [2024-10-08 18:43:58.903099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.606 [2024-10-08 18:43:58.903134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.606 qpair failed and we were unable to recover it. 00:33:30.606 [2024-10-08 18:43:58.903311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.606 [2024-10-08 18:43:58.903336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.606 qpair failed and we were unable to recover it. 00:33:30.606 [2024-10-08 18:43:58.903457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.606 [2024-10-08 18:43:58.903500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.606 qpair failed and we were unable to recover it. 00:33:30.606 [2024-10-08 18:43:58.903673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.606 [2024-10-08 18:43:58.903709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.606 qpair failed and we were unable to recover it. 00:33:30.606 [2024-10-08 18:43:58.903843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.606 [2024-10-08 18:43:58.903883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.606 qpair failed and we were unable to recover it. 00:33:30.606 [2024-10-08 18:43:58.904032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.606 [2024-10-08 18:43:58.904058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.606 qpair failed and we were unable to recover it. 00:33:30.606 [2024-10-08 18:43:58.904211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.606 [2024-10-08 18:43:58.904237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.606 qpair failed and we were unable to recover it. 00:33:30.606 [2024-10-08 18:43:58.904394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.606 [2024-10-08 18:43:58.904429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.606 qpair failed and we were unable to recover it. 
00:33:30.606 [2024-10-08 18:43:58.904574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.607 [2024-10-08 18:43:58.904609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.607 qpair failed and we were unable to recover it. 00:33:30.607 [2024-10-08 18:43:58.904766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.607 [2024-10-08 18:43:58.904792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.607 qpair failed and we were unable to recover it. 00:33:30.607 [2024-10-08 18:43:58.904941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.607 [2024-10-08 18:43:58.904987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.607 qpair failed and we were unable to recover it. 00:33:30.607 [2024-10-08 18:43:58.905128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.607 [2024-10-08 18:43:58.905163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.607 qpair failed and we were unable to recover it. 00:33:30.607 [2024-10-08 18:43:58.905334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.607 [2024-10-08 18:43:58.905369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.607 qpair failed and we were unable to recover it. 00:33:30.607 [2024-10-08 18:43:58.905534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.607 [2024-10-08 18:43:58.905560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.607 qpair failed and we were unable to recover it. 00:33:30.607 [2024-10-08 18:43:58.905709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.607 [2024-10-08 18:43:58.905755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.607 qpair failed and we were unable to recover it. 00:33:30.607 [2024-10-08 18:43:58.905897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.607 [2024-10-08 18:43:58.905933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.607 qpair failed and we were unable to recover it. 00:33:30.607 [2024-10-08 18:43:58.906077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.607 [2024-10-08 18:43:58.906112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.607 qpair failed and we were unable to recover it. 00:33:30.607 [2024-10-08 18:43:58.906255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.607 [2024-10-08 18:43:58.906281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.607 qpair failed and we were unable to recover it. 
00:33:30.607 [2024-10-08 18:43:58.906377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.607 [2024-10-08 18:43:58.906403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.607 qpair failed and we were unable to recover it. 00:33:30.607 [2024-10-08 18:43:58.906628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.607 [2024-10-08 18:43:58.906671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.607 qpair failed and we were unable to recover it. 00:33:30.607 [2024-10-08 18:43:58.906819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.607 [2024-10-08 18:43:58.906844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.607 qpair failed and we were unable to recover it. 00:33:30.607 [2024-10-08 18:43:58.906967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.607 [2024-10-08 18:43:58.906993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.607 qpair failed and we were unable to recover it. 00:33:30.607 [2024-10-08 18:43:58.907166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.607 [2024-10-08 18:43:58.907200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.607 qpair failed and we were unable to recover it. 00:33:30.607 [2024-10-08 18:43:58.907401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.607 [2024-10-08 18:43:58.907435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.607 qpair failed and we were unable to recover it. 00:33:30.607 [2024-10-08 18:43:58.907603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.607 [2024-10-08 18:43:58.907638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.607 qpair failed and we were unable to recover it. 00:33:30.607 [2024-10-08 18:43:58.907801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.607 [2024-10-08 18:43:58.907827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.607 qpair failed and we were unable to recover it. 00:33:30.607 [2024-10-08 18:43:58.907947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.607 [2024-10-08 18:43:58.907973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.607 qpair failed and we were unable to recover it. 00:33:30.607 [2024-10-08 18:43:58.908119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.607 [2024-10-08 18:43:58.908154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.607 qpair failed and we were unable to recover it. 
00:33:30.607 [2024-10-08 18:43:58.908295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.607 [2024-10-08 18:43:58.908329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.607 qpair failed and we were unable to recover it. 00:33:30.607 [2024-10-08 18:43:58.908472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.607 [2024-10-08 18:43:58.908497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.607 qpair failed and we were unable to recover it. 00:33:30.607 [2024-10-08 18:43:58.908654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.607 [2024-10-08 18:43:58.908698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.607 qpair failed and we were unable to recover it. 00:33:30.607 [2024-10-08 18:43:58.908843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.607 [2024-10-08 18:43:58.908878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.607 qpair failed and we were unable to recover it. 00:33:30.607 [2024-10-08 18:43:58.909056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.607 [2024-10-08 18:43:58.909091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.607 qpair failed and we were unable to recover it. 00:33:30.607 [2024-10-08 18:43:58.909232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.607 [2024-10-08 18:43:58.909258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.607 qpair failed and we were unable to recover it. 00:33:30.607 [2024-10-08 18:43:58.909351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.607 [2024-10-08 18:43:58.909376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.607 qpair failed and we were unable to recover it. 00:33:30.607 [2024-10-08 18:43:58.909512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.607 [2024-10-08 18:43:58.909547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.607 qpair failed and we were unable to recover it. 00:33:30.607 [2024-10-08 18:43:58.909647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.607 [2024-10-08 18:43:58.909692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.607 qpair failed and we were unable to recover it. 00:33:30.607 [2024-10-08 18:43:58.909868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.607 [2024-10-08 18:43:58.909893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.607 qpair failed and we were unable to recover it. 
00:33:30.607 [2024-10-08 18:43:58.909985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.607 [2024-10-08 18:43:58.910010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.607 qpair failed and we were unable to recover it. 00:33:30.607 [2024-10-08 18:43:58.910118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.607 [2024-10-08 18:43:58.910153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.607 qpair failed and we were unable to recover it. 00:33:30.607 [2024-10-08 18:43:58.910328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.607 [2024-10-08 18:43:58.910363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.607 qpair failed and we were unable to recover it. 00:33:30.607 [2024-10-08 18:43:58.910536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.607 [2024-10-08 18:43:58.910561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.607 qpair failed and we were unable to recover it. 00:33:30.607 [2024-10-08 18:43:58.910684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.607 [2024-10-08 18:43:58.910729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.607 qpair failed and we were unable to recover it. 00:33:30.607 [2024-10-08 18:43:58.910867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.607 [2024-10-08 18:43:58.910902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.607 qpair failed and we were unable to recover it. 00:33:30.607 [2024-10-08 18:43:58.911070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.607 [2024-10-08 18:43:58.911105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.607 qpair failed and we were unable to recover it. 00:33:30.607 [2024-10-08 18:43:58.911274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.607 [2024-10-08 18:43:58.911304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.607 qpair failed and we were unable to recover it. 00:33:30.607 [2024-10-08 18:43:58.911417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.607 [2024-10-08 18:43:58.911443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.607 qpair failed and we were unable to recover it. 00:33:30.607 [2024-10-08 18:43:58.911604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.607 [2024-10-08 18:43:58.911639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.607 qpair failed and we were unable to recover it. 
00:33:30.607 [2024-10-08 18:43:58.911812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.607 [2024-10-08 18:43:58.911848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.607 qpair failed and we were unable to recover it. 00:33:30.607 [2024-10-08 18:43:58.911993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.608 [2024-10-08 18:43:58.912019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.608 qpair failed and we were unable to recover it. 00:33:30.608 [2024-10-08 18:43:58.912173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.608 [2024-10-08 18:43:58.912199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.608 qpair failed and we were unable to recover it. 00:33:30.608 [2024-10-08 18:43:58.912346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.608 [2024-10-08 18:43:58.912381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.608 qpair failed and we were unable to recover it. 00:33:30.608 [2024-10-08 18:43:58.912522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.608 [2024-10-08 18:43:58.912556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.608 qpair failed and we were unable to recover it. 00:33:30.608 [2024-10-08 18:43:58.912696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.608 [2024-10-08 18:43:58.912722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.608 qpair failed and we were unable to recover it. 00:33:30.608 [2024-10-08 18:43:58.912882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.608 [2024-10-08 18:43:58.912907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.608 qpair failed and we were unable to recover it. 00:33:30.608 [2024-10-08 18:43:58.913033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.608 [2024-10-08 18:43:58.913067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.608 qpair failed and we were unable to recover it. 00:33:30.608 [2024-10-08 18:43:58.913237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.608 [2024-10-08 18:43:58.913272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.608 qpair failed and we were unable to recover it. 00:33:30.608 [2024-10-08 18:43:58.913416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.608 [2024-10-08 18:43:58.913441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.608 qpair failed and we were unable to recover it. 
00:33:30.608 [2024-10-08 18:43:58.913592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.608 [2024-10-08 18:43:58.913637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.608 qpair failed and we were unable to recover it. 00:33:30.608 [2024-10-08 18:43:58.913798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.608 [2024-10-08 18:43:58.913824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.608 qpair failed and we were unable to recover it. 00:33:30.608 [2024-10-08 18:43:58.913978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.608 [2024-10-08 18:43:58.914013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.608 qpair failed and we were unable to recover it. 00:33:30.608 [2024-10-08 18:43:58.914181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.608 [2024-10-08 18:43:58.914206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.608 qpair failed and we were unable to recover it. 00:33:30.608 [2024-10-08 18:43:58.914355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.608 [2024-10-08 18:43:58.914400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.608 qpair failed and we were unable to recover it. 00:33:30.608 [2024-10-08 18:43:58.914541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.608 [2024-10-08 18:43:58.914576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.608 qpair failed and we were unable to recover it. 00:33:30.608 [2024-10-08 18:43:58.914752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.608 [2024-10-08 18:43:58.914787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.608 qpair failed and we were unable to recover it. 00:33:30.608 [2024-10-08 18:43:58.914915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.608 [2024-10-08 18:43:58.914941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.608 qpair failed and we were unable to recover it. 00:33:30.608 [2024-10-08 18:43:58.915032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.608 [2024-10-08 18:43:58.915057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.608 qpair failed and we were unable to recover it. 00:33:30.608 [2024-10-08 18:43:58.915221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.608 [2024-10-08 18:43:58.915255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.608 qpair failed and we were unable to recover it. 
00:33:30.608 [2024-10-08 18:43:58.915401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.608 [2024-10-08 18:43:58.915436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.608 qpair failed and we were unable to recover it. 00:33:30.608 [2024-10-08 18:43:58.915553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.608 [2024-10-08 18:43:58.915579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.608 qpair failed and we were unable to recover it. 00:33:30.608 [2024-10-08 18:43:58.915733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.608 [2024-10-08 18:43:58.915760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.608 qpair failed and we were unable to recover it. 00:33:30.608 [2024-10-08 18:43:58.915916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.608 [2024-10-08 18:43:58.915950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.608 qpair failed and we were unable to recover it. 00:33:30.608 [2024-10-08 18:43:58.916096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.608 [2024-10-08 18:43:58.916137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.608 qpair failed and we were unable to recover it. 00:33:30.608 [2024-10-08 18:43:58.916285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.608 [2024-10-08 18:43:58.916311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.608 qpair failed and we were unable to recover it. 00:33:30.608 [2024-10-08 18:43:58.916398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.608 [2024-10-08 18:43:58.916423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.608 qpair failed and we were unable to recover it. 00:33:30.608 [2024-10-08 18:43:58.916565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.608 [2024-10-08 18:43:58.916600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.608 qpair failed and we were unable to recover it. 00:33:30.608 [2024-10-08 18:43:58.916758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.608 [2024-10-08 18:43:58.916793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.608 qpair failed and we were unable to recover it. 00:33:30.608 [2024-10-08 18:43:58.916934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.608 [2024-10-08 18:43:58.916959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.608 qpair failed and we were unable to recover it. 
00:33:30.608 [2024-10-08 18:43:58.917109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.608 [2024-10-08 18:43:58.917134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.608 qpair failed and we were unable to recover it. 00:33:30.608 [2024-10-08 18:43:58.917313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.608 [2024-10-08 18:43:58.917348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.608 qpair failed and we were unable to recover it. 00:33:30.608 [2024-10-08 18:43:58.917458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.608 [2024-10-08 18:43:58.917493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.608 qpair failed and we were unable to recover it. 00:33:30.608 [2024-10-08 18:43:58.917638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.608 [2024-10-08 18:43:58.917670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.608 qpair failed and we were unable to recover it. 00:33:30.608 [2024-10-08 18:43:58.917765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.608 [2024-10-08 18:43:58.917791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.608 qpair failed and we were unable to recover it. 00:33:30.608 [2024-10-08 18:43:58.917944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.608 [2024-10-08 18:43:58.917979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.608 qpair failed and we were unable to recover it. 00:33:30.608 [2024-10-08 18:43:58.918114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.608 [2024-10-08 18:43:58.918149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.608 qpair failed and we were unable to recover it. 00:33:30.608 [2024-10-08 18:43:58.918400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.608 [2024-10-08 18:43:58.918425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.608 qpair failed and we were unable to recover it. 00:33:30.608 [2024-10-08 18:43:58.918675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.608 [2024-10-08 18:43:58.918711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.608 qpair failed and we were unable to recover it. 00:33:30.608 [2024-10-08 18:43:58.918855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.608 [2024-10-08 18:43:58.918890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.608 qpair failed and we were unable to recover it. 
00:33:30.608 [2024-10-08 18:43:58.919058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.608 [2024-10-08 18:43:58.919092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.608 qpair failed and we were unable to recover it. 00:33:30.608 [2024-10-08 18:43:58.919256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.608 [2024-10-08 18:43:58.919282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.609 qpair failed and we were unable to recover it. 00:33:30.609 [2024-10-08 18:43:58.919431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.609 [2024-10-08 18:43:58.919477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.609 qpair failed and we were unable to recover it. 00:33:30.609 [2024-10-08 18:43:58.919617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.609 [2024-10-08 18:43:58.919661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.609 qpair failed and we were unable to recover it. 00:33:30.609 [2024-10-08 18:43:58.919772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.609 [2024-10-08 18:43:58.919807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.609 qpair failed and we were unable to recover it. 00:33:30.609 [2024-10-08 18:43:58.919921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.609 [2024-10-08 18:43:58.919946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.609 qpair failed and we were unable to recover it. 00:33:30.609 [2024-10-08 18:43:58.920067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.609 [2024-10-08 18:43:58.920093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.609 qpair failed and we were unable to recover it. 00:33:30.609 [2024-10-08 18:43:58.920249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.609 [2024-10-08 18:43:58.920284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.609 qpair failed and we were unable to recover it. 00:33:30.609 [2024-10-08 18:43:58.920421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.609 [2024-10-08 18:43:58.920456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.609 qpair failed and we were unable to recover it. 00:33:30.609 [2024-10-08 18:43:58.920638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.609 [2024-10-08 18:43:58.920682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.609 qpair failed and we were unable to recover it. 
00:33:30.609 [2024-10-08 18:43:58.920856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.609 [2024-10-08 18:43:58.920881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.609 qpair failed and we were unable to recover it. 00:33:30.609 [2024-10-08 18:43:58.921036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.609 [2024-10-08 18:43:58.921071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.609 qpair failed and we were unable to recover it. 00:33:30.609 [2024-10-08 18:43:58.921220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.609 [2024-10-08 18:43:58.921255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.609 qpair failed and we were unable to recover it. 00:33:30.609 [2024-10-08 18:43:58.921419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.609 [2024-10-08 18:43:58.921445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.609 qpair failed and we were unable to recover it. 00:33:30.609 [2024-10-08 18:43:58.921562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.609 [2024-10-08 18:43:58.921588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.609 qpair failed and we were unable to recover it. 00:33:30.609 [2024-10-08 18:43:58.921764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.609 [2024-10-08 18:43:58.921790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.609 qpair failed and we were unable to recover it. 00:33:30.609 [2024-10-08 18:43:58.921958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.609 [2024-10-08 18:43:58.921993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.609 qpair failed and we were unable to recover it. 00:33:30.609 [2024-10-08 18:43:58.922128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.609 [2024-10-08 18:43:58.922153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.609 qpair failed and we were unable to recover it. 00:33:30.609 [2024-10-08 18:43:58.922281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.609 [2024-10-08 18:43:58.922306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.609 qpair failed and we were unable to recover it. 00:33:30.609 [2024-10-08 18:43:58.922444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.609 [2024-10-08 18:43:58.922479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.609 qpair failed and we were unable to recover it. 
00:33:30.609 [2024-10-08 18:43:58.922648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.609 [2024-10-08 18:43:58.922692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.609 qpair failed and we were unable to recover it. 00:33:30.609 [2024-10-08 18:43:58.922800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.609 [2024-10-08 18:43:58.922825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.609 qpair failed and we were unable to recover it. 00:33:30.609 [2024-10-08 18:43:58.922948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.609 [2024-10-08 18:43:58.922974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.609 qpair failed and we were unable to recover it. 00:33:30.609 [2024-10-08 18:43:58.923082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.609 [2024-10-08 18:43:58.923116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.609 qpair failed and we were unable to recover it. 00:33:30.609 [2024-10-08 18:43:58.923285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.609 [2024-10-08 18:43:58.923320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.609 qpair failed and we were unable to recover it. 00:33:30.609 [2024-10-08 18:43:58.923458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.609 [2024-10-08 18:43:58.923487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.609 qpair failed and we were unable to recover it. 00:33:30.609 [2024-10-08 18:43:58.923609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.609 [2024-10-08 18:43:58.923634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.609 qpair failed and we were unable to recover it. 00:33:30.609 [2024-10-08 18:43:58.923796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.609 [2024-10-08 18:43:58.923830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.609 qpair failed and we were unable to recover it. 00:33:30.609 [2024-10-08 18:43:58.923967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.609 [2024-10-08 18:43:58.924002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.609 qpair failed and we were unable to recover it. 00:33:30.609 [2024-10-08 18:43:58.924138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.609 [2024-10-08 18:43:58.924164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.609 qpair failed and we were unable to recover it. 
00:33:30.609 [2024-10-08 18:43:58.924277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.609 [2024-10-08 18:43:58.924302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.609 qpair failed and we were unable to recover it. 00:33:30.609 [2024-10-08 18:43:58.924451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.609 [2024-10-08 18:43:58.924486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.609 qpair failed and we were unable to recover it. 00:33:30.609 [2024-10-08 18:43:58.924587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.609 [2024-10-08 18:43:58.924621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.609 qpair failed and we were unable to recover it. 00:33:30.609 [2024-10-08 18:43:58.924772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.609 [2024-10-08 18:43:58.924798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.609 qpair failed and we were unable to recover it. 00:33:30.609 [2024-10-08 18:43:58.924916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.609 [2024-10-08 18:43:58.924941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.609 qpair failed and we were unable to recover it. 00:33:30.609 [2024-10-08 18:43:58.925067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.609 [2024-10-08 18:43:58.925102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.609 qpair failed and we were unable to recover it. 00:33:30.609 [2024-10-08 18:43:58.925271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.609 [2024-10-08 18:43:58.925305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.609 qpair failed and we were unable to recover it. 00:33:30.609 [2024-10-08 18:43:58.925475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.609 [2024-10-08 18:43:58.925500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.609 qpair failed and we were unable to recover it. 00:33:30.609 [2024-10-08 18:43:58.925619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.609 [2024-10-08 18:43:58.925644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.609 qpair failed and we were unable to recover it. 00:33:30.609 [2024-10-08 18:43:58.925809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.609 [2024-10-08 18:43:58.925844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.609 qpair failed and we were unable to recover it. 
00:33:30.609 [2024-10-08 18:43:58.926014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.609 [2024-10-08 18:43:58.926049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.609 qpair failed and we were unable to recover it. 00:33:30.609 [2024-10-08 18:43:58.926189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.609 [2024-10-08 18:43:58.926215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.609 qpair failed and we were unable to recover it. 00:33:30.610 [2024-10-08 18:43:58.926333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.610 [2024-10-08 18:43:58.926358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.610 qpair failed and we were unable to recover it. 00:33:30.610 [2024-10-08 18:43:58.926513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.610 [2024-10-08 18:43:58.926548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.610 qpair failed and we were unable to recover it. 00:33:30.610 [2024-10-08 18:43:58.926688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.610 [2024-10-08 18:43:58.926724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.610 qpair failed and we were unable to recover it. 00:33:30.610 [2024-10-08 18:43:58.926849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.610 [2024-10-08 18:43:58.926874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.610 qpair failed and we were unable to recover it. 00:33:30.610 [2024-10-08 18:43:58.927034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.610 [2024-10-08 18:43:58.927060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.610 qpair failed and we were unable to recover it. 00:33:30.610 [2024-10-08 18:43:58.927249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.610 [2024-10-08 18:43:58.927283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.610 qpair failed and we were unable to recover it. 00:33:30.610 [2024-10-08 18:43:58.927435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.610 [2024-10-08 18:43:58.927469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.610 qpair failed and we were unable to recover it. 00:33:30.610 [2024-10-08 18:43:58.927599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.610 [2024-10-08 18:43:58.927645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.610 qpair failed and we were unable to recover it. 
00:33:30.610 [2024-10-08 18:43:58.927835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.610 [2024-10-08 18:43:58.927861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.610 qpair failed and we were unable to recover it. 00:33:30.610 [2024-10-08 18:43:58.927991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.610 [2024-10-08 18:43:58.928026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.610 qpair failed and we were unable to recover it. 00:33:30.610 [2024-10-08 18:43:58.928161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.610 [2024-10-08 18:43:58.928201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.610 qpair failed and we were unable to recover it. 00:33:30.610 [2024-10-08 18:43:58.928350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.610 [2024-10-08 18:43:58.928375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.610 qpair failed and we were unable to recover it. 00:33:30.610 [2024-10-08 18:43:58.928524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.610 [2024-10-08 18:43:58.928570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.610 qpair failed and we were unable to recover it. 00:33:30.610 [2024-10-08 18:43:58.928719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.610 [2024-10-08 18:43:58.928745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.610 qpair failed and we were unable to recover it. 00:33:30.610 [2024-10-08 18:43:58.928896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.610 [2024-10-08 18:43:58.928922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.610 qpair failed and we were unable to recover it. 00:33:30.610 [2024-10-08 18:43:58.929090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.610 [2024-10-08 18:43:58.929115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.610 qpair failed and we were unable to recover it. 00:33:30.610 [2024-10-08 18:43:58.929237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.610 [2024-10-08 18:43:58.929263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.610 qpair failed and we were unable to recover it. 00:33:30.610 [2024-10-08 18:43:58.929494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.610 [2024-10-08 18:43:58.929528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.610 qpair failed and we were unable to recover it. 
00:33:30.610 [2024-10-08 18:43:58.929691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.610 [2024-10-08 18:43:58.929726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.610 qpair failed and we were unable to recover it. 00:33:30.610 [2024-10-08 18:43:58.929872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.610 [2024-10-08 18:43:58.929898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.610 qpair failed and we were unable to recover it. 00:33:30.610 [2024-10-08 18:43:58.929987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.610 [2024-10-08 18:43:58.930012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.610 qpair failed and we were unable to recover it. 00:33:30.610 [2024-10-08 18:43:58.930155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.610 [2024-10-08 18:43:58.930189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.610 qpair failed and we were unable to recover it. 00:33:30.610 [2024-10-08 18:43:58.930290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.610 [2024-10-08 18:43:58.930325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.610 qpair failed and we were unable to recover it. 00:33:30.610 [2024-10-08 18:43:58.930449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.610 [2024-10-08 18:43:58.930474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.610 qpair failed and we were unable to recover it. 00:33:30.610 [2024-10-08 18:43:58.930592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.610 [2024-10-08 18:43:58.930617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.610 qpair failed and we were unable to recover it. 00:33:30.610 [2024-10-08 18:43:58.930778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.610 [2024-10-08 18:43:58.930813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.610 qpair failed and we were unable to recover it. 00:33:30.610 [2024-10-08 18:43:58.930921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.610 [2024-10-08 18:43:58.930955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.610 qpair failed and we were unable to recover it. 00:33:30.610 [2024-10-08 18:43:58.931124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.610 [2024-10-08 18:43:58.931150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.610 qpair failed and we were unable to recover it. 
00:33:30.610 [2024-10-08 18:43:58.931296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.610 [2024-10-08 18:43:58.931342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.610 qpair failed and we were unable to recover it. 00:33:30.610 [2024-10-08 18:43:58.931515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.610 [2024-10-08 18:43:58.931550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.610 qpair failed and we were unable to recover it. 00:33:30.610 [2024-10-08 18:43:58.931717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.610 [2024-10-08 18:43:58.931753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.610 qpair failed and we were unable to recover it. 00:33:30.610 [2024-10-08 18:43:58.931893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.610 [2024-10-08 18:43:58.931919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.610 qpair failed and we were unable to recover it. 00:33:30.610 [2024-10-08 18:43:58.932070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.610 [2024-10-08 18:43:58.932112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.610 qpair failed and we were unable to recover it. 00:33:30.610 [2024-10-08 18:43:58.932259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.610 [2024-10-08 18:43:58.932294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.610 qpair failed and we were unable to recover it. 00:33:30.610 [2024-10-08 18:43:58.932436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.610 [2024-10-08 18:43:58.932471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.610 qpair failed and we were unable to recover it. 00:33:30.610 [2024-10-08 18:43:58.932608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.610 [2024-10-08 18:43:58.932633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.610 qpair failed and we were unable to recover it. 00:33:30.610 [2024-10-08 18:43:58.932764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.610 [2024-10-08 18:43:58.932789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.610 qpair failed and we were unable to recover it. 00:33:30.610 [2024-10-08 18:43:58.932942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.611 [2024-10-08 18:43:58.932978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.611 qpair failed and we were unable to recover it. 
00:33:30.611 [2024-10-08 18:43:58.933123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.611 [2024-10-08 18:43:58.933157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.611 qpair failed and we were unable to recover it. 00:33:30.611 [2024-10-08 18:43:58.933267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.611 [2024-10-08 18:43:58.933292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.611 qpair failed and we were unable to recover it. 00:33:30.611 [2024-10-08 18:43:58.933442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.611 [2024-10-08 18:43:58.933467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.611 qpair failed and we were unable to recover it. 00:33:30.611 [2024-10-08 18:43:58.933619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.611 [2024-10-08 18:43:58.933660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.611 qpair failed and we were unable to recover it. 00:33:30.611 [2024-10-08 18:43:58.933804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.611 [2024-10-08 18:43:58.933839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.611 qpair failed and we were unable to recover it. 00:33:30.611 [2024-10-08 18:43:58.934019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.611 [2024-10-08 18:43:58.934045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.611 qpair failed and we were unable to recover it. 00:33:30.611 [2024-10-08 18:43:58.934166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.611 [2024-10-08 18:43:58.934211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.611 qpair failed and we were unable to recover it. 00:33:30.611 [2024-10-08 18:43:58.934381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.611 [2024-10-08 18:43:58.934416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.611 qpair failed and we were unable to recover it. 00:33:30.611 [2024-10-08 18:43:58.934558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.611 [2024-10-08 18:43:58.934593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.611 qpair failed and we were unable to recover it. 00:33:30.611 [2024-10-08 18:43:58.934770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.611 [2024-10-08 18:43:58.934796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.611 qpair failed and we were unable to recover it. 
00:33:30.611 [2024-10-08 18:43:58.934945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.611 [2024-10-08 18:43:58.934991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.611 qpair failed and we were unable to recover it. 00:33:30.611 [2024-10-08 18:43:58.935159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.611 [2024-10-08 18:43:58.935194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.611 qpair failed and we were unable to recover it. 00:33:30.611 [2024-10-08 18:43:58.935340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.611 [2024-10-08 18:43:58.935375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.611 qpair failed and we were unable to recover it. 00:33:30.611 [2024-10-08 18:43:58.935508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.611 [2024-10-08 18:43:58.935560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.611 qpair failed and we were unable to recover it. 00:33:30.611 [2024-10-08 18:43:58.935668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.611 [2024-10-08 18:43:58.935716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.611 qpair failed and we were unable to recover it. 00:33:30.611 [2024-10-08 18:43:58.935840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.611 [2024-10-08 18:43:58.935865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.611 qpair failed and we were unable to recover it. 00:33:30.611 [2024-10-08 18:43:58.936016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.611 [2024-10-08 18:43:58.936051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.611 qpair failed and we were unable to recover it. 00:33:30.611 [2024-10-08 18:43:58.936224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.611 [2024-10-08 18:43:58.936249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.611 qpair failed and we were unable to recover it. 00:33:30.611 [2024-10-08 18:43:58.936365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.611 [2024-10-08 18:43:58.936391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.611 qpair failed and we were unable to recover it. 00:33:30.611 [2024-10-08 18:43:58.936568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.611 [2024-10-08 18:43:58.936603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.611 qpair failed and we were unable to recover it. 
00:33:30.611 [2024-10-08 18:43:58.936748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.611 [2024-10-08 18:43:58.936783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.611 qpair failed and we were unable to recover it. 00:33:30.611 [2024-10-08 18:43:58.936968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.611 [2024-10-08 18:43:58.936994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.611 qpair failed and we were unable to recover it. 00:33:30.611 [2024-10-08 18:43:58.937113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.611 [2024-10-08 18:43:58.937158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.611 qpair failed and we were unable to recover it. 00:33:30.611 [2024-10-08 18:43:58.937292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.611 [2024-10-08 18:43:58.937327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.611 qpair failed and we were unable to recover it. 00:33:30.611 [2024-10-08 18:43:58.937437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.611 [2024-10-08 18:43:58.937472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.611 qpair failed and we were unable to recover it. 00:33:30.611 [2024-10-08 18:43:58.937640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.611 [2024-10-08 18:43:58.937699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.611 qpair failed and we were unable to recover it. 00:33:30.611 [2024-10-08 18:43:58.937853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.611 [2024-10-08 18:43:58.937878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.611 qpair failed and we were unable to recover it. 00:33:30.611 [2024-10-08 18:43:58.937983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.611 [2024-10-08 18:43:58.938018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.611 qpair failed and we were unable to recover it. 00:33:30.611 [2024-10-08 18:43:58.938185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.611 [2024-10-08 18:43:58.938219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.611 qpair failed and we were unable to recover it. 00:33:30.611 [2024-10-08 18:43:58.938363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.611 [2024-10-08 18:43:58.938388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.611 qpair failed and we were unable to recover it. 
00:33:30.611 [2024-10-08 18:43:58.938512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.611 [2024-10-08 18:43:58.938537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.611 qpair failed and we were unable to recover it. 00:33:30.611 [2024-10-08 18:43:58.938714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.611 [2024-10-08 18:43:58.938750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.611 qpair failed and we were unable to recover it. 00:33:30.611 [2024-10-08 18:43:58.938883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.611 [2024-10-08 18:43:58.938917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.611 qpair failed and we were unable to recover it. 00:33:30.611 [2024-10-08 18:43:58.939083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.611 [2024-10-08 18:43:58.939108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.611 qpair failed and we were unable to recover it. 00:33:30.611 [2024-10-08 18:43:58.939258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.611 [2024-10-08 18:43:58.939303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.611 qpair failed and we were unable to recover it. 00:33:30.611 [2024-10-08 18:43:58.939474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.611 [2024-10-08 18:43:58.939509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.611 qpair failed and we were unable to recover it. 00:33:30.611 [2024-10-08 18:43:58.939708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.611 [2024-10-08 18:43:58.939744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.611 qpair failed and we were unable to recover it. 00:33:30.611 [2024-10-08 18:43:58.939953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.611 [2024-10-08 18:43:58.939979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.611 qpair failed and we were unable to recover it. 00:33:30.611 [2024-10-08 18:43:58.940072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.611 [2024-10-08 18:43:58.940117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.611 qpair failed and we were unable to recover it. 00:33:30.611 [2024-10-08 18:43:58.940389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.611 [2024-10-08 18:43:58.940424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.611 qpair failed and we were unable to recover it. 
00:33:30.612 [2024-10-08 18:43:58.940566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.612 [2024-10-08 18:43:58.940606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.612 qpair failed and we were unable to recover it. 00:33:30.612 [2024-10-08 18:43:58.940751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.612 [2024-10-08 18:43:58.940776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.612 qpair failed and we were unable to recover it. 00:33:30.612 [2024-10-08 18:43:58.940898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.612 [2024-10-08 18:43:58.940923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.612 qpair failed and we were unable to recover it. 00:33:30.612 [2024-10-08 18:43:58.941070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.612 [2024-10-08 18:43:58.941104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.612 qpair failed and we were unable to recover it. 00:33:30.612 [2024-10-08 18:43:58.941240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.612 [2024-10-08 18:43:58.941275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.612 qpair failed and we were unable to recover it. 00:33:30.612 [2024-10-08 18:43:58.941451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.612 [2024-10-08 18:43:58.941486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.612 qpair failed and we were unable to recover it. 00:33:30.612 [2024-10-08 18:43:58.941639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.612 [2024-10-08 18:43:58.941681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.612 qpair failed and we were unable to recover it. 00:33:30.612 [2024-10-08 18:43:58.941796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.612 [2024-10-08 18:43:58.941821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.612 qpair failed and we were unable to recover it. 00:33:30.612 [2024-10-08 18:43:58.941955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.612 [2024-10-08 18:43:58.941990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.612 qpair failed and we were unable to recover it. 00:33:30.612 [2024-10-08 18:43:58.942137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.612 [2024-10-08 18:43:58.942162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.612 qpair failed and we were unable to recover it. 
00:33:30.612 [2024-10-08 18:43:58.942296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.612 [2024-10-08 18:43:58.942321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.612 qpair failed and we were unable to recover it. 00:33:30.612 [2024-10-08 18:43:58.942500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.612 [2024-10-08 18:43:58.942535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.612 qpair failed and we were unable to recover it. 00:33:30.612 [2024-10-08 18:43:58.942670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.612 [2024-10-08 18:43:58.942706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.612 qpair failed and we were unable to recover it. 00:33:30.612 [2024-10-08 18:43:58.942879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.612 [2024-10-08 18:43:58.942904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912630 with addr=10.0.0.2, port=4420 00:33:30.612 qpair failed and we were unable to recover it. 00:33:30.612 A controller has encountered a failure and is being reset. 00:33:30.612 [2024-10-08 18:43:58.943239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.612 [2024-10-08 18:43:58.943308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9205f0 with addr=10.0.0.2, port=4420 00:33:30.612 [2024-10-08 18:43:58.943336] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9205f0 is same with the state(6) to be set 00:33:30.612 [2024-10-08 18:43:58.943369] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9205f0 (9): Bad file descriptor 00:33:30.612 [2024-10-08 18:43:58.943401] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:30.612 [2024-10-08 18:43:58.943421] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:30.612 [2024-10-08 18:43:58.943443] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:30.612 Unable to reset the controller. 
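The errno = 111 repeated throughout the block above is ECONNREFUSED: while the target_disconnect test has the listener torn down, every reconnect attempt from the host to 10.0.0.2:4420 is refused, and the final records show the controller reset itself failing once the qpair's file descriptor has gone bad. As a hedged illustration only (this probe is not part of the test scripts; the address and port are taken from the log), a plain bash TCP probe is enough to tell when the listener is accepting connections again:

    #!/usr/bin/env bash
    # Poll the NVMe/TCP listener seen in the log above until a TCP connect succeeds.
    # A refused connection here corresponds to the errno = 111 (ECONNREFUSED) records.
    addr=10.0.0.2 port=4420
    until timeout 1 bash -c "exec 3<>/dev/tcp/${addr}/${port}" 2>/dev/null; do
        sleep 0.5
    done
    echo "listener at ${addr}:${port} is accepting connections again"

This only checks TCP reachability; it says nothing about whether the NVMe-oF subsystem behind that port is ready to accept a fabrics connect.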
00:33:30.612 18:43:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:30.612 18:43:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:33:30.612 18:43:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:33:30.612 18:43:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:30.612 18:43:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:30.612 18:43:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:30.612 18:43:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:30.612 18:43:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:30.612 18:43:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:30.612 Malloc0 00:33:30.612 18:43:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:30.612 18:43:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:33:30.612 18:43:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:30.612 18:43:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:30.612 [2024-10-08 18:43:59.064870] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:30.612 18:43:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:30.612 18:43:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:30.612 18:43:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:30.612 18:43:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:30.612 18:43:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:30.612 18:43:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:30.612 18:43:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:30.612 18:43:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:30.612 18:43:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:30.612 18:43:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:30.612 18:43:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:30.612 18:43:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:30.612 [2024-10-08 18:43:59.093136] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:30.612 18:43:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:30.612 18:43:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:30.612 18:43:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:30.612 18:43:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:30.612 18:43:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:30.612 18:43:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1341735 00:33:31.545 Controller properly reset. 00:33:36.806 Initializing NVMe Controllers 00:33:36.806 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:36.806 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:36.806 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:33:36.806 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:33:36.806 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:33:36.806 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:33:36.806 Initialization complete. Launching workers. 
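The rpc_cmd calls traced above are the whole target bring-up for this test case: a 64 MB malloc bdev with 512-byte blocks, a TCP transport, a subsystem with one namespace, and data plus discovery listeners on 10.0.0.2:4420. As a rough sketch, the same configuration could be reproduced outside the harness with SPDK's scripts/rpc.py; the subcommands and arguments below are copied from the trace, while the rpc.py path and the RPC socket (-s /var/tmp/spdk.sock) are assumptions, since this excerpt does not show which socket the rpc_cmd wrapper targets.

    #!/usr/bin/env bash
    # Rough bring-up of the same NVMe-oF TCP target outside the test harness.
    # Paths below are assumed; subcommands and arguments match the traced rpc_cmd calls.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/spdk.sock   # assumption: default SPDK RPC socket

    $rpc -s "$sock" bdev_malloc_create 64 512 -b Malloc0
    $rpc -s "$sock" nvmf_create_transport -t tcp -o
    $rpc -s "$sock" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc -s "$sock" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc -s "$sock" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc -s "$sock" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

Removing the data listener again, as the disconnect test does, immediately reproduces the ECONNREFUSED storm shown earlier in this log.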
00:33:36.806 Starting thread on core 1 00:33:36.806 Starting thread on core 2 00:33:36.806 Starting thread on core 3 00:33:36.806 Starting thread on core 0 00:33:36.806 18:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:33:36.806 00:33:36.806 real 0m11.085s 00:33:36.806 user 0m33.779s 00:33:36.806 sys 0m7.998s 00:33:36.806 18:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:36.806 18:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:36.806 ************************************ 00:33:36.806 END TEST nvmf_target_disconnect_tc2 00:33:36.806 ************************************ 00:33:36.806 18:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:33:36.806 18:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:33:36.806 18:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:33:36.806 18:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup 00:33:36.806 18:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:33:36.806 18:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:36.806 18:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:33:36.806 18:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:36.806 18:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:36.806 rmmod nvme_tcp 00:33:36.806 rmmod nvme_fabrics 00:33:36.806 rmmod nvme_keyring 00:33:36.806 18:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:36.806 18:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:33:36.806 18:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:33:36.806 18:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@515 -- # '[' -n 1342255 ']' 00:33:36.806 18:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # killprocess 1342255 00:33:36.806 18:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 1342255 ']' 00:33:36.806 18:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 1342255 00:33:36.806 18:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:33:36.806 18:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:36.806 18:44:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1342255 00:33:36.806 18:44:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:33:36.806 18:44:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:33:36.806 18:44:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1342255' 00:33:36.806 killing process with pid 1342255 00:33:36.806 18:44:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@969 -- # kill 1342255 00:33:36.806 18:44:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 1342255 00:33:37.066 18:44:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:37.066 18:44:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:37.066 18:44:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:37.066 18:44:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:33:37.066 18:44:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # iptables-save 00:33:37.066 18:44:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:37.066 18:44:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # iptables-restore 00:33:37.066 18:44:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:37.066 18:44:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:37.066 18:44:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:37.066 18:44:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:37.066 18:44:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:38.972 18:44:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:39.232 00:33:39.232 real 0m17.620s 00:33:39.233 user 1m0.190s 00:33:39.233 sys 0m11.480s 00:33:39.233 18:44:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:39.233 18:44:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:39.233 ************************************ 00:33:39.233 END TEST nvmf_target_disconnect 00:33:39.233 ************************************ 00:33:39.233 18:44:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:33:39.233 00:33:39.233 real 6m28.493s 00:33:39.233 user 13m47.483s 00:33:39.233 sys 1m40.780s 00:33:39.233 18:44:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:39.233 18:44:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.233 ************************************ 00:33:39.233 END TEST nvmf_host 00:33:39.233 ************************************ 00:33:39.233 18:44:07 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:33:39.233 18:44:07 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:33:39.233 18:44:07 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:33:39.233 18:44:07 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:33:39.233 18:44:07 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:39.233 18:44:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:39.233 ************************************ 00:33:39.233 START TEST nvmf_target_core_interrupt_mode 00:33:39.233 ************************************ 00:33:39.233 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:33:39.233 * Looking for test storage... 00:33:39.233 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:33:39.233 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:33:39.233 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # lcov --version 00:33:39.233 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:33:39.492 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:33:39.492 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:33:39.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:39.493 --rc genhtml_branch_coverage=1 00:33:39.493 --rc genhtml_function_coverage=1 00:33:39.493 --rc genhtml_legend=1 00:33:39.493 --rc geninfo_all_blocks=1 00:33:39.493 --rc geninfo_unexecuted_blocks=1 00:33:39.493 00:33:39.493 ' 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:33:39.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:39.493 --rc genhtml_branch_coverage=1 00:33:39.493 --rc genhtml_function_coverage=1 00:33:39.493 --rc genhtml_legend=1 00:33:39.493 --rc geninfo_all_blocks=1 00:33:39.493 --rc geninfo_unexecuted_blocks=1 00:33:39.493 00:33:39.493 ' 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:33:39.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:39.493 --rc genhtml_branch_coverage=1 00:33:39.493 --rc genhtml_function_coverage=1 00:33:39.493 --rc genhtml_legend=1 00:33:39.493 --rc geninfo_all_blocks=1 00:33:39.493 --rc geninfo_unexecuted_blocks=1 00:33:39.493 00:33:39.493 ' 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:33:39.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:39.493 --rc genhtml_branch_coverage=1 00:33:39.493 --rc genhtml_function_coverage=1 00:33:39.493 --rc genhtml_legend=1 00:33:39.493 --rc geninfo_all_blocks=1 00:33:39.493 --rc geninfo_unexecuted_blocks=1 00:33:39.493 00:33:39.493 ' 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:39.493 18:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:39.493 ************************************ 00:33:39.493 START TEST nvmf_abort 00:33:39.493 ************************************ 00:33:39.493 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:33:39.754 * Looking for test storage... 00:33:39.754 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:39.754 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:33:39.754 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # lcov --version 00:33:39.754 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:33:39.754 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:33:39.754 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:39.754 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:39.754 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:39.754 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:33:39.754 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:33:39.754 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:33:39.754 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:33:39.754 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:33:39.754 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:33:39.754 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:33:39.754 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:39.754 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:33:39.754 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:33:39.754 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:39.754 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:39.754 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:33:39.754 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:33:39.754 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:39.754 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:33:39.754 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:33:39.754 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:33:39.754 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:33:39.754 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:39.754 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:33:39.754 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:33:39.754 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:39.755 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:39.755 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:33:39.755 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:39.755 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:33:39.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:39.755 --rc genhtml_branch_coverage=1 00:33:39.755 --rc genhtml_function_coverage=1 00:33:39.755 --rc genhtml_legend=1 00:33:39.755 --rc geninfo_all_blocks=1 00:33:39.755 --rc geninfo_unexecuted_blocks=1 00:33:39.755 00:33:39.755 ' 00:33:39.755 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:33:39.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:39.755 --rc genhtml_branch_coverage=1 00:33:39.755 --rc genhtml_function_coverage=1 00:33:39.755 --rc genhtml_legend=1 00:33:39.755 --rc geninfo_all_blocks=1 00:33:39.755 --rc geninfo_unexecuted_blocks=1 00:33:39.755 00:33:39.755 ' 00:33:39.755 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:33:39.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:39.755 --rc genhtml_branch_coverage=1 00:33:39.755 --rc genhtml_function_coverage=1 00:33:39.755 --rc genhtml_legend=1 00:33:39.755 --rc geninfo_all_blocks=1 00:33:39.755 --rc geninfo_unexecuted_blocks=1 00:33:39.755 00:33:39.755 ' 00:33:39.755 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:33:39.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:39.755 --rc genhtml_branch_coverage=1 00:33:39.755 --rc genhtml_function_coverage=1 00:33:39.755 --rc genhtml_legend=1 00:33:39.755 --rc geninfo_all_blocks=1 00:33:39.755 --rc geninfo_unexecuted_blocks=1 00:33:39.755 00:33:39.755 ' 00:33:39.755 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:39.755 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:33:39.755 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:39.755 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:39.755 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:39.755 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:39.755 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:39.755 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:39.755 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:39.755 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:39.755 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:39.755 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:39.755 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:33:39.755 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:33:39.755 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:39.755 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:39.755 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:39.755 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:39.755 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:39.755 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:33:39.755 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:39.755 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:39.755 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:39.755 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.755 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.755 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.755 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:33:39.755 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.755 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:33:39.755 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:39.755 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:39.755 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:39.755 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:39.755 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:39.755 18:44:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:39.755 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:39.755 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:39.755 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:39.755 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:39.755 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:39.755 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:33:39.755 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:33:39.755 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:33:39.755 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:39.755 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:39.755 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:39.755 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:33:39.755 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:39.755 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:39.755 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:39.755 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:39.755 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:33:39.755 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:33:39.755 18:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:43.080 18:44:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:33:43.080 Found 0000:84:00.0 (0x8086 - 0x159b) 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
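
The entries above and below are gather_supported_nvmf_pci_devs from nvmf/common.sh: it builds ID lists for Intel E810 (0x1592/0x159b), X722 (0x37d2) and Mellanox mlx5 parts, keeps only the E810 list here (the [[ e810 == e810 ]] branch), and then resolves each matched PCI address to its kernel interface through sysfs. A condensed sketch of that lookup, assuming the 0000:84:00.x addresses and cvl_0_* names seen in this run:

  # what nvmf/common.sh@366-@426 does per candidate port, stripped of the array bookkeeping
  for pci in 0000:84:00.0 0000:84:00.1; do
      vendor=$(cat "/sys/bus/pci/devices/$pci/vendor")   # 0x8086 on this host
      device=$(cat "/sys/bus/pci/devices/$pci/device")   # 0x159b -> E810, bound to the 'ice' driver
      # the net device name is whatever sits under the PCI node's net/ directory
      ls "/sys/bus/pci/devices/$pci/net/"                # prints cvl_0_0 / cvl_0_1 here
  done
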
00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:33:43.080 Found 0000:84:00.1 (0x8086 - 0x159b) 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:33:43.080 Found net devices under 0000:84:00.0: cvl_0_0 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@415 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:33:43.080 Found net devices under 0000:84:00.1: cvl_0_1 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:43.080 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:43.081 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:43.081 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:43.081 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:43.081 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:43.081 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:43.081 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:43.081 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:43.081 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:43.081 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:43.081 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:43.081 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:43.081 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:43.081 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:43.081 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:43.081 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:43.081 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:43.081 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:43.081 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:43.081 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:43.081 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:43.081 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:33:43.081 00:33:43.081 --- 10.0.0.2 ping statistics --- 00:33:43.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:43.081 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:33:43.081 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:43.081 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:43.081 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:33:43.081 00:33:43.081 --- 10.0.0.1 ping statistics --- 00:33:43.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:43.081 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:33:43.081 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:43.081 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@448 -- # return 0 00:33:43.081 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:33:43.081 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:43.081 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:33:43.081 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:33:43.081 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:43.081 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:33:43.081 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:33:43.081 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:33:43.081 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:33:43.081 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:43.081 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:43.081 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # 
nvmfpid=1345168 00:33:43.081 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:33:43.081 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 1345168 00:33:43.081 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 1345168 ']' 00:33:43.081 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:43.081 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:43.081 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:43.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:43.081 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:43.081 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:43.081 [2024-10-08 18:44:11.284057] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:43.081 [2024-10-08 18:44:11.285427] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:33:43.081 [2024-10-08 18:44:11.285492] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:43.081 [2024-10-08 18:44:11.395295] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:43.081 [2024-10-08 18:44:11.614802] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:43.081 [2024-10-08 18:44:11.614918] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:43.081 [2024-10-08 18:44:11.614955] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:43.081 [2024-10-08 18:44:11.614985] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:43.081 [2024-10-08 18:44:11.615014] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:43.341 [2024-10-08 18:44:11.617100] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:33:43.341 [2024-10-08 18:44:11.617178] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:33:43.341 [2024-10-08 18:44:11.617184] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:33:43.341 [2024-10-08 18:44:11.800471] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:43.341 [2024-10-08 18:44:11.800727] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:43.341 [2024-10-08 18:44:11.800740] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
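
At this point nvmftestinit has split the two E810 ports across a network-namespace boundary (cvl_0_0 moves into cvl_0_0_ns_spdk as the target side on 10.0.0.2, cvl_0_1 stays on the host as the initiator side on 10.0.0.1), verified both directions with ping, and nvmfappstart has launched nvmf_tgt inside the namespace with --interrupt-mode and core mask 0xE, which is why reactors come up on cores 1-3 and every spdk_thread is switched to interrupt mode. Collapsed into plain commands, the setup traced above looks roughly like this (a sketch, assuming the interface names from this host and the in-tree build path):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays on the host
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
  ping -c 1 10.0.0.2                                             # host -> namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # namespace -> host
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
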
00:33:43.341 [2024-10-08 18:44:11.801061] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:33:43.341 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:43.341 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:33:43.341 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:33:43.341 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:43.341 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:43.341 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:43.341 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:33:43.341 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.341 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:43.341 [2024-10-08 18:44:11.874400] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:43.599 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.599 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:33:43.599 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.599 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:43.599 Malloc0 00:33:43.599 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.599 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:33:43.599 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.599 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:43.599 Delay0 00:33:43.599 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.599 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:33:43.599 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.599 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:43.599 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.599 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:33:43.599 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 
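
rpc_cmd in these traces is, roughly, the autotest shorthand for scripts/rpc.py against the target's /var/tmp/spdk.sock socket, so the provisioning abort.sh has done so far corresponds approximately to the following stand-alone calls (a sketch; the flag values are copied from the trace, the comments are interpretation):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
  ./scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0          # MALLOC_BDEV_SIZE=64 MB, 4096-byte blocks
  ./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000                 # read/write latency knobs in microseconds
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0   # -a: allow any host, -s: serial
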
00:33:43.599 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:43.599 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.599 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:43.599 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.599 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:43.599 [2024-10-08 18:44:11.954401] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:43.599 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.599 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:43.599 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.599 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:43.599 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.599 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:33:43.599 [2024-10-08 18:44:12.059796] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:33:46.159 Initializing NVMe Controllers 00:33:46.159 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:33:46.159 controller IO queue size 128 less than required 00:33:46.159 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:33:46.159 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:33:46.159 Initialization complete. Launching workers. 
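
With the Delay0 namespace attached to cnode0, a data listener on 10.0.0.2:4420 and a discovery listener in place, abort.sh drives the load from the host side with the abort example application; the NS/CTRLR counters that follow below report the per-namespace I/O counts and the per-controller abort submission and success counts for the run. The invocation is the one traced above and can be repeated against a running target as (a sketch, paths relative to the spdk checkout):

  ./build/examples/abort -q 128 -c 0x1 -t 1 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
  # -q 128: queue depth, -t 1: run for one second, -c 0x1: single worker core
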
00:33:46.159 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 26916 00:33:46.159 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 26973, failed to submit 66 00:33:46.159 success 26916, unsuccessful 57, failed 0 00:33:46.159 18:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:46.159 18:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.159 18:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:46.159 18:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.159 18:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:33:46.159 18:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:33:46.159 18:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:33:46.159 18:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:33:46.159 18:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:46.159 18:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:33:46.159 18:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:46.159 18:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:46.159 rmmod nvme_tcp 00:33:46.159 rmmod nvme_fabrics 00:33:46.159 rmmod nvme_keyring 00:33:46.159 18:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:46.159 18:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:33:46.159 18:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:33:46.159 18:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 1345168 ']' 00:33:46.159 18:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 1345168 00:33:46.159 18:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 1345168 ']' 00:33:46.159 18:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 1345168 00:33:46.159 18:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:33:46.159 18:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:46.159 18:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1345168 00:33:46.159 18:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:46.159 18:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:46.159 18:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1345168' 00:33:46.159 killing process with pid 1345168 
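
The teardown that starts here and finishes in the entries below is the mirror image of the setup: the test deletes the subsystem, unloads the host-side nvme-tcp/nvme-fabrics modules, kills the nvmf_tgt it started, strips the iptables rule tagged SPDK_NVMF, and removes the namespace and addresses. Done by hand, that is roughly (a sketch, same names as above):

  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  modprobe -r nvme-tcp nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"                       # nvmfpid=1345168 in this run
  iptables-save | grep -v SPDK_NVMF | iptables-restore     # drop only the rule added for the test
  ip netns delete cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_1
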
00:33:46.159 18:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@969 -- # kill 1345168 00:33:46.159 18:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@974 -- # wait 1345168 00:33:46.418 18:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:46.418 18:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:46.418 18:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:46.418 18:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:33:46.418 18:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # iptables-save 00:33:46.418 18:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:46.418 18:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # iptables-restore 00:33:46.418 18:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:46.418 18:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:46.418 18:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:46.418 18:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:46.418 18:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:48.323 18:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:48.323 00:33:48.323 real 0m8.802s 00:33:48.323 user 0m10.083s 00:33:48.323 sys 0m3.918s 00:33:48.324 18:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:48.324 18:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:48.324 ************************************ 00:33:48.324 END TEST nvmf_abort 00:33:48.324 ************************************ 00:33:48.324 18:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:33:48.324 18:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:33:48.324 18:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:48.324 18:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:48.582 ************************************ 00:33:48.582 START TEST nvmf_ns_hotplug_stress 00:33:48.582 ************************************ 00:33:48.582 18:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:33:48.582 * Looking for test storage... 
00:33:48.582 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:48.582 18:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:33:48.582 18:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:33:48.582 18:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:33:48.842 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:33:48.842 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:48.842 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:48.842 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:48.842 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:33:48.842 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:33:48.842 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:33:48.842 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:33:48.842 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:33:48.842 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:33:48.842 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:33:48.842 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:48.842 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:33:48.842 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:33:48.842 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:48.842 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:48.842 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:33:48.842 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:33:48.842 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:48.842 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:33:48.842 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:33:48.842 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:33:48.842 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:33:48.842 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:48.842 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:33:48.842 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:33:48.842 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:48.842 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:48.842 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:33:48.842 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:48.842 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:33:48.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:48.842 --rc genhtml_branch_coverage=1 00:33:48.842 --rc genhtml_function_coverage=1 00:33:48.842 --rc genhtml_legend=1 00:33:48.842 --rc geninfo_all_blocks=1 00:33:48.842 --rc geninfo_unexecuted_blocks=1 00:33:48.842 00:33:48.842 ' 00:33:48.842 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:33:48.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:48.842 --rc genhtml_branch_coverage=1 00:33:48.842 --rc genhtml_function_coverage=1 00:33:48.842 --rc genhtml_legend=1 00:33:48.842 --rc geninfo_all_blocks=1 00:33:48.842 --rc geninfo_unexecuted_blocks=1 00:33:48.842 00:33:48.842 ' 00:33:48.842 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:33:48.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:48.842 --rc genhtml_branch_coverage=1 00:33:48.842 --rc genhtml_function_coverage=1 00:33:48.842 --rc genhtml_legend=1 00:33:48.842 --rc geninfo_all_blocks=1 00:33:48.842 --rc geninfo_unexecuted_blocks=1 00:33:48.842 00:33:48.842 ' 00:33:48.842 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:33:48.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:48.842 --rc genhtml_branch_coverage=1 00:33:48.842 --rc genhtml_function_coverage=1 
00:33:48.842 --rc genhtml_legend=1 00:33:48.842 --rc geninfo_all_blocks=1 00:33:48.842 --rc geninfo_unexecuted_blocks=1 00:33:48.842 00:33:48.842 ' 00:33:48.842 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:48.842 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:33:48.842 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:48.842 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:48.842 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:48.842 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:48.842 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:48.842 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:48.842 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:48.842 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:48.842 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:48.842 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:48.842 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:33:48.842 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:33:48.842 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:48.842 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:48.842 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:48.842 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:48.842 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:48.842 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:33:48.842 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:48.842 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:48.842 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
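[editor's note] The "lt 1.15 2" exchange a little further up is the test harness deciding whether the installed lcov (1.15 here, taken from lcov --version | awk '{print $NF}') is older than 2, because lcov 1.x and 2.x take different coverage flags; only the 1.x case gets the --rc lcov_branch_coverage / --rc lcov_function_coverage options. A minimal bash sketch of that dot-separated comparison, not the exact scripts/common.sh cmp_versions body (which also splits on '-' and ':'):

    lt() {
        # return 0 (true) when version $1 sorts strictly before version $2
        local IFS=.
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }

    lt "$(lcov --version | awk '{print $NF}')" 2 \
        && lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'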
00:33:48.843 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:48.843 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:48.843 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:48.843 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:33:48.843 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:48.843 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:33:48.843 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:48.843 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:48.843 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:48.843 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:48.843 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:48.843 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:48.843 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:48.843 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:48.843 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:48.843 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:48.843 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:48.843 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:33:48.843 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:33:48.843 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:48.843 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:48.843 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:48.843 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:33:48.843 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:48.843 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:48.843 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:48.843 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:48.843 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:33:48.843 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:33:48.843 18:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:33:51.382 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:51.382 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:33:51.382 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:51.382 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:51.383 18:44:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:51.383 18:44:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:33:51.383 Found 0000:84:00.0 (0x8086 - 0x159b) 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:33:51.383 Found 0000:84:00.1 (0x8086 - 0x159b) 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:51.383 
18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:33:51.383 Found net devices under 0000:84:00.0: cvl_0_0 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:33:51.383 Found net devices under 0000:84:00.1: cvl_0_1 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:51.383 18:44:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:51.383 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:51.643 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:51.643 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:51.643 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:51.643 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:51.643 18:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:51.643 18:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:51.643 18:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:51.643 18:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:51.643 18:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:51.643 18:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:51.643 18:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:51.643 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:51.643 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.366 ms 00:33:51.643 00:33:51.643 --- 10.0.0.2 ping statistics --- 00:33:51.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:51.643 rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms 00:33:51.643 18:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:51.643 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:51.643 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:33:51.643 00:33:51.643 --- 10.0.0.1 ping statistics --- 00:33:51.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:51.643 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:33:51.643 18:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:51.643 18:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0 00:33:51.643 18:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:33:51.643 18:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:51.643 18:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:33:51.643 18:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:33:51.643 18:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:51.643 18:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:33:51.643 18:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:33:51.643 18:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:33:51.643 18:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:33:51.643 18:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:51.643 18:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:33:51.643 18:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=1347583 00:33:51.643 18:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:33:51.643 18:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 1347583 00:33:51.643 18:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 1347583 ']' 00:33:51.643 18:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:51.643 18:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:51.643 18:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:51.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
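[editor's note] The nvmf_tcp_init block above (nvmf/common.sh@250 onward) wires the two detected ice ports into a point-to-point test topology: cvl_0_0 is moved into a private network namespace and becomes the target side at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, and the two ping runs confirm the path in both directions. Condensed from the trace, with the same interface and namespace names:

    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let the NVMe/TCP port through
    ping -c 1 10.0.0.2                                  # initiator -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> initiator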
00:33:51.643 18:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:51.643 18:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:33:51.901 [2024-10-08 18:44:20.183706] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:51.901 [2024-10-08 18:44:20.185009] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:33:51.901 [2024-10-08 18:44:20.185079] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:51.901 [2024-10-08 18:44:20.296983] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:52.160 [2024-10-08 18:44:20.506587] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:52.160 [2024-10-08 18:44:20.506727] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:52.160 [2024-10-08 18:44:20.506766] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:52.160 [2024-10-08 18:44:20.506797] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:52.160 [2024-10-08 18:44:20.506823] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:52.160 [2024-10-08 18:44:20.508904] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:33:52.160 [2024-10-08 18:44:20.509007] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:33:52.160 [2024-10-08 18:44:20.509014] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:33:52.160 [2024-10-08 18:44:20.692485] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:52.160 [2024-10-08 18:44:20.692745] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:52.160 [2024-10-08 18:44:20.692769] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:52.160 [2024-10-08 18:44:20.693092] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
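[editor's note] The startup notices just above come from launching the target application inside that namespace: nvmfappstart runs nvmf_tgt with -m 0xE (cores 1-3), the 0xFFFF tracepoint group mask, and --interrupt-mode, which is why every reactor and nvmf_tgt poll-group thread is reported switching to interrupt mode before any subsystem exists. A condensed sketch, using a relative build path instead of the full Jenkins workspace path from the trace:

    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # autotest helper: blocks until the app answers on /var/tmp/spdk.sock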
00:33:53.099 18:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:53.099 18:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:33:53.099 18:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:33:53.099 18:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:53.099 18:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:33:53.099 18:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:53.099 18:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:33:53.099 18:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:53.668 [2024-10-08 18:44:21.990241] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:53.668 18:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:33:54.238 18:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:55.176 [2024-10-08 18:44:23.398892] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:55.176 18:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:55.743 18:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:33:56.311 Malloc0 00:33:56.311 18:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:33:57.251 Delay0 00:33:57.251 18:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:57.819 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:33:58.389 NULL1 00:33:58.389 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
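[editor's note] From here ns_hotplug_stress.sh (script lines @27 to @36 above) configures what it is about to stress over the RPC socket: a TCP transport, subsystem cnode1 capped at 10 namespaces, data and discovery listeners on 10.0.0.2:4420, and two backing bdevs, a Malloc0-backed Delay0 device plus a 512-byte-block null bdev NULL1 whose size argument (1000, later 1001, 1002, ...) the loop keeps bumping. Writing rpc.py for the full scripts/rpc.py path used in the trace, the sequence is:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_malloc_create 32 512 -b Malloc0
    rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    rpc.py bdev_null_create NULL1 1000 512
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1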
00:33:58.957 18:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1348398 00:33:58.957 18:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1348398 00:33:58.957 18:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:58.957 18:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:33:59.890 Read completed with error (sct=0, sc=11) 00:33:59.890 18:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:59.890 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:59.890 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:00.148 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:00.148 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:00.148 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:00.149 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:00.149 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:00.149 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:00.149 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:00.406 18:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:34:00.406 18:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:34:00.664 true 00:34:00.664 18:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1348398 00:34:00.664 18:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:01.230 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:01.230 18:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:01.230 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:01.489 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:01.489 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:01.489 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:01.489 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:01.489 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:01.489 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:34:01.489 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:01.748 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:01.748 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:01.748 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:01.748 18:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:34:01.749 18:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:34:02.315 true 00:34:02.315 18:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1348398 00:34:02.315 18:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:02.882 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:02.882 18:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:02.882 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:02.882 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:03.140 18:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:34:03.140 18:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:34:03.398 true 00:34:03.398 18:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1348398 00:34:03.398 18:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:03.965 18:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:04.223 18:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:34:04.223 18:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:34:04.480 true 00:34:04.480 18:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1348398 00:34:04.480 18:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:04.739 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:05.305 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:34:05.305 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:34:05.563 true 00:34:05.563 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1348398 00:34:05.563 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:05.820 18:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:06.078 18:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:34:06.078 18:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:34:06.644 true 00:34:06.644 18:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1348398 00:34:06.644 18:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:06.902 18:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:07.159 18:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:34:07.159 18:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:34:07.417 true 00:34:07.417 18:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1348398 00:34:07.417 18:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:07.983 18:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:08.247 18:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:34:08.247 18:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:34:08.505 true 00:34:08.505 18:44:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1348398 00:34:08.505 18:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:09.070 18:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:09.328 18:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:34:09.328 18:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:34:09.587 true 00:34:09.587 18:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1348398 00:34:09.587 18:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:10.153 18:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:10.410 18:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:34:10.410 18:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:34:10.667 true 00:34:10.667 18:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1348398 00:34:10.667 18:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:10.924 18:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:11.182 18:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:34:11.182 18:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:34:11.749 true 00:34:11.749 18:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1348398 00:34:11.749 18:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:12.006 18:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:12.264 18:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:34:12.264 18:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:34:12.522 true 00:34:12.522 18:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1348398 00:34:12.522 18:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:13.089 18:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:13.655 18:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:34:13.655 18:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:34:13.913 true 00:34:13.913 18:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1348398 00:34:13.913 18:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:14.171 18:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:14.428 18:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:34:14.428 18:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:34:14.711 true 00:34:14.711 18:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1348398 00:34:14.711 18:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:14.995 18:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:15.561 18:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:34:15.561 18:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:34:15.819 true 00:34:15.819 18:44:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1348398 00:34:15.819 18:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:16.385 18:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:16.642 18:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:34:16.642 18:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:34:16.901 true 00:34:16.901 18:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1348398 00:34:16.901 18:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:18.801 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:18.801 18:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:18.801 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:18.801 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:18.801 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:18.801 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:18.801 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:18.801 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:18.801 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:18.801 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:18.801 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:18.801 18:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:34:18.801 18:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:34:19.059 true 00:34:19.059 18:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1348398 00:34:19.059 18:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:19.993 18:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:19.993 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
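[editor's note] The pattern repeating through this stretch is the stress loop itself: as long as the spdk_nvme_perf initiator started at @40/@42 (PERF_PID, a 30 second randread run against 10.0.0.2:4420) is still alive, the script removes namespace 1 from cnode1, re-adds Delay0, and resizes NULL1 one step larger, so the connected host keeps absorbing namespace attach/detach and size-change events in the middle of I/O. The suppressed "Read completed with error (sct=0, sc=11)" lines are those reads failing on the initiator while its namespace is briefly gone, the event the test is deliberately provoking. Roughly, with rpc.py again standing for scripts/rpc.py:

    null_size=1000
    while kill -0 "$PERF_PID"; do
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))
        rpc.py bdev_null_resize NULL1 "$null_size"
    done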
00:34:19.993 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:19.993 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:19.993 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:19.993 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:19.993 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:19.993 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:19.993 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:20.250 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:20.250 18:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:34:20.250 18:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:34:20.507 true 00:34:20.507 18:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1348398 00:34:20.508 18:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:21.440 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:21.440 18:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:21.440 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:21.440 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:21.440 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:21.440 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:21.440 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:21.440 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:21.440 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:21.440 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:21.440 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:21.699 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:21.699 18:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:34:21.699 18:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:34:22.267 true 00:34:22.267 18:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1348398 00:34:22.267 18:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:22.834 18:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:23.093 18:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:34:23.093 18:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:34:23.659 true 00:34:23.659 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1348398 00:34:23.659 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:25.034 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:25.034 18:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:25.034 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:25.034 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:25.034 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:25.034 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:25.034 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:25.034 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:25.034 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:25.034 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:25.034 18:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:34:25.034 18:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:34:25.599 true 00:34:25.599 18:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1348398 00:34:25.599 18:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:26.165 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:26.165 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:26.165 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:26.165 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:26.165 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:26.165 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:26.165 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:26.422 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:26.422 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:26.422 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:34:26.422 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:26.422 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:34:26.422 18:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:34:26.680 true 00:34:26.680 18:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1348398 00:34:26.680 18:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:27.615 18:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:27.615 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:27.615 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:27.615 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:27.615 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:27.615 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:27.615 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:27.615 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:27.873 18:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:34:27.873 18:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:34:28.131 true 00:34:28.131 18:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1348398 00:34:28.131 18:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:29.070 18:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:29.070 Initializing NVMe Controllers 00:34:29.070 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:29.070 Controller IO queue size 128, less than required. 00:34:29.070 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:29.070 Controller IO queue size 128, less than required. 00:34:29.070 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:29.070 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:34:29.070 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:34:29.070 Initialization complete. Launching workers. 
00:34:29.070 ========================================================
00:34:29.070                                                                              Latency(us)
00:34:29.070 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:34:29.070 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    2712.00       1.32   19474.85    2661.41 1201956.68
00:34:29.070 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:    8686.85       4.24   14733.87    1568.15  362193.90
00:34:29.070 ========================================================
00:34:29.070 Total                                                                    :   11398.86       5.57   15861.84    1568.15 1201956.68
00:34:29.070
00:34:29.328 18:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024
00:34:29.328 18:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024
00:34:29.902 true
00:34:29.902 18:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1348398
00:34:29.902 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1348398) - No such process
00:34:29.902 18:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1348398
00:34:29.902 18:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:34:30.473 18:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:34:30.732 18:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:34:30.732 18:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:34:30.732 18:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:34:30.732 18:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:34:30.732 18:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:34:31.667 null0
00:34:31.667 18:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:34:31.667 18:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:34:31.667 18:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:34:32.232 null1
00:34:32.232 18:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:34:32.232 18:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:34:32.233 18:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:34:32.799 null2 00:34:32.799 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:34:32.799 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:34:32.799 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:34:33.058 null3 00:34:33.058 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:34:33.058 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:34:33.058 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:34:33.625 null4 00:34:33.625 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:34:33.625 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:34:33.625 18:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:34:33.883 null5 00:34:33.883 18:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:34:33.883 18:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:34:33.883 18:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:34:34.450 null6 00:34:34.450 18:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:34:34.450 18:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:34:34.450 18:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:34:35.388 null7 00:34:35.388 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:34:35.388 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:34:35.388 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:34:35.388 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:34:35.388 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
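For readability: the trace above has just created the eight null bdevs (null0 through null7) that the hotplug workers will churn, via script lines @58-@60. A minimal bash sketch of that setup phase, reconstructed from the trace rather than copied from ns_hotplug_stress.sh (the rpc_py shorthand and loop body are assumptions; the real script may differ in detail):

# Assumed shorthand for the RPC client used throughout this run.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Create eight null bdevs (100 MB each, 4096-byte blocks) to serve as hot-plugged namespaces.
nthreads=8
pids=()
for ((i = 0; i < nthreads; i++)); do
	"$rpc_py" bdev_null_create "null$i" 100 4096
done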
00:34:35.388 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:34:35.388 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:34:35.388 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:34:35.388 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:34:35.388 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:34:35.388 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:35.388 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:35.388 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:34:35.389 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:34:35.389 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:34:35.389 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:34:35.389 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:34:35.389 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:34:35.389 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:35.389 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:35.389 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:34:35.389 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:34:35.389 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:34:35.389 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:34:35.389 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:34:35.389 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:34:35.389 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:35.389 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:35.389 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:34:35.389 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:34:35.389 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:34:35.389 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:34:35.389 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:34:35.389 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:34:35.389 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:35.389 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:35.389 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:34:35.389 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:34:35.389 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:34:35.389 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:34:35.389 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:34:35.389 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:34:35.389 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:35.389 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:35.389 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:34:35.389 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:34:35.389 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:34:35.389 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:34:35.389 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:34:35.389 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:34:35.389 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:35.389 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:35.389 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:34:35.389 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:34:35.389 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:34:35.389 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:34:35.389 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:34:35.389 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:34:35.389 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:35.389 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:35.389 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:34:35.389 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:34:35.389 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:34:35.389 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:34:35.389 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:34:35.389 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:34:35.389 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1352758 1352760 1352763 1352765 1352767 1352769 1352771 1352773 00:34:35.389 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:35.389 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:35.389 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:35.389 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:35.389 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:35.648 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:35.648 18:45:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:35.648 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:35.648 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:35.648 18:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:35.909 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:35.909 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:35.909 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:35.909 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:35.909 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:35.909 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:35.909 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:35.909 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:35.909 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:35.909 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:35.909 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:35.909 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:35.909 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:35.909 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:35.909 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:34:35.909 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:35.909 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:35.909 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:35.909 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:35.909 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:35.909 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:35.909 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:35.909 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:35.909 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:36.477 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:36.477 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:36.478 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:36.478 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:36.478 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:36.478 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:36.478 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:36.478 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 
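The add/remove churn that repeats for the rest of this phase (script lines @14-@18 per worker, workers launched at @62-@64 and reaped at @66 with the wait on pids 1352758 1352760 ...) boils down to roughly the following bash sketch. It is reconstructed from the trace, not copied from ns_hotplug_stress.sh, so the helper body and variable names are assumptions:

# Assumed shorthand for the RPC client; nthreads/pids as set up earlier in the trace.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nthreads=8
pids=()

# One worker: repeatedly attach bdev $2 as namespace $1 on cnode1, then detach it.
add_remove() {
	local nsid=$1 bdev=$2
	for ((i = 0; i < 10; i++)); do
		"$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
		"$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
	done
}

# Run eight workers in parallel (nsid i+1 paired with bdev null$i) and wait for all of them.
for ((i = 0; i < nthreads; i++)); do
	add_remove $((i + 1)) "null$i" &
	pids+=($!)
done
wait "${pids[@]}"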
00:34:36.736 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:36.736 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:36.736 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:36.736 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:36.736 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:36.736 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:36.736 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:36.736 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:36.736 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:36.736 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:36.736 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:36.736 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:36.736 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:36.736 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:36.736 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:36.736 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:36.736 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:36.736 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:36.736 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:36.736 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:36.736 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:36.736 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:36.736 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:36.736 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:36.995 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:36.995 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:36.995 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:36.995 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:36.995 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:36.995 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:37.254 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:37.254 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:37.254 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:37.254 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:37.254 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:37.254 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:37.254 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:37.254 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:37.512 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:37.512 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:37.512 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:37.512 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:37.512 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:37.512 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:37.512 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:37.512 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:37.512 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:37.512 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:37.512 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:37.512 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:37.512 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:37.512 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:37.512 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:37.512 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:37.512 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:37.512 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:37.770 18:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:37.770 18:45:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:37.770 18:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:37.770 18:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:37.770 18:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:37.770 18:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:37.770 18:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:37.770 18:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:38.029 18:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:38.029 18:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:38.029 18:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:38.029 18:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:38.029 18:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:38.029 18:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:38.029 18:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:38.029 18:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:38.029 18:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:38.029 18:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:38.029 18:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:38.029 18:45:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:38.029 18:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:38.029 18:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:38.029 18:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:38.029 18:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:38.029 18:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:38.029 18:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:38.321 18:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:38.321 18:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:38.321 18:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:38.321 18:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:38.321 18:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:38.321 18:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:38.321 18:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:38.322 18:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:38.322 18:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:38.322 18:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:38.322 18:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:38.322 
18:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:38.602 18:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:38.602 18:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:38.602 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:38.602 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:38.602 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:38.602 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:38.602 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:38.603 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:38.603 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:38.603 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:38.603 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:38.603 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:38.603 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:38.603 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:38.603 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:38.603 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:38.603 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:38.603 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:38.603 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:38.603 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:38.860 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:38.860 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:38.860 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:38.860 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:38.860 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:38.860 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:38.860 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:38.860 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:38.860 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:38.860 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:39.118 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:39.118 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:39.118 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:39.118 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:39.377 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:39.377 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:34:39.377 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:39.377 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:39.377 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:39.377 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:39.377 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:39.377 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:39.377 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:39.377 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:39.377 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:39.377 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:39.377 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:39.377 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:39.377 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:39.377 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:39.377 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:39.377 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:39.635 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:39.635 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:39.635 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:39.635 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:39.635 
18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:39.635 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:39.635 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:39.635 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:39.635 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:39.635 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:39.635 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:39.635 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:39.893 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:39.893 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:39.893 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:39.893 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:39.893 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:39.893 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:39.893 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:39.893 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:39.893 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:39.893 18:45:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:39.893 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:39.893 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:39.893 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:39.893 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:39.893 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:39.893 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:39.893 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:39.893 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:39.893 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:39.893 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:40.151 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:40.151 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:40.151 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:40.151 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:40.151 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:40.151 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:40.151 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:40.151 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:40.151 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:40.151 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:40.151 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:40.151 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:40.409 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:40.409 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:40.409 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:40.409 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:40.409 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:40.409 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:40.409 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:40.409 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:40.668 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:40.668 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:40.668 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:40.668 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:40.668 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:40.668 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:40.668 18:45:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:40.668 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:40.668 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:40.668 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:40.668 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:40.668 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:40.668 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:40.668 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:40.668 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:40.668 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:40.668 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:40.668 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:40.925 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:40.925 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:40.926 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:40.926 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:40.926 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:40.926 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:41.183 
18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:41.183 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:41.183 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:41.183 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:41.184 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:41.184 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:41.184 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:41.184 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:41.184 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:41.184 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:41.184 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:41.184 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:41.184 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:41.184 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:41.184 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:41.184 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:41.184 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:41.184 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:41.184 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:41.184 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:41.442 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:41.442 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:41.442 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:41.442 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:41.442 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:41.442 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:41.442 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:41.442 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:41.442 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:41.442 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:41.442 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:41.442 18:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:41.700 18:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:41.700 18:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:41.700 18:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:41.700 18:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:41.700 18:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:41.700 18:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:41.700 18:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:41.700 18:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:41.700 18:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:41.700 18:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:41.700 18:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:41.700 18:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:41.959 18:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:41.959 18:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:41.959 18:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:41.959 18:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:41.959 18:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:41.959 18:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:41.959 18:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:34:41.959 18:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:34:41.959 18:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:34:41.959 18:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:34:41.959 18:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:41.959 18:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:34:41.959 18:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:41.959 18:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:41.959 rmmod nvme_tcp 00:34:41.959 rmmod nvme_fabrics 00:34:41.959 rmmod nvme_keyring 00:34:41.959 18:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:41.959 18:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:34:41.959 18:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:34:41.959 18:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 1347583 ']' 00:34:41.959 18:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 1347583 00:34:41.959 18:45:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 1347583 ']' 00:34:41.959 18:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 1347583 00:34:41.959 18:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:34:41.959 18:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:41.960 18:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1347583 00:34:42.219 18:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:34:42.219 18:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:34:42.219 18:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1347583' 00:34:42.219 killing process with pid 1347583 00:34:42.219 18:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 1347583 00:34:42.219 18:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 1347583 00:34:42.479 18:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:34:42.479 18:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:34:42.479 18:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:34:42.479 18:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:34:42.479 18:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-save 00:34:42.479 18:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:34:42.479 18:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-restore 00:34:42.479 18:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:42.479 18:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:42.479 18:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:42.479 18:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:42.479 18:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:45.020 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:45.020 00:34:45.020 real 0m56.139s 00:34:45.020 user 3m41.582s 00:34:45.020 sys 0m24.561s 00:34:45.020 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:45.020 18:45:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:34:45.020 ************************************ 00:34:45.020 END TEST nvmf_ns_hotplug_stress 00:34:45.020 ************************************ 00:34:45.020 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:34:45.020 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:34:45.020 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:45.020 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:45.020 ************************************ 00:34:45.020 START TEST nvmf_delete_subsystem 00:34:45.020 ************************************ 00:34:45.020 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:34:45.020 * Looking for test storage... 00:34:45.020 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:45.020 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:34:45.020 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lcov --version 00:34:45.020 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:34:45.020 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:34:45.020 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:45.020 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:45.020 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:45.020 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:34:45.020 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:34:45.020 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:34:45.020 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:34:45.020 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:34:45.020 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:34:45.020 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:34:45.020 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:45.020 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:34:45.020 18:45:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:34:45.020 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:45.020 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:45.020 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:34:45.020 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:34:45.020 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:45.020 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:34:45.020 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:34:45.020 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:34:45.020 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:34:45.020 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:45.020 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:34:45.020 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:34:45.020 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:45.020 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:45.020 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:34:45.020 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:45.020 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:34:45.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:45.020 --rc genhtml_branch_coverage=1 00:34:45.020 --rc genhtml_function_coverage=1 00:34:45.020 --rc genhtml_legend=1 00:34:45.020 --rc geninfo_all_blocks=1 00:34:45.020 --rc geninfo_unexecuted_blocks=1 00:34:45.020 00:34:45.020 ' 00:34:45.020 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:34:45.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:45.020 --rc genhtml_branch_coverage=1 00:34:45.020 --rc genhtml_function_coverage=1 00:34:45.020 --rc genhtml_legend=1 00:34:45.020 --rc geninfo_all_blocks=1 00:34:45.020 --rc geninfo_unexecuted_blocks=1 00:34:45.020 00:34:45.020 ' 00:34:45.020 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:34:45.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:45.020 --rc genhtml_branch_coverage=1 00:34:45.020 --rc genhtml_function_coverage=1 00:34:45.020 --rc genhtml_legend=1 00:34:45.020 --rc geninfo_all_blocks=1 00:34:45.020 --rc 
geninfo_unexecuted_blocks=1 00:34:45.020 00:34:45.020 ' 00:34:45.020 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:34:45.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:45.020 --rc genhtml_branch_coverage=1 00:34:45.020 --rc genhtml_function_coverage=1 00:34:45.020 --rc genhtml_legend=1 00:34:45.020 --rc geninfo_all_blocks=1 00:34:45.020 --rc geninfo_unexecuted_blocks=1 00:34:45.020 00:34:45.020 ' 00:34:45.020 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:45.020 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:34:45.020 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:45.020 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:45.020 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:45.020 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:45.020 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:45.020 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:45.020 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:45.020 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:45.020 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:45.020 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:45.020 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:34:45.020 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:34:45.021 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:45.021 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:45.021 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:45.021 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:45.021 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:45.021 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:34:45.021 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:34:45.021 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:45.021 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:45.021 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.021 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.021 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.021 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:34:45.021 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.021 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:34:45.021 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:45.021 18:45:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:45.021 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:45.021 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:45.021 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:45.021 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:45.021 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:45.021 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:45.021 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:45.021 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:45.021 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:34:45.021 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:34:45.021 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:45.021 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:34:45.021 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:34:45.021 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:34:45.021 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:45.021 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:45.021 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:45.021 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:34:45.021 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:34:45.021 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:34:45.021 18:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:48.314 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:48.314 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:34:48.314 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:48.314 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:48.314 18:45:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:48.314 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:48.314 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:48.314 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:34:48.314 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:48.314 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:34:48.314 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:34:48.314 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:34:48.314 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:34:48.314 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:34:48.314 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:34:48.314 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:48.314 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:48.314 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:48.314 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:48.314 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:48.314 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:48.314 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:48.314 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:48.314 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:48.314 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:48.314 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:48.314 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:48.314 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:48.315 18:45:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:34:48.315 Found 0000:84:00.0 (0x8086 - 0x159b) 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:34:48.315 Found 0000:84:00.1 (0x8086 - 0x159b) 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:48.315 18:45:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:34:48.315 Found net devices under 0000:84:00.0: cvl_0_0 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:34:48.315 Found net devices under 0000:84:00.1: cvl_0_1 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:48.315 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:48.315 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:34:48.315 00:34:48.315 --- 10.0.0.2 ping statistics --- 00:34:48.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:48.315 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:48.315 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:48.315 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:34:48.315 00:34:48.315 --- 10.0.0.1 ping statistics --- 00:34:48.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:48.315 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=1356239 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 1356239 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 1356239 ']' 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:48.315 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:48.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
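For readers following along, the nvmf_tcp_init block above (the ip netns / ip addr / iptables / ping sequence) amounts to the following hand-runnable iproute2 steps. This is a rough sketch, not the harness code itself: the interface names cvl_0_0/cvl_0_1, the 10.0.0.0/24 addresses and port 4420 are taken from the log, everything else is illustrative.

  ip netns add cvl_0_0_ns_spdk                                       # target side gets its own network namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target port, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic in, as the harness does
  ping -c 1 10.0.0.2                                                 # connectivity check mirroring the pings above

The namespace split is what lets target and initiator share one physical NIC pair on the same host while still exercising a real TCP path between them.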
00:34:48.316 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:48.316 18:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:48.316 [2024-10-08 18:45:16.583712] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:48.316 [2024-10-08 18:45:16.586488] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:34:48.316 [2024-10-08 18:45:16.586611] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:48.316 [2024-10-08 18:45:16.746434] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:48.574 [2024-10-08 18:45:16.929325] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:48.574 [2024-10-08 18:45:16.929389] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:48.574 [2024-10-08 18:45:16.929406] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:48.574 [2024-10-08 18:45:16.929420] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:48.574 [2024-10-08 18:45:16.929431] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:48.574 [2024-10-08 18:45:16.930376] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:34:48.574 [2024-10-08 18:45:16.930383] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:34:48.574 [2024-10-08 18:45:17.034015] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:48.574 [2024-10-08 18:45:17.034052] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:48.574 [2024-10-08 18:45:17.034340] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
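The NOTICE lines above come from starting nvmf_tgt with --interrupt-mode and a two-core mask (-m 0x3): one reactor per core, and each spdk_thread is switched to interrupt mode instead of polling. A minimal sketch of the same launch, assuming an SPDK build tree in the current directory and the namespace created earlier; the readiness loop is only an illustrative stand-in for the harness's waitforlisten helper.

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
  nvmfpid=$!
  # poll the default RPC socket until the target answers
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.2
  done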
00:34:48.574 18:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:48.575 18:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:34:48.575 18:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:34:48.575 18:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:48.575 18:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:48.575 18:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:48.575 18:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:48.575 18:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:48.575 18:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:48.575 [2024-10-08 18:45:17.091084] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:48.575 18:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:48.575 18:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:34:48.575 18:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:48.575 18:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:48.575 18:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:48.575 18:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:48.575 18:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:48.575 18:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:48.833 [2024-10-08 18:45:17.115337] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:48.833 18:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:48.833 18:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:34:48.833 18:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:48.833 18:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:48.833 NULL1 00:34:48.833 18:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:48.833 18:45:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:34:48.833 18:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:48.833 18:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:48.833 Delay0 00:34:48.833 18:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:48.833 18:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:48.833 18:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:48.833 18:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:48.833 18:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:48.833 18:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1356325 00:34:48.833 18:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:34:48.833 18:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:34:48.833 [2024-10-08 18:45:17.194956] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
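The rpc_cmd calls above build the actual fixture of this test: a null bdev wrapped in a delay bdev with very large latency values, exported as namespace 1 of nqn.2016-06.io.spdk:cnode1, with spdk_nvme_perf keeping 128 I/Os outstanding against it, so plenty of I/O is still in flight when the subsystem is deleted after the sleep 2. Roughly what those wrappers expand to, assuming scripts/rpc.py from the same SPDK checkout (flags copied verbatim from the log, everything else a sketch):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py bdev_null_create NULL1 1000 512             # backing bdev with no real storage
  ./scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # latency knobs large enough to keep I/O queued
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  ./build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!

With that in place, the burst of 'Read/Write completed with error (sct=0, sc=8)' lines that follows is the expected outcome: once nvmf_delete_subsystem runs, every queued request is completed with an error status instead of hanging.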
00:34:50.729 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:50.729 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:50.729 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 starting I/O failed: -6 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 starting I/O failed: -6 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 starting I/O failed: -6 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Write completed with error (sct=0, sc=8) 00:34:50.987 starting I/O failed: -6 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 starting I/O failed: -6 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Write completed with error (sct=0, sc=8) 00:34:50.987 Write completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 starting I/O failed: -6 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Write completed with error (sct=0, sc=8) 00:34:50.987 starting I/O failed: -6 00:34:50.987 Write completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 starting I/O failed: -6 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 starting I/O failed: -6 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Write completed with error (sct=0, sc=8) 00:34:50.987 Write completed with error (sct=0, sc=8) 00:34:50.987 Write completed with error (sct=0, sc=8) 00:34:50.987 starting I/O failed: -6 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Write completed with error (sct=0, sc=8) 00:34:50.987 [2024-10-08 18:45:19.404914] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c84390 is same with the state(6) to be set 00:34:50.987 Write completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 starting I/O failed: -6 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Write completed with error (sct=0, sc=8) 00:34:50.987 Write completed with error (sct=0, sc=8) 00:34:50.987 Write completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Write 
completed with error (sct=0, sc=8) 00:34:50.987 Write completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Write completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 starting I/O failed: -6 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Write completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Write completed with error (sct=0, sc=8) 00:34:50.987 Write completed with error (sct=0, sc=8) 00:34:50.987 Write completed with error (sct=0, sc=8) 00:34:50.987 starting I/O failed: -6 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Write completed with error (sct=0, sc=8) 00:34:50.987 Write completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Write completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 starting I/O failed: -6 00:34:50.987 Write completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Write completed with error (sct=0, sc=8) 00:34:50.987 starting I/O failed: -6 00:34:50.987 Write completed with error (sct=0, sc=8) 00:34:50.987 Write completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Write completed with error (sct=0, sc=8) 00:34:50.987 Write completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 starting I/O failed: -6 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Write completed with error (sct=0, sc=8) 00:34:50.987 Write completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Write completed with error (sct=0, sc=8) 00:34:50.987 Write completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Write completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 starting I/O failed: -6 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Write completed with error (sct=0, sc=8) 00:34:50.987 Write completed with error 
(sct=0, sc=8) 00:34:50.987 Write completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Write completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Write completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 starting I/O failed: -6 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Write completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Write completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 starting I/O failed: -6 00:34:50.987 Write completed with error (sct=0, sc=8) 00:34:50.987 Write completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Write completed with error (sct=0, sc=8) 00:34:50.987 starting I/O failed: -6 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 starting I/O failed: -6 00:34:50.987 [2024-10-08 18:45:19.405665] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f95b400d480 is same with the state(6) to be set 00:34:50.987 Write completed with error (sct=0, sc=8) 00:34:50.987 Write completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Write completed with error (sct=0, sc=8) 00:34:50.987 Write completed with error (sct=0, sc=8) 00:34:50.987 Write completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Write completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Write completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Write completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Write completed with error (sct=0, sc=8) 00:34:50.987 Write completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Write completed with error (sct=0, sc=8) 00:34:50.987 Write completed with error (sct=0, sc=8) 00:34:50.987 Write completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Write completed with error (sct=0, sc=8) 00:34:50.987 
Read completed with error (sct=0, sc=8) 00:34:50.987 Write completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Write completed with error (sct=0, sc=8) 00:34:50.987 Write completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Write completed with error (sct=0, sc=8) 00:34:50.987 Write completed with error (sct=0, sc=8) 00:34:50.987 Write completed with error (sct=0, sc=8) 00:34:50.987 Write completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:50.987 Read completed with error (sct=0, sc=8) 00:34:51.922 [2024-10-08 18:45:20.373571] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c85a70 is same with the state(6) to be set 00:34:51.922 Write completed with error (sct=0, sc=8) 00:34:51.922 Write completed with error (sct=0, sc=8) 00:34:51.922 Read completed with error (sct=0, sc=8) 00:34:51.922 Read completed with error (sct=0, sc=8) 00:34:51.922 Read completed with error (sct=0, sc=8) 00:34:51.922 Read completed with error (sct=0, sc=8) 00:34:51.922 Read completed with error (sct=0, sc=8) 00:34:51.922 Write completed with error (sct=0, sc=8) 00:34:51.922 Read completed with error (sct=0, sc=8) 00:34:51.922 Write completed with error (sct=0, sc=8) 00:34:51.922 Write completed with error (sct=0, sc=8) 00:34:51.922 Read completed with error (sct=0, sc=8) 00:34:51.922 Read completed with error (sct=0, sc=8) 00:34:51.922 Write completed with error (sct=0, sc=8) 00:34:51.922 Read completed with error (sct=0, sc=8) 00:34:51.922 Write completed with error (sct=0, sc=8) 00:34:51.922 Read completed with error (sct=0, sc=8) 00:34:51.922 Read completed with error (sct=0, sc=8) 00:34:51.922 Read completed with error (sct=0, sc=8) 00:34:51.922 Read completed with error (sct=0, sc=8) 00:34:51.922 Read completed with error (sct=0, sc=8) 00:34:51.922 [2024-10-08 18:45:20.403921] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f95b400cff0 is same with the state(6) to be set 00:34:51.922 Read completed with error (sct=0, sc=8) 00:34:51.922 Write completed with error (sct=0, sc=8) 00:34:51.922 Read completed with error (sct=0, sc=8) 00:34:51.922 Read completed with error (sct=0, sc=8) 00:34:51.922 Read completed with error (sct=0, sc=8) 00:34:51.922 Write completed with error (sct=0, sc=8) 00:34:51.922 Read completed with error (sct=0, sc=8) 00:34:51.922 Read completed with error (sct=0, sc=8) 00:34:51.922 Write completed with error (sct=0, sc=8) 00:34:51.922 Write completed with error (sct=0, sc=8) 00:34:51.922 Read completed with error (sct=0, sc=8) 00:34:51.922 Read completed with error (sct=0, sc=8) 00:34:51.922 Read completed with error (sct=0, sc=8) 00:34:51.922 Read completed with error (sct=0, sc=8) 00:34:51.922 Read completed with error (sct=0, sc=8) 00:34:51.922 Read completed with error (sct=0, sc=8) 00:34:51.922 Read completed with error (sct=0, sc=8) 00:34:51.922 Read completed with error (sct=0, sc=8) 00:34:51.922 Write completed with error (sct=0, sc=8) 00:34:51.922 Write completed with error (sct=0, sc=8) 00:34:51.922 [2024-10-08 18:45:20.404129] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f95b400d7b0 is same with the state(6) to be set 00:34:51.922 Read completed with error (sct=0, sc=8) 
00:34:51.922 Read completed with error (sct=0, sc=8) 00:34:51.922 Write completed with error (sct=0, sc=8) 00:34:51.922 Read completed with error (sct=0, sc=8) 00:34:51.922 Write completed with error (sct=0, sc=8) 00:34:51.922 Read completed with error (sct=0, sc=8) 00:34:51.922 Write completed with error (sct=0, sc=8) 00:34:51.922 Read completed with error (sct=0, sc=8) 00:34:51.922 Read completed with error (sct=0, sc=8) 00:34:51.922 Read completed with error (sct=0, sc=8) 00:34:51.922 Write completed with error (sct=0, sc=8) 00:34:51.922 Read completed with error (sct=0, sc=8) 00:34:51.922 Read completed with error (sct=0, sc=8) 00:34:51.922 Write completed with error (sct=0, sc=8) 00:34:51.922 Read completed with error (sct=0, sc=8) 00:34:51.922 Read completed with error (sct=0, sc=8) 00:34:51.922 Read completed with error (sct=0, sc=8) 00:34:51.922 Read completed with error (sct=0, sc=8) 00:34:51.922 [2024-10-08 18:45:20.405770] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c84570 is same with the state(6) to be set 00:34:51.922 Read completed with error (sct=0, sc=8) 00:34:51.922 Write completed with error (sct=0, sc=8) 00:34:51.922 Read completed with error (sct=0, sc=8) 00:34:51.922 Read completed with error (sct=0, sc=8) 00:34:51.922 Read completed with error (sct=0, sc=8) 00:34:51.922 Read completed with error (sct=0, sc=8) 00:34:51.922 Read completed with error (sct=0, sc=8) 00:34:51.922 Read completed with error (sct=0, sc=8) 00:34:51.922 Read completed with error (sct=0, sc=8) 00:34:51.922 Read completed with error (sct=0, sc=8) 00:34:51.923 Read completed with error (sct=0, sc=8) 00:34:51.923 Read completed with error (sct=0, sc=8) 00:34:51.923 Read completed with error (sct=0, sc=8) 00:34:51.923 Write completed with error (sct=0, sc=8) 00:34:51.923 Read completed with error (sct=0, sc=8) 00:34:51.923 Read completed with error (sct=0, sc=8) 00:34:51.923 Read completed with error (sct=0, sc=8) 00:34:51.923 [2024-10-08 18:45:20.406699] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c84930 is same with the state(6) to be set 00:34:51.923 Initializing NVMe Controllers 00:34:51.923 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:51.923 Controller IO queue size 128, less than required. 00:34:51.923 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:51.923 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:34:51.923 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:34:51.923 Initialization complete. Launching workers. 
00:34:51.923 ======================================================== 00:34:51.923 Latency(us) 00:34:51.923 Device Information : IOPS MiB/s Average min max 00:34:51.923 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 157.18 0.08 925116.32 619.20 1045286.61 00:34:51.923 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 162.64 0.08 911418.00 449.72 1013313.26 00:34:51.923 ======================================================== 00:34:51.923 Total : 319.82 0.16 918150.36 449.72 1045286.61 00:34:51.923 00:34:51.923 [2024-10-08 18:45:20.407424] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c85a70 (9): Bad file descriptor 00:34:51.923 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:34:51.923 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:51.923 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:34:51.923 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1356325 00:34:51.923 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:34:52.496 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:34:52.496 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1356325 00:34:52.496 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1356325) - No such process 00:34:52.496 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1356325 00:34:52.496 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:34:52.496 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1356325 00:34:52.496 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:34:52.496 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:52.496 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:34:52.496 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:52.496 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 1356325 00:34:52.496 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:34:52.496 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:52.496 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:52.496 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:52.496 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:34:52.496 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:52.496 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:52.496 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:52.496 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:52.496 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:52.496 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:52.496 [2024-10-08 18:45:20.935743] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:52.496 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:52.496 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:52.496 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:52.496 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:52.496 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:52.496 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1356724 00:34:52.496 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:34:52.496 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:34:52.496 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1356724 00:34:52.496 18:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:34:52.496 [2024-10-08 18:45:21.011549] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
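The second run recreates the subsystem, starts a shorter 3-second perf job, and then idles in the kill -0 / sleep 0.5 loop visible below until the perf process exits; the later "(1356724) - No such process" message is that loop's normal exit condition, not a failure. A minimal sketch of the polling idiom, with illustrative variable names rather than the script's own:

  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do                       # perf still running?
      (( delay++ > 20 )) && { echo "timed out waiting for perf" >&2; break; }
      sleep 0.5
  done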
00:34:53.066 18:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:34:53.066 18:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1356724 00:34:53.066 18:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:34:53.634 18:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:34:53.634 18:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1356724 00:34:53.634 18:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:34:54.204 18:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:34:54.204 18:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1356724 00:34:54.204 18:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:34:54.464 18:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:34:54.464 18:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1356724 00:34:54.464 18:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:34:55.034 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:34:55.034 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1356724 00:34:55.034 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:34:55.605 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:34:55.605 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1356724 00:34:55.605 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:34:55.864 Initializing NVMe Controllers 00:34:55.864 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:55.864 Controller IO queue size 128, less than required. 00:34:55.864 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:55.864 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:34:55.864 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:34:55.864 Initialization complete. Launching workers. 
00:34:55.864 ======================================================== 00:34:55.864 Latency(us) 00:34:55.864 Device Information : IOPS MiB/s Average min max 00:34:55.864 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004788.40 1000216.34 1041500.43 00:34:55.864 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005256.76 1000169.91 1040803.59 00:34:55.864 ======================================================== 00:34:55.864 Total : 256.00 0.12 1005022.58 1000169.91 1041500.43 00:34:55.864 00:34:56.123 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:34:56.123 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1356724 00:34:56.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1356724) - No such process 00:34:56.123 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1356724 00:34:56.123 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:34:56.123 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:34:56.123 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:34:56.123 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:34:56.123 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:56.123 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:34:56.124 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:56.124 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:56.124 rmmod nvme_tcp 00:34:56.124 rmmod nvme_fabrics 00:34:56.124 rmmod nvme_keyring 00:34:56.124 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:56.124 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:34:56.124 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:34:56.124 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 1356239 ']' 00:34:56.124 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 1356239 00:34:56.124 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 1356239 ']' 00:34:56.124 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 1356239 00:34:56.124 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:34:56.124 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:56.124 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1356239 00:34:56.124 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:56.124 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:56.124 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1356239' 00:34:56.124 killing process with pid 1356239 00:34:56.124 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 1356239 00:34:56.124 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 1356239 00:34:56.693 18:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:34:56.693 18:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:34:56.693 18:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:34:56.693 18:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:34:56.693 18:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-save 00:34:56.693 18:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:34:56.693 18:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-restore 00:34:56.693 18:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:56.693 18:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:56.693 18:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:56.693 18:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:56.693 18:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:58.602 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:58.602 00:34:58.602 real 0m14.025s 00:34:58.602 user 0m25.937s 00:34:58.602 sys 0m4.733s 00:34:58.602 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:58.602 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:58.602 ************************************ 00:34:58.602 END TEST nvmf_delete_subsystem 00:34:58.602 ************************************ 00:34:58.862 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:34:58.862 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:34:58.862 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:34:58.862 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:58.862 ************************************ 00:34:58.862 START TEST nvmf_host_management 00:34:58.862 ************************************ 00:34:58.862 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:34:58.862 * Looking for test storage... 00:34:58.862 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:58.862 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:34:58.862 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:34:58.862 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:34:59.122 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:34:59.122 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:59.122 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:59.122 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:59.122 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:34:59.122 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:34:59.122 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:34:59.122 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:34:59.122 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:34:59.122 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:34:59.122 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:34:59.122 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:59.122 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:34:59.122 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:34:59.122 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:59.122 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:59.122 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:34:59.122 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:34:59.122 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:59.122 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:34:59.123 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:34:59.123 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:34:59.123 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:34:59.123 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:59.123 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:34:59.123 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:34:59.123 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:59.123 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:59.123 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:34:59.123 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:59.123 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:34:59.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:59.123 --rc genhtml_branch_coverage=1 00:34:59.123 --rc genhtml_function_coverage=1 00:34:59.123 --rc genhtml_legend=1 00:34:59.123 --rc geninfo_all_blocks=1 00:34:59.123 --rc geninfo_unexecuted_blocks=1 00:34:59.123 00:34:59.123 ' 00:34:59.123 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:34:59.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:59.123 --rc genhtml_branch_coverage=1 00:34:59.123 --rc genhtml_function_coverage=1 00:34:59.123 --rc genhtml_legend=1 00:34:59.123 --rc geninfo_all_blocks=1 00:34:59.123 --rc geninfo_unexecuted_blocks=1 00:34:59.123 00:34:59.123 ' 00:34:59.123 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:34:59.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:59.123 --rc genhtml_branch_coverage=1 00:34:59.123 --rc genhtml_function_coverage=1 00:34:59.123 --rc genhtml_legend=1 00:34:59.123 --rc geninfo_all_blocks=1 00:34:59.123 --rc geninfo_unexecuted_blocks=1 00:34:59.123 00:34:59.123 ' 00:34:59.123 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:34:59.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:59.123 --rc genhtml_branch_coverage=1 00:34:59.123 --rc genhtml_function_coverage=1 00:34:59.123 --rc genhtml_legend=1 
00:34:59.123 --rc geninfo_all_blocks=1 00:34:59.123 --rc geninfo_unexecuted_blocks=1 00:34:59.123 00:34:59.123 ' 00:34:59.123 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:59.123 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:34:59.123 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:59.123 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:59.123 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:59.123 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:59.123 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:59.123 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:59.123 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:59.123 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:59.123 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:59.123 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:59.123 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:34:59.123 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:34:59.123 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:59.123 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:59.123 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:59.123 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:59.123 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:59.123 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:34:59.123 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:59.123 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:59.123 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:59.123 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:59.123 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:59.123 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:59.123 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:34:59.123 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:59.123 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:34:59.123 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:59.123 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:59.123 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:59.123 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:59.123 18:45:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:59.123 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:59.123 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:59.123 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:59.123 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:59.123 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:59.123 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:59.123 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:59.123 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:34:59.123 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:34:59.123 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:59.123 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:34:59.123 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:34:59.123 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:34:59.123 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:59.123 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:59.123 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:59.123 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:34:59.123 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:34:59.123 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:34:59.123 18:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:02.417 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:02.417 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:35:02.417 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:02.417 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:02.417 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:02.417 18:45:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:02.417 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:02.417 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:35:02.417 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:02.417 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:35:02.417 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:35:02.417 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:35:02.417 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:35:02.417 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:35:02.417 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:35:02.417 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:02.417 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:02.417 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:02.417 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:02.417 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:02.417 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:02.417 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:02.417 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:02.417 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:02.417 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:02.417 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:02.417 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:02.417 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:02.417 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:02.417 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:02.417 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:02.417 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:02.417 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:02.417 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:02.417 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:35:02.417 Found 0000:84:00.0 (0x8086 - 0x159b) 00:35:02.417 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:02.417 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:02.417 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:02.417 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:02.417 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:02.417 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:02.417 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:35:02.417 Found 0000:84:00.1 (0x8086 - 0x159b) 00:35:02.417 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:02.417 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:02.417 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:02.417 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:02.417 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:02.417 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:02.417 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:02.417 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:02.417 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:02.417 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:02.417 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:02.417 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:02.417 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:02.417 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 
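The device filtering traced above works purely off PCI vendor:device IDs: 0x8086:0x1592 and 0x8086:0x159b are Intel E810 ports (bound to the ice driver on this node), 0x8086:0x37d2 is X722, and the 0x15b3 entries cover the Mellanox ConnectX family. A rough standalone equivalent is sketched below; it uses lspci instead of the harness's internal pci_bus_cache, so the helper and its output format are illustrative only, not what common.sh actually runs:

    # Enumerate NICs the nvmf tests consider supported, keyed by PCI ID (sketch).
    supported='8086:1592|8086:159b|8086:37d2|15b3:(1013|1015|1017|1019|101b|101d|1021|a2d6|a2dc)'
    lspci -Dnn | grep -E "\[(${supported})\]" | while read -r line; do
        bdf=${line%% *}                                    # e.g. 0000:84:00.0
        id=$(lspci -n -s "${bdf}" | awk '{print $3}')      # e.g. 8086:159b
        echo "Found ${bdf} (${id})"
        ls "/sys/bus/pci/devices/${bdf}/net" 2>/dev/null   # kernel netdev name, e.g. cvl_0_0
    done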
00:35:02.417 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:02.417 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:35:02.417 Found net devices under 0000:84:00.0: cvl_0_0 00:35:02.417 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:02.417 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:02.417 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:02.417 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:02.417 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:02.417 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:02.417 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:02.417 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:02.418 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:35:02.418 Found net devices under 0000:84:00.1: cvl_0_1 00:35:02.418 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:02.418 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:35:02.418 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes 00:35:02.418 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:35:02.418 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:35:02.418 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:35:02.418 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:02.418 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:02.418 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:02.418 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:02.418 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:02.418 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:02.418 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:02.418 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:02.418 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:02.418 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:02.418 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:02.418 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:02.418 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:02.418 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:02.418 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:02.418 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:02.418 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:02.418 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:02.418 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:02.418 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:02.418 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:02.418 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:02.418 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:02.418 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:02.418 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:35:02.418 00:35:02.418 --- 10.0.0.2 ping statistics --- 00:35:02.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:02.418 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:35:02.418 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:02.418 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:02.418 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:35:02.418 00:35:02.418 --- 10.0.0.1 ping statistics --- 00:35:02.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:02.418 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:35:02.418 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:02.418 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@448 -- # return 0 00:35:02.418 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:35:02.418 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:02.418 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:35:02.418 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:35:02.418 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:02.418 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:35:02.418 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:35:02.418 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:35:02.418 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:35:02.418 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:35:02.418 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:35:02.418 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:02.418 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:02.418 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=1359204 00:35:02.418 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:35:02.418 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 1359204 00:35:02.418 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1359204 ']' 00:35:02.418 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:02.418 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:02.418 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:35:02.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:02.418 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:02.418 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:02.418 [2024-10-08 18:45:30.516158] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:02.418 [2024-10-08 18:45:30.517416] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:35:02.418 [2024-10-08 18:45:30.517481] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:02.418 [2024-10-08 18:45:30.600147] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:02.418 [2024-10-08 18:45:30.739110] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:02.418 [2024-10-08 18:45:30.739185] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:02.418 [2024-10-08 18:45:30.739206] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:02.418 [2024-10-08 18:45:30.739222] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:02.418 [2024-10-08 18:45:30.739237] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:02.418 [2024-10-08 18:45:30.741417] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:35:02.418 [2024-10-08 18:45:30.741496] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:35:02.418 [2024-10-08 18:45:30.741548] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:35:02.418 [2024-10-08 18:45:30.741552] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:35:02.418 [2024-10-08 18:45:30.869239] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:02.418 [2024-10-08 18:45:30.869469] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:02.418 [2024-10-08 18:45:30.869854] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:35:02.418 [2024-10-08 18:45:30.870579] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:02.418 [2024-10-08 18:45:30.870893] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
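At this point the target side is fully up: one E810 port has been moved into a private network namespace and nvmf_tgt is running inside it in interrupt mode on cores 1-4 (reactors on cores 1/2/3/4, poll groups switched to intr mode above). Condensed into a plain sequence, the setup traced in the preceding lines amounts to roughly the following; interface, namespace, and address values are the ones printed in the log, and the nvmf_tgt path is abbreviated:

    NS=cvl_0_0_ns_spdk                        # target-side namespace
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"           # E810 port 0 -> target side
    ip addr add 10.0.0.1/24 dev cvl_0_1       # port 1 stays in the root ns (initiator)
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                        # root ns -> namespace reachability
    ip netns exec "$NS" ping -c 1 10.0.0.1    # namespace -> root ns
    modprobe nvme-tcp
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &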
00:35:02.418 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:02.418 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:35:02.418 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:35:02.418 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:02.418 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:02.418 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:02.418 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:02.418 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:02.418 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:02.418 [2024-10-08 18:45:30.934362] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:02.418 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:02.418 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:35:02.418 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:02.418 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:02.418 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:35:02.679 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:35:02.679 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:35:02.679 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:02.679 18:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:02.679 Malloc0 00:35:02.679 [2024-10-08 18:45:31.006533] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:02.679 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:02.679 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:35:02.679 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:02.679 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:02.679 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1359366 00:35:02.679 18:45:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1359366 /var/tmp/bdevperf.sock 00:35:02.679 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1359366 ']' 00:35:02.679 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:35:02.679 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:02.679 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:35:02.679 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:02.679 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:35:02.679 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:02.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:02.679 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:35:02.679 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:02.679 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:35:02.679 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:02.679 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:35:02.679 { 00:35:02.679 "params": { 00:35:02.679 "name": "Nvme$subsystem", 00:35:02.679 "trtype": "$TEST_TRANSPORT", 00:35:02.679 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:02.679 "adrfam": "ipv4", 00:35:02.679 "trsvcid": "$NVMF_PORT", 00:35:02.679 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:02.679 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:02.679 "hdgst": ${hdgst:-false}, 00:35:02.679 "ddgst": ${ddgst:-false} 00:35:02.679 }, 00:35:02.679 "method": "bdev_nvme_attach_controller" 00:35:02.679 } 00:35:02.679 EOF 00:35:02.679 )") 00:35:02.679 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:35:02.679 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 
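The bdevperf invocation above passes its bdev configuration through a bash process substitution (--json /dev/fd/63) rather than a file on disk; the heredoc being assembled here supplies one bdev_nvme_attach_controller entry pointing at the listener created a moment ago (10.0.0.2:4420, nqn.2016-06.io.spdk:cnode0). Stripped of the harness wrappers, the pattern looks roughly like the sketch below; the surrounding "subsystems"/"bdev" envelope is SPDK's standard JSON-config shape, and the harness's generator may add further bdev options around it:

    # Generate a minimal bdevperf JSON config on the fly and feed it via /dev/fd.
    gen_config() {
        printf '%s' '{ "subsystems": [ { "subsystem": "bdev", "config": [ {
          "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                      "adrfam": "ipv4", "trsvcid": "4420",
                      "subnqn": "nqn.2016-06.io.spdk:cnode0",
                      "hostnqn": "nqn.2016-06.io.spdk:host0",
                      "hdgst": false, "ddgst": false } } ] } ] }'
    }
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_config) -q 64 -o 65536 -w verify -t 10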
00:35:02.679 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:35:02.679 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:35:02.679 "params": { 00:35:02.679 "name": "Nvme0", 00:35:02.679 "trtype": "tcp", 00:35:02.679 "traddr": "10.0.0.2", 00:35:02.679 "adrfam": "ipv4", 00:35:02.679 "trsvcid": "4420", 00:35:02.679 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:02.679 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:02.679 "hdgst": false, 00:35:02.679 "ddgst": false 00:35:02.679 }, 00:35:02.679 "method": "bdev_nvme_attach_controller" 00:35:02.679 }' 00:35:02.679 [2024-10-08 18:45:31.093718] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:35:02.679 [2024-10-08 18:45:31.093809] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1359366 ] 00:35:02.679 [2024-10-08 18:45:31.164370] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:02.939 [2024-10-08 18:45:31.286357] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:35:03.199 Running I/O for 10 seconds... 00:35:03.199 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:03.199 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:35:03.199 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:35:03.199 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:03.199 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:03.199 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:03.199 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:03.199 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:35:03.199 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:35:03.199 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:35:03.199 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:35:03.199 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:35:03.199 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:35:03.199 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:35:03.199 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:35:03.199 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:03.199 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:35:03.199 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:03.199 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:03.199 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:35:03.199 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:35:03.199 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:35:03.461 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:35:03.461 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:35:03.461 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:35:03.461 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:35:03.461 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:03.461 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:03.461 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:03.461 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=526 00:35:03.461 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 526 -ge 100 ']' 00:35:03.461 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:35:03.461 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:35:03.461 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:35:03.461 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:35:03.461 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:03.461 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:03.461 [2024-10-08 18:45:31.922610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:03.461 [2024-10-08 18:45:31.922688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.461 [2024-10-08 18:45:31.922718] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:03.461 [2024-10-08 18:45:31.922734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.461 [2024-10-08 18:45:31.922748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:03.461 [2024-10-08 18:45:31.922762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.461 [2024-10-08 18:45:31.922777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:03.461 [2024-10-08 18:45:31.922791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.461 [2024-10-08 18:45:31.922806] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e6100 is same w 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:03.461 ith the state(6) to be set 00:35:03.461 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:35:03.461 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:03.461 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:03.462 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:03.462 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:35:03.462 [2024-10-08 18:45:31.930893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.462 [2024-10-08 18:45:31.930922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.462 [2024-10-08 18:45:31.930950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.462 [2024-10-08 18:45:31.930967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.462 [2024-10-08 18:45:31.930983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.462 [2024-10-08 18:45:31.930998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.462 [2024-10-08 18:45:31.931014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.462 [2024-10-08 18:45:31.931028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.462 [2024-10-08 18:45:31.931044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 
nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.462 [2024-10-08 18:45:31.931058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.462 [2024-10-08 18:45:31.931074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.462 [2024-10-08 18:45:31.931089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.462 [2024-10-08 18:45:31.931105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.462 [2024-10-08 18:45:31.931120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.462 [2024-10-08 18:45:31.931135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.462 [2024-10-08 18:45:31.931150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.462 [2024-10-08 18:45:31.931165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.462 [2024-10-08 18:45:31.931180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.462 [2024-10-08 18:45:31.931196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.462 [2024-10-08 18:45:31.931211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.462 [2024-10-08 18:45:31.931227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.462 [2024-10-08 18:45:31.931242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.462 [2024-10-08 18:45:31.931259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.462 [2024-10-08 18:45:31.931273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.462 [2024-10-08 18:45:31.931295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.462 [2024-10-08 18:45:31.931310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.462 [2024-10-08 18:45:31.931341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.462 [2024-10-08 18:45:31.931355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.462 [2024-10-08 18:45:31.931370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.462 [2024-10-08 18:45:31.931384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.462 [2024-10-08 18:45:31.931399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.462 [2024-10-08 18:45:31.931413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.462 [2024-10-08 18:45:31.931429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.462 [2024-10-08 18:45:31.931442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.462 [2024-10-08 18:45:31.931458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.462 [2024-10-08 18:45:31.931482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.462 [2024-10-08 18:45:31.931497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.462 [2024-10-08 18:45:31.931511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.462 [2024-10-08 18:45:31.931527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.462 [2024-10-08 18:45:31.931540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.462 [2024-10-08 18:45:31.931556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.462 [2024-10-08 18:45:31.931570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.462 [2024-10-08 18:45:31.931585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.462 [2024-10-08 18:45:31.931598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.462 [2024-10-08 18:45:31.931614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.462 [2024-10-08 18:45:31.931629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.462 [2024-10-08 18:45:31.931672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.462 [2024-10-08 18:45:31.931700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.462 [2024-10-08 18:45:31.931716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.462 [2024-10-08 18:45:31.931736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.462 [2024-10-08 18:45:31.931753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.462 [2024-10-08 18:45:31.931767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.462 [2024-10-08 18:45:31.931783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.462 [2024-10-08 18:45:31.931797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.462 [2024-10-08 18:45:31.931813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.462 [2024-10-08 18:45:31.931828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.462 [2024-10-08 18:45:31.931843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.462 [2024-10-08 18:45:31.931857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.462 [2024-10-08 18:45:31.931873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.462 [2024-10-08 18:45:31.931887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.462 [2024-10-08 18:45:31.931902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.462 [2024-10-08 18:45:31.931916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.462 [2024-10-08 18:45:31.931931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.462 [2024-10-08 18:45:31.931958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.462 [2024-10-08 18:45:31.931975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.462 [2024-10-08 18:45:31.931989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.462 [2024-10-08 18:45:31.932004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.462 [2024-10-08 18:45:31.932026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.462 [2024-10-08 18:45:31.932041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:03.462 [2024-10-08 18:45:31.932054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.462 [2024-10-08 18:45:31.932071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.462 [2024-10-08 18:45:31.932085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.462 [2024-10-08 18:45:31.932101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.462 [2024-10-08 18:45:31.932115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.462 [2024-10-08 18:45:31.932138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.462 [2024-10-08 18:45:31.932153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.462 [2024-10-08 18:45:31.932169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.462 [2024-10-08 18:45:31.932182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.462 [2024-10-08 18:45:31.932198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.462 [2024-10-08 18:45:31.932211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.462 [2024-10-08 18:45:31.932227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.463 [2024-10-08 18:45:31.932241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.463 [2024-10-08 18:45:31.932256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.463 [2024-10-08 18:45:31.932269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.463 [2024-10-08 18:45:31.932284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.463 [2024-10-08 18:45:31.932298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.463 [2024-10-08 18:45:31.932313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.463 [2024-10-08 18:45:31.932327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.463 [2024-10-08 18:45:31.932343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:35:03.463 [2024-10-08 18:45:31.932356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.463 [2024-10-08 18:45:31.932372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.463 [2024-10-08 18:45:31.932386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.463 [2024-10-08 18:45:31.932401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.463 [2024-10-08 18:45:31.932414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.463 [2024-10-08 18:45:31.932429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.463 [2024-10-08 18:45:31.932442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.463 [2024-10-08 18:45:31.932457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.463 [2024-10-08 18:45:31.932470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.463 [2024-10-08 18:45:31.932485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.463 [2024-10-08 18:45:31.932502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.463 [2024-10-08 18:45:31.932518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.463 [2024-10-08 18:45:31.932532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.463 [2024-10-08 18:45:31.932546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.463 [2024-10-08 18:45:31.932560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.463 [2024-10-08 18:45:31.932574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.463 [2024-10-08 18:45:31.932588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.463 [2024-10-08 18:45:31.932603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.463 [2024-10-08 18:45:31.932616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.463 [2024-10-08 18:45:31.932646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:03.463 [2024-10-08 18:45:31.932669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.463 [2024-10-08 18:45:31.932685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.463 [2024-10-08 18:45:31.932708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.463 [2024-10-08 18:45:31.932724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.463 [2024-10-08 18:45:31.932738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.463 [2024-10-08 18:45:31.932753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.463 [2024-10-08 18:45:31.932767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.463 [2024-10-08 18:45:31.932784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.463 [2024-10-08 18:45:31.932798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.463 [2024-10-08 18:45:31.932814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.463 [2024-10-08 18:45:31.932828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.463 [2024-10-08 18:45:31.932843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.463 [2024-10-08 18:45:31.932857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.463 [2024-10-08 18:45:31.932873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.463 [2024-10-08 18:45:31.932887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.463 [2024-10-08 18:45:31.932916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.463 [2024-10-08 18:45:31.932930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.463 [2024-10-08 18:45:31.932960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:03.463 [2024-10-08 18:45:31.932975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.463 [2024-10-08 18:45:31.933060] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xaff1f0 was disconnected and freed. reset controller. 
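The long run of near-identical completions above is the expected failure signature for this part of host_management: the target side tears down the I/O submission queue while bdevperf still has writes outstanding, so every queued WRITE completes with status (00/08), that is status code type 0x0 (generic command status) and status code 0x08, Command Aborted due to SQ Deletion, after which the qpair is freed and the controller reset path runs (see the notices just above and below). A quick way to confirm that only this status shows up is to grep a saved copy of this console output; the file name here is only an assumption:

# count aborted-by-SQ-deletion completions versus all printed completions (illustrative)
grep -c 'ABORTED - SQ DELETION (00/08)' console.log
grep -c 'spdk_nvme_print_completion' console.log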
00:35:03.463 [2024-10-08 18:45:31.933108] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8e6100 (9): Bad file descriptor 00:35:03.463 [2024-10-08 18:45:31.934211] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:35:03.463 task offset: 81920 on job bdev=Nvme0n1 fails 00:35:03.463 00:35:03.463 Latency(us) 00:35:03.463 [2024-10-08T16:45:32.000Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:03.463 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:35:03.463 Job: Nvme0n1 ended in about 0.42 seconds with error 00:35:03.463 Verification LBA range: start 0x0 length 0x400 00:35:03.463 Nvme0n1 : 0.42 1531.37 95.71 153.14 0.00 36956.97 2548.62 34175.81 00:35:03.463 [2024-10-08T16:45:32.000Z] =================================================================================================================== 00:35:03.463 [2024-10-08T16:45:32.000Z] Total : 1531.37 95.71 153.14 0.00 36956.97 2548.62 34175.81 00:35:03.463 [2024-10-08 18:45:31.937042] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:35:03.463 [2024-10-08 18:45:31.988048] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:35:04.403 18:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1359366 00:35:04.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1359366) - No such process 00:35:04.403 18:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:35:04.403 18:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:35:04.403 18:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:35:04.403 18:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:35:04.403 18:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:35:04.403 18:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:35:04.403 18:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:35:04.403 18:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:35:04.403 { 00:35:04.403 "params": { 00:35:04.403 "name": "Nvme$subsystem", 00:35:04.403 "trtype": "$TEST_TRANSPORT", 00:35:04.403 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:04.403 "adrfam": "ipv4", 00:35:04.403 "trsvcid": "$NVMF_PORT", 00:35:04.403 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:04.403 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:04.403 "hdgst": ${hdgst:-false}, 00:35:04.403 "ddgst": ${ddgst:-false} 00:35:04.403 }, 00:35:04.403 "method": "bdev_nvme_attach_controller" 00:35:04.403 } 00:35:04.403 EOF 00:35:04.403 )") 00:35:04.403 18:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:35:04.664 18:45:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:35:04.664 18:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:35:04.664 18:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:35:04.664 "params": { 00:35:04.664 "name": "Nvme0", 00:35:04.664 "trtype": "tcp", 00:35:04.664 "traddr": "10.0.0.2", 00:35:04.664 "adrfam": "ipv4", 00:35:04.664 "trsvcid": "4420", 00:35:04.664 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:04.664 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:04.664 "hdgst": false, 00:35:04.664 "ddgst": false 00:35:04.664 }, 00:35:04.664 "method": "bdev_nvme_attach_controller" 00:35:04.664 }' 00:35:04.664 [2024-10-08 18:45:33.000641] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:35:04.664 [2024-10-08 18:45:33.000762] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1359525 ] 00:35:04.664 [2024-10-08 18:45:33.081102] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:04.664 [2024-10-08 18:45:33.195761] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:35:05.234 Running I/O for 1 seconds... 00:35:06.172 1600.00 IOPS, 100.00 MiB/s 00:35:06.172 Latency(us) 00:35:06.172 [2024-10-08T16:45:34.709Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:06.172 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:35:06.172 Verification LBA range: start 0x0 length 0x400 00:35:06.172 Nvme0n1 : 1.03 1612.01 100.75 0.00 0.00 38939.81 5461.33 40777.96 00:35:06.172 [2024-10-08T16:45:34.709Z] =================================================================================================================== 00:35:06.172 [2024-10-08T16:45:34.709Z] Total : 1612.01 100.75 0.00 0.00 38939.81 5461.33 40777.96 00:35:06.431 18:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:35:06.431 18:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:35:06.431 18:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:35:06.431 18:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:35:06.431 18:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:35:06.431 18:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup 00:35:06.431 18:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:35:06.431 18:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:06.431 18:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:35:06.431 18:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 
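The heredoc assembled above by gen_nvmf_target_json is what bdevperf reads through --json /dev/fd/62: a single bdev_nvme_attach_controller entry pointing at the cnode0 subsystem on 10.0.0.2:4420. The same step can be reproduced by hand roughly as sketched below; the outer "subsystems"/"bdev" wrapper and the temporary file name are assumptions (only the inner entry is printed verbatim in the trace), and the bdevperf path is shortened from the full Jenkins workspace path:

# write the attach-controller config to a file and hand it to bdevperf (illustrative)
cat > /tmp/nvme0.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
./build/examples/bdevperf --json /tmp/nvme0.json -q 64 -o 65536 -w verify -t 1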
00:35:06.431 18:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:06.431 rmmod nvme_tcp 00:35:06.431 rmmod nvme_fabrics 00:35:06.431 rmmod nvme_keyring 00:35:06.431 18:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:06.431 18:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:35:06.431 18:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:35:06.431 18:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 1359204 ']' 00:35:06.431 18:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 1359204 00:35:06.431 18:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 1359204 ']' 00:35:06.431 18:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 1359204 00:35:06.431 18:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:35:06.431 18:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:06.431 18:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1359204 00:35:06.431 18:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:06.431 18:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:06.431 18:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1359204' 00:35:06.431 killing process with pid 1359204 00:35:06.431 18:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 1359204 00:35:06.431 18:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 1359204 00:35:07.001 [2024-10-08 18:45:35.332777] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:35:07.001 18:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:35:07.002 18:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:35:07.002 18:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:35:07.002 18:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:35:07.002 18:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save 00:35:07.002 18:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:35:07.002 18:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore 00:35:07.002 18:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:07.002 18:45:35 
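Worth noting in the teardown above: killprocess does not simply signal the pid it saved. It first checks the process is still alive (kill -0), then looks up the command name with ps --no-headers -o comm= and refuses to proceed if that name turns out to be sudo, and only then kills and waits so the exit status is reaped. A minimal standalone sketch of the same idea (the function name is made up, not the test's own helper):

# sketch of the check-before-kill pattern used by autotest_common.sh above
kill_checked() {
    local pid=$1 name
    kill -0 "$pid" 2>/dev/null || return 0      # already gone, nothing to do
    name=$(ps --no-headers -o comm= "$pid")     # never signal a sudo helper by mistake
    [ "$name" = sudo ] && return 1
    kill "$pid"
    wait "$pid" 2>/dev/null || true             # reap it if it is our child
}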
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:07.002 18:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:07.002 18:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:07.002 18:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:08.947 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:08.947 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:35:08.947 00:35:08.947 real 0m10.271s 00:35:08.947 user 0m19.489s 00:35:08.947 sys 0m4.582s 00:35:08.947 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:08.947 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:08.947 ************************************ 00:35:08.947 END TEST nvmf_host_management 00:35:08.947 ************************************ 00:35:09.208 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:35:09.208 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:35:09.208 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:09.208 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:09.208 ************************************ 00:35:09.208 START TEST nvmf_lvol 00:35:09.208 ************************************ 00:35:09.208 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:35:09.208 * Looking for test storage... 
00:35:09.208 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:09.208 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:35:09.208 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:35:09.208 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:35:09.208 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:35:09.208 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:09.208 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:09.208 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:09.208 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:35:09.208 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:35:09.208 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:35:09.208 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:35:09.208 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:35:09.208 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:35:09.208 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:35:09.208 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:09.208 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:35:09.208 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:35:09.208 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:09.208 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:09.208 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:35:09.208 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:35:09.208 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:09.208 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:35:09.208 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:35:09.469 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:35:09.469 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:35:09.469 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:09.469 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:35:09.469 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:35:09.469 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:09.469 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:09.469 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:35:09.469 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:09.469 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:35:09.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:09.469 --rc genhtml_branch_coverage=1 00:35:09.469 --rc genhtml_function_coverage=1 00:35:09.469 --rc genhtml_legend=1 00:35:09.469 --rc geninfo_all_blocks=1 00:35:09.469 --rc geninfo_unexecuted_blocks=1 00:35:09.469 00:35:09.469 ' 00:35:09.469 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:35:09.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:09.469 --rc genhtml_branch_coverage=1 00:35:09.469 --rc genhtml_function_coverage=1 00:35:09.469 --rc genhtml_legend=1 00:35:09.469 --rc geninfo_all_blocks=1 00:35:09.469 --rc geninfo_unexecuted_blocks=1 00:35:09.469 00:35:09.469 ' 00:35:09.469 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:35:09.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:09.469 --rc genhtml_branch_coverage=1 00:35:09.469 --rc genhtml_function_coverage=1 00:35:09.469 --rc genhtml_legend=1 00:35:09.469 --rc geninfo_all_blocks=1 00:35:09.469 --rc geninfo_unexecuted_blocks=1 00:35:09.469 00:35:09.469 ' 00:35:09.469 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:35:09.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:09.469 --rc genhtml_branch_coverage=1 00:35:09.469 --rc genhtml_function_coverage=1 00:35:09.469 --rc genhtml_legend=1 00:35:09.469 --rc geninfo_all_blocks=1 00:35:09.469 --rc geninfo_unexecuted_blocks=1 00:35:09.469 00:35:09.469 ' 00:35:09.469 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:09.469 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:35:09.469 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:09.469 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:09.469 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:09.469 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:09.469 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:09.469 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:09.469 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:09.469 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:09.469 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:09.469 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:09.469 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:35:09.469 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:35:09.469 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:09.469 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:09.469 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:09.469 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:09.469 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:09.469 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:35:09.469 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:09.469 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:09.469 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:09.470 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:09.470 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:09.470 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:09.470 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:35:09.470 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:09.470 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:35:09.470 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:09.470 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:09.470 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:09.470 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:09.470 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:09.470 18:45:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:09.470 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:09.470 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:09.470 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:09.470 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:09.470 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:09.470 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:09.470 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:35:09.470 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:35:09.470 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:09.470 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:35:09.470 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:35:09.470 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:09.470 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:35:09.470 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:35:09.470 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:35:09.470 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:09.470 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:09.470 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:09.470 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:35:09.470 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:35:09.470 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:35:09.470 18:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:35:12.002 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:12.002 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:35:12.002 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:12.002 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:12.002 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:12.002 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:35:12.002 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:12.002 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:12.003 18:45:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:35:12.003 Found 0000:84:00.0 (0x8086 - 0x159b) 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:35:12.003 Found 0000:84:00.1 (0x8086 - 0x159b) 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:35:12.003 Found net devices under 0000:84:00.0: cvl_0_0 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@408 -- # for 
pci in "${pci_devs[@]}" 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:35:12.003 Found net devices under 0000:84:00.1: cvl_0_1 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:12.003 
18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:12.003 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:12.003 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.312 ms 00:35:12.003 00:35:12.003 --- 10.0.0.2 ping statistics --- 00:35:12.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:12.003 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:12.003 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:12.003 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:35:12.003 00:35:12.003 --- 10.0.0.1 ping statistics --- 00:35:12.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:12.003 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@448 -- # return 0 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:35:12.003 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:12.004 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:35:12.004 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:35:12.004 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:35:12.004 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:35:12.004 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:12.004 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:35:12.004 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=1361859 00:35:12.004 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:35:12.004 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 1361859 00:35:12.004 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 1361859 ']' 00:35:12.004 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:12.004 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:12.004 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:12.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:12.004 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:12.004 18:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:35:12.264 [2024-10-08 18:45:40.559004] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
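All of the target-side networking for this test lives in a dedicated namespace, as set up above: stale addresses are flushed, cvl_0_0 (the first e810 port) is moved into cvl_0_0_ns_spdk and becomes the target at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, an iptables rule opens TCP port 4420 on the initiator interface, and connectivity is verified with a ping in each direction before the target application is started inside the namespace. Condensed into a standalone sketch (interface names are the ones detected in this run and will differ on other machines):

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> root namespace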
00:35:12.264 [2024-10-08 18:45:40.560292] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:35:12.264 [2024-10-08 18:45:40.560361] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:12.264 [2024-10-08 18:45:40.676400] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:12.523 [2024-10-08 18:45:40.866016] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:12.523 [2024-10-08 18:45:40.866137] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:12.523 [2024-10-08 18:45:40.866176] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:12.523 [2024-10-08 18:45:40.866208] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:12.523 [2024-10-08 18:45:40.866235] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:12.523 [2024-10-08 18:45:40.868219] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:35:12.523 [2024-10-08 18:45:40.868291] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:35:12.523 [2024-10-08 18:45:40.868301] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:35:12.523 [2024-10-08 18:45:41.056440] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:12.523 [2024-10-08 18:45:41.056997] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:12.523 [2024-10-08 18:45:41.057010] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:12.523 [2024-10-08 18:45:41.057679] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
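The target above is launched inside that namespace with --interrupt-mode and -m 0x7, which is why DPDK reports three available cores and a reactor plus nvmf poll group comes up on each of cores 0, 1 and 2, with every spdk_thread created directly in interrupt mode. The -m argument is just a hex bitmap of CPU cores, and decoding one is plain shell arithmetic (a throwaway illustration, not part of the test):

# print the cores selected by an SPDK/DPDK core mask such as -m 0x7 or -c 0x18
mask=0x7
for core in $(seq 0 63); do
    (( (mask >> core) & 1 )) && echo "core $core selected"
done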
00:35:12.782 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:12.782 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:35:12.782 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:35:12.783 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:12.783 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:35:12.783 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:12.783 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:35:13.044 [2024-10-08 18:45:41.517809] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:13.044 18:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:13.984 18:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:35:13.984 18:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:14.553 18:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:35:14.553 18:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:35:15.492 18:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:35:15.750 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=a3ff376d-5aaf-45df-94bd-3275948d39f6 00:35:15.750 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a3ff376d-5aaf-45df-94bd-3275948d39f6 lvol 20 00:35:16.009 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=bb21719c-cdec-4af6-a2ec-4235abaa86a5 00:35:16.009 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:35:16.574 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bb21719c-cdec-4af6-a2ec-4235abaa86a5 00:35:17.142 18:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:17.400 [2024-10-08 18:45:45.729906] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:35:17.400 18:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:17.659 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1362484 00:35:17.659 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:35:17.659 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:35:18.592 18:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot bb21719c-cdec-4af6-a2ec-4235abaa86a5 MY_SNAPSHOT 00:35:19.158 18:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=23a17f35-bdfa-4901-aeeb-1ed9d2fca4ce 00:35:19.158 18:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize bb21719c-cdec-4af6-a2ec-4235abaa86a5 30 00:35:19.416 18:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 23a17f35-bdfa-4901-aeeb-1ed9d2fca4ce MY_CLONE 00:35:20.048 18:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=1ebb34a9-348a-49d5-9fbf-0f2f4817ee56 00:35:20.048 18:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 1ebb34a9-348a-49d5-9fbf-0f2f4817ee56 00:35:20.622 18:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1362484 00:35:28.732 Initializing NVMe Controllers 00:35:28.733 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:35:28.733 Controller IO queue size 128, less than required. 00:35:28.733 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:35:28.733 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:35:28.733 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:35:28.733 Initialization complete. Launching workers. 
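Everything the nvmf_lvol test exercises above is driven through rpc.py against that namespaced target: two 64 MiB malloc bdevs are striped into a raid0, the raid becomes an lvstore, a small lvol (created at size 20, later grown to 30) is exported as namespace 1 of nqn.2016-06.io.spdk:cnode0, and while spdk_nvme_perf issues 4 KiB random writes at queue depth 128 for 10 seconds from cores 3 and 4 (-c 0x18), the script snapshots, resizes, clones and inflates the volume; the run's results follow below. Stripped of the xtrace noise, the RPC sequence is roughly the following sketch (the rpc.py path is shortened and the variable names are illustrative, but each command line mirrors one shown in the trace):

rpc=./scripts/rpc.py                                   # shortened from the Jenkins workspace path
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512                         # Malloc0
$rpc bdev_malloc_create 64 512                         # Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)         # returns the lvstore UUID
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)        # returns the lvol bdev name
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)    # taken while perf I/O is running
$rpc bdev_lvol_resize "$lvol" 30
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"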
00:35:28.733 ======================================================== 00:35:28.733 Latency(us) 00:35:28.733 Device Information : IOPS MiB/s Average min max 00:35:28.733 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10508.62 41.05 12185.85 6367.16 65539.41 00:35:28.733 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10375.23 40.53 12340.21 4561.39 66480.12 00:35:28.733 ======================================================== 00:35:28.733 Total : 20883.85 81.58 12262.54 4561.39 66480.12 00:35:28.733 00:35:28.733 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:28.733 18:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete bb21719c-cdec-4af6-a2ec-4235abaa86a5 00:35:29.298 18:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a3ff376d-5aaf-45df-94bd-3275948d39f6 00:35:29.557 18:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:35:29.557 18:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:35:29.557 18:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:35:29.557 18:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup 00:35:29.557 18:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:35:29.557 18:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:29.557 18:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:35:29.557 18:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:29.557 18:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:29.557 rmmod nvme_tcp 00:35:29.557 rmmod nvme_fabrics 00:35:29.557 rmmod nvme_keyring 00:35:29.557 18:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:29.557 18:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:35:29.558 18:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:35:29.558 18:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 1361859 ']' 00:35:29.558 18:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 1361859 00:35:29.558 18:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 1361859 ']' 00:35:29.558 18:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 1361859 00:35:29.558 18:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:35:29.558 18:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:29.558 18:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1361859 00:35:29.558 18:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:29.558 18:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:29.558 18:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1361859' 00:35:29.558 killing process with pid 1361859 00:35:29.558 18:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 1361859 00:35:29.558 18:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 1361859 00:35:30.126 18:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:35:30.126 18:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:35:30.126 18:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:35:30.126 18:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:35:30.126 18:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:35:30.126 18:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:35:30.126 18:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:35:30.126 18:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:30.126 18:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:30.126 18:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:30.126 18:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:30.126 18:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:32.663 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:32.663 00:35:32.663 real 0m23.076s 00:35:32.663 user 1m2.308s 00:35:32.663 sys 0m9.154s 00:35:32.663 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:32.663 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:35:32.663 ************************************ 00:35:32.663 END TEST nvmf_lvol 00:35:32.663 ************************************ 00:35:32.663 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:35:32.663 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:35:32.663 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:32.663 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:32.663 ************************************ 00:35:32.663 START TEST nvmf_lvs_grow 00:35:32.663 
************************************ 00:35:32.663 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:35:32.663 * Looking for test storage... 00:35:32.663 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:32.663 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:35:32.663 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:35:32.663 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:35:32.663 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:35:32.663 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:32.663 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:32.663 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:32.663 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:35:32.663 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:35:32.663 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:35:32.663 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:35:32.663 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:35:32.663 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:35:32.663 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:35:32.663 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:35:32.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:32.664 --rc genhtml_branch_coverage=1 00:35:32.664 --rc genhtml_function_coverage=1 00:35:32.664 --rc genhtml_legend=1 00:35:32.664 --rc geninfo_all_blocks=1 00:35:32.664 --rc geninfo_unexecuted_blocks=1 00:35:32.664 00:35:32.664 ' 00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:35:32.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:32.664 --rc genhtml_branch_coverage=1 00:35:32.664 --rc genhtml_function_coverage=1 00:35:32.664 --rc genhtml_legend=1 00:35:32.664 --rc geninfo_all_blocks=1 00:35:32.664 --rc geninfo_unexecuted_blocks=1 00:35:32.664 00:35:32.664 ' 00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:35:32.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:32.664 --rc genhtml_branch_coverage=1 00:35:32.664 --rc genhtml_function_coverage=1 00:35:32.664 --rc genhtml_legend=1 00:35:32.664 --rc geninfo_all_blocks=1 00:35:32.664 --rc geninfo_unexecuted_blocks=1 00:35:32.664 00:35:32.664 ' 00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:35:32.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:32.664 --rc genhtml_branch_coverage=1 00:35:32.664 --rc genhtml_function_coverage=1 00:35:32.664 --rc genhtml_legend=1 00:35:32.664 --rc geninfo_all_blocks=1 00:35:32.664 --rc geninfo_unexecuted_blocks=1 00:35:32.664 00:35:32.664 ' 00:35:32.664 18:46:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
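For readability: the build_nvmf_app_args steps traced here (together with the interrupt-mode append just below) simply build up one argument array for the target binary. A minimal bash sketch of that assembly, with the binary path abbreviated and the base array entry assumed rather than taken from the trace:

    NVMF_APP=(".../spdk/build/bin/nvmf_tgt")        # assumed base entry; only the appends appear in the trace
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)     # shared-memory id and 0xFFFF tracepoint group mask
    NVMF_APP+=("${NO_HUGE[@]}")                     # empty in this run, judging by the launch command later
    NVMF_APP+=(--interrupt-mode)                    # this job runs the target in interrupt mode

nvmf_tcp_init later prepends the netns wrapper via NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}"), which is why the eventual launch further down reads ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1.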
00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:35:32.664 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:35:32.665 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:35:32.665 18:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:35:35.205 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:35.205 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:35:35.205 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:35.205 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:35.205 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:35.205 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:35.205 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:35.205 18:46:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:35:35.205 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:35.205 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:35:35.206 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:35:35.206 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:35:35.206 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:35:35.206 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:35:35.206 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:35:35.206 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:35.206 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:35.206 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:35.206 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:35.206 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:35.206 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:35.206 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:35.206 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:35.206 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:35.206 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:35.206 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:35.206 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:35.206 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:35.206 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:35.206 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:35.206 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:35.206 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:35.206 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:35.206 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
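The gather_supported_nvmf_pci_devs appends traced above are easier to read as the lookup table they build: per-family lists of vendor:device IDs, of which the e810 list is the one this run keeps via pci_devs=("${e810[@]}"). The same arrays, condensed as a sketch with the IDs copied from the trace:

    intel=0x8086 mellanox=0x15b3
    e810=("$intel:0x1592" "$intel:0x159b")                          # 0x159b is what turns up below at 0000:84:00.0/.1
    x722=("$intel:0x37d2")
    mlx=("$mellanox:0xa2dc" "$mellanox:0x1021" "$mellanox:0xa2d6" \
         "$mellanox:0x101d" "$mellanox:0x101b" "$mellanox:0x1017" \
         "$mellanox:0x1019" "$mellanox:0x1015" "$mellanox:0x1013")
    pci_devs=("${e810[@]}")                                         # e810 family selected; transport is tcp, not rdma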
00:35:35.206 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:35:35.206 Found 0000:84:00.0 (0x8086 - 0x159b) 00:35:35.206 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:35.206 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:35.206 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:35.206 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:35.206 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:35.206 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:35.206 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:35:35.206 Found 0000:84:00.1 (0x8086 - 0x159b) 00:35:35.206 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:35.206 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:35.206 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:35.206 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:35.206 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:35.206 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:35.206 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:35.206 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:35.206 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:35.206 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:35.206 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:35.206 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:35.206 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:35.206 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:35.206 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:35.206 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:35:35.206 Found net devices under 0000:84:00.0: cvl_0_0 00:35:35.206 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:35.206 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for 
pci in "${pci_devs[@]}" 00:35:35.206 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:35.206 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:35.206 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:35.206 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:35.206 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:35.467 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:35.467 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:35:35.467 Found net devices under 0000:84:00.1: cvl_0_1 00:35:35.467 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:35.467 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:35:35.467 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # is_hw=yes 00:35:35.467 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:35:35.467 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:35:35.467 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:35:35.467 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:35.467 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:35.467 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:35.467 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:35.467 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:35.467 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:35.467 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:35.467 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:35.467 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:35.467 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:35.467 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:35.467 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:35.467 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:35.467 18:46:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:35.467 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:35.467 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:35.467 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:35.467 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:35.467 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:35.467 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:35.467 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:35.467 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:35.467 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:35.467 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:35.467 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:35:35.467 00:35:35.467 --- 10.0.0.2 ping statistics --- 00:35:35.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:35.467 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:35:35.467 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:35.467 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:35.467 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:35:35.467 00:35:35.467 --- 10.0.0.1 ping statistics --- 00:35:35.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:35.467 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:35:35.467 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:35.467 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0 00:35:35.467 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:35:35.467 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:35.467 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:35:35.467 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:35:35.467 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:35.467 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:35:35.467 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:35:35.467 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:35:35.467 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:35:35.467 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:35.467 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:35:35.467 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=1365943 00:35:35.467 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 1365943 00:35:35.467 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 1365943 ']' 00:35:35.467 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:35.467 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:35:35.467 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:35.467 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:35.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:35.467 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:35.467 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:35:35.727 [2024-10-08 18:46:04.033740] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
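Since the nvmf_tcp_init steps above are interleaved with other trace output, here is the same network bring-up condensed to the commands this job actually runs (interface names as discovered on this host, cvl_0_0 on the target side and cvl_0_1 on the initiator side; the iptables comment string is shortened):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:...'
    ping -c 1 10.0.0.2                                                  # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator sanity check

The nvmf_tgt instance started just above is wrapped in ip netns exec cvl_0_0_ns_spdk for the same reason: its listener on 10.0.0.2:4420 lives inside that namespace, while the perf and bdevperf initiators later in this log connect from the default namespace over cvl_0_1.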
00:35:35.727 [2024-10-08 18:46:04.036507] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:35:35.727 [2024-10-08 18:46:04.036630] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:35.727 [2024-10-08 18:46:04.199433] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:35.987 [2024-10-08 18:46:04.420721] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:35.987 [2024-10-08 18:46:04.420837] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:35.987 [2024-10-08 18:46:04.420875] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:35.988 [2024-10-08 18:46:04.420904] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:35.988 [2024-10-08 18:46:04.420931] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:35.988 [2024-10-08 18:46:04.422238] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:35:36.246 [2024-10-08 18:46:04.596730] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:36.246 [2024-10-08 18:46:04.597400] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:36.246 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:36.246 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:35:36.246 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:35:36.246 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:36.246 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:35:36.246 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:36.246 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:35:36.815 [2024-10-08 18:46:05.315519] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:37.075 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:35:37.075 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:37.075 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:37.075 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:35:37.075 ************************************ 00:35:37.075 START TEST lvs_grow_clean 00:35:37.075 ************************************ 00:35:37.075 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # 
lvs_grow 00:35:37.075 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:35:37.075 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:35:37.075 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:35:37.075 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:35:37.075 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:35:37.075 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:35:37.075 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:35:37.075 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:35:37.075 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:35:37.333 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:35:37.333 18:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:35:37.900 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=6dea63e9-dd88-4d68-a3f9-fe20c2083e09 00:35:37.900 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:35:37.900 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6dea63e9-dd88-4d68-a3f9-fe20c2083e09 00:35:38.466 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:35:38.466 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:35:38.466 18:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6dea63e9-dd88-4d68-a3f9-fe20c2083e09 lvol 150 00:35:39.031 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=19c84537-ee19-4309-b3eb-475d905931a7 00:35:39.031 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:35:39.031 18:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:35:39.970 [2024-10-08 18:46:08.167134] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:35:39.970 [2024-10-08 18:46:08.167234] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:35:39.970 true 00:35:39.970 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6dea63e9-dd88-4d68-a3f9-fe20c2083e09 00:35:39.970 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:35:40.231 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:35:40.231 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:35:40.488 18:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 19c84537-ee19-4309-b3eb-475d905931a7 00:35:41.053 18:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:41.618 [2024-10-08 18:46:10.147815] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:41.877 18:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:42.447 18:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1366764 00:35:42.447 18:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:35:42.447 18:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:42.447 18:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1366764 /var/tmp/bdevperf.sock 00:35:42.447 18:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 1366764 ']' 00:35:42.447 18:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:35:42.447 18:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:42.447 18:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:42.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:42.447 18:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:42.447 18:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:35:42.447 [2024-10-08 18:46:10.953368] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:35:42.447 [2024-10-08 18:46:10.953546] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1366764 ] 00:35:42.705 [2024-10-08 18:46:11.104456] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:42.963 [2024-10-08 18:46:11.330748] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:35:43.220 18:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:43.220 18:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:35:43.220 18:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:35:43.783 Nvme0n1 00:35:43.783 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:35:44.347 [ 00:35:44.347 { 00:35:44.347 "name": "Nvme0n1", 00:35:44.347 "aliases": [ 00:35:44.347 "19c84537-ee19-4309-b3eb-475d905931a7" 00:35:44.347 ], 00:35:44.347 "product_name": "NVMe disk", 00:35:44.347 "block_size": 4096, 00:35:44.347 "num_blocks": 38912, 00:35:44.347 "uuid": "19c84537-ee19-4309-b3eb-475d905931a7", 00:35:44.347 "numa_id": 1, 00:35:44.347 "assigned_rate_limits": { 00:35:44.347 "rw_ios_per_sec": 0, 00:35:44.347 "rw_mbytes_per_sec": 0, 00:35:44.347 "r_mbytes_per_sec": 0, 00:35:44.347 "w_mbytes_per_sec": 0 00:35:44.347 }, 00:35:44.347 "claimed": false, 00:35:44.347 "zoned": false, 00:35:44.347 "supported_io_types": { 00:35:44.347 "read": true, 00:35:44.347 "write": true, 00:35:44.347 "unmap": true, 00:35:44.347 "flush": true, 00:35:44.347 "reset": true, 00:35:44.347 "nvme_admin": true, 00:35:44.347 "nvme_io": true, 00:35:44.347 "nvme_io_md": false, 00:35:44.347 "write_zeroes": true, 00:35:44.347 "zcopy": false, 00:35:44.347 "get_zone_info": false, 00:35:44.347 "zone_management": false, 00:35:44.347 "zone_append": false, 00:35:44.347 "compare": true, 00:35:44.347 "compare_and_write": true, 00:35:44.347 "abort": true, 00:35:44.347 "seek_hole": false, 00:35:44.347 "seek_data": false, 00:35:44.347 "copy": true, 
00:35:44.347 "nvme_iov_md": false 00:35:44.347 }, 00:35:44.347 "memory_domains": [ 00:35:44.347 { 00:35:44.347 "dma_device_id": "system", 00:35:44.347 "dma_device_type": 1 00:35:44.347 } 00:35:44.347 ], 00:35:44.347 "driver_specific": { 00:35:44.347 "nvme": [ 00:35:44.347 { 00:35:44.347 "trid": { 00:35:44.348 "trtype": "TCP", 00:35:44.348 "adrfam": "IPv4", 00:35:44.348 "traddr": "10.0.0.2", 00:35:44.348 "trsvcid": "4420", 00:35:44.348 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:35:44.348 }, 00:35:44.348 "ctrlr_data": { 00:35:44.348 "cntlid": 1, 00:35:44.348 "vendor_id": "0x8086", 00:35:44.348 "model_number": "SPDK bdev Controller", 00:35:44.348 "serial_number": "SPDK0", 00:35:44.348 "firmware_revision": "25.01", 00:35:44.348 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:44.348 "oacs": { 00:35:44.348 "security": 0, 00:35:44.348 "format": 0, 00:35:44.348 "firmware": 0, 00:35:44.348 "ns_manage": 0 00:35:44.348 }, 00:35:44.348 "multi_ctrlr": true, 00:35:44.348 "ana_reporting": false 00:35:44.348 }, 00:35:44.348 "vs": { 00:35:44.348 "nvme_version": "1.3" 00:35:44.348 }, 00:35:44.348 "ns_data": { 00:35:44.348 "id": 1, 00:35:44.348 "can_share": true 00:35:44.348 } 00:35:44.348 } 00:35:44.348 ], 00:35:44.348 "mp_policy": "active_passive" 00:35:44.348 } 00:35:44.348 } 00:35:44.348 ] 00:35:44.348 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1366913 00:35:44.348 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:35:44.348 18:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:35:44.606 Running I/O for 10 seconds... 
00:35:45.542 Latency(us) 00:35:45.542 [2024-10-08T16:46:14.079Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:45.542 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:45.542 Nvme0n1 : 1.00 6033.00 23.57 0.00 0.00 0.00 0.00 0.00 00:35:45.542 [2024-10-08T16:46:14.079Z] =================================================================================================================== 00:35:45.542 [2024-10-08T16:46:14.079Z] Total : 6033.00 23.57 0.00 0.00 0.00 0.00 0.00 00:35:45.542 00:35:46.479 18:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6dea63e9-dd88-4d68-a3f9-fe20c2083e09 00:35:46.479 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:46.479 Nvme0n1 : 2.00 6287.00 24.56 0.00 0.00 0.00 0.00 0.00 00:35:46.479 [2024-10-08T16:46:15.016Z] =================================================================================================================== 00:35:46.479 [2024-10-08T16:46:15.016Z] Total : 6287.00 24.56 0.00 0.00 0.00 0.00 0.00 00:35:46.479 00:35:47.046 true 00:35:47.046 18:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6dea63e9-dd88-4d68-a3f9-fe20c2083e09 00:35:47.046 18:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:35:47.304 18:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:35:47.304 18:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:35:47.304 18:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1366913 00:35:47.563 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:47.563 Nvme0n1 : 3.00 6286.67 24.56 0.00 0.00 0.00 0.00 0.00 00:35:47.563 [2024-10-08T16:46:16.100Z] =================================================================================================================== 00:35:47.563 [2024-10-08T16:46:16.100Z] Total : 6286.67 24.56 0.00 0.00 0.00 0.00 0.00 00:35:47.563 00:35:48.500 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:48.500 Nvme0n1 : 4.00 6270.75 24.50 0.00 0.00 0.00 0.00 0.00 00:35:48.500 [2024-10-08T16:46:17.037Z] =================================================================================================================== 00:35:48.500 [2024-10-08T16:46:17.037Z] Total : 6270.75 24.50 0.00 0.00 0.00 0.00 0.00 00:35:48.500 00:35:49.436 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:49.436 Nvme0n1 : 5.00 6413.60 25.05 0.00 0.00 0.00 0.00 0.00 00:35:49.436 [2024-10-08T16:46:17.973Z] =================================================================================================================== 00:35:49.436 [2024-10-08T16:46:17.973Z] Total : 6413.60 25.05 0.00 0.00 0.00 0.00 0.00 00:35:49.436 00:35:50.817 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:50.817 Nvme0n1 : 6.00 6403.00 25.01 0.00 0.00 0.00 0.00 0.00 00:35:50.818 [2024-10-08T16:46:19.355Z] 
=================================================================================================================== 00:35:50.818 [2024-10-08T16:46:19.355Z] Total : 6403.00 25.01 0.00 0.00 0.00 0.00 0.00 00:35:50.818 00:35:51.754 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:51.754 Nvme0n1 : 7.00 6395.43 24.98 0.00 0.00 0.00 0.00 0.00 00:35:51.754 [2024-10-08T16:46:20.291Z] =================================================================================================================== 00:35:51.754 [2024-10-08T16:46:20.291Z] Total : 6395.43 24.98 0.00 0.00 0.00 0.00 0.00 00:35:51.754 00:35:52.691 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:52.691 Nvme0n1 : 8.00 6945.38 27.13 0.00 0.00 0.00 0.00 0.00 00:35:52.691 [2024-10-08T16:46:21.228Z] =================================================================================================================== 00:35:52.691 [2024-10-08T16:46:21.228Z] Total : 6945.38 27.13 0.00 0.00 0.00 0.00 0.00 00:35:52.691 00:35:53.628 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:53.628 Nvme0n1 : 9.00 6907.44 26.98 0.00 0.00 0.00 0.00 0.00 00:35:53.628 [2024-10-08T16:46:22.165Z] =================================================================================================================== 00:35:53.628 [2024-10-08T16:46:22.165Z] Total : 6907.44 26.98 0.00 0.00 0.00 0.00 0.00 00:35:53.628 00:35:54.566 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:54.566 Nvme0n1 : 10.00 6859.80 26.80 0.00 0.00 0.00 0.00 0.00 00:35:54.566 [2024-10-08T16:46:23.103Z] =================================================================================================================== 00:35:54.566 [2024-10-08T16:46:23.103Z] Total : 6859.80 26.80 0.00 0.00 0.00 0.00 0.00 00:35:54.566 00:35:54.566 00:35:54.566 Latency(us) 00:35:54.566 [2024-10-08T16:46:23.103Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:54.566 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:54.566 Nvme0n1 : 10.01 6857.05 26.79 0.00 0.00 18649.32 7475.96 49516.09 00:35:54.566 [2024-10-08T16:46:23.103Z] =================================================================================================================== 00:35:54.566 [2024-10-08T16:46:23.103Z] Total : 6857.05 26.79 0.00 0.00 18649.32 7475.96 49516.09 00:35:54.566 { 00:35:54.566 "results": [ 00:35:54.566 { 00:35:54.566 "job": "Nvme0n1", 00:35:54.566 "core_mask": "0x2", 00:35:54.566 "workload": "randwrite", 00:35:54.566 "status": "finished", 00:35:54.566 "queue_depth": 128, 00:35:54.566 "io_size": 4096, 00:35:54.566 "runtime": 10.010867, 00:35:54.566 "iops": 6857.048445454325, 00:35:54.566 "mibps": 26.785345490055956, 00:35:54.566 "io_failed": 0, 00:35:54.566 "io_timeout": 0, 00:35:54.566 "avg_latency_us": 18649.315505831128, 00:35:54.566 "min_latency_us": 7475.958518518519, 00:35:54.566 "max_latency_us": 49516.08888888889 00:35:54.566 } 00:35:54.566 ], 00:35:54.566 "core_count": 1 00:35:54.566 } 00:35:54.566 18:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1366764 00:35:54.566 18:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 1366764 ']' 00:35:54.566 18:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 1366764 00:35:54.566 
18:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:35:54.566 18:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:54.566 18:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1366764 00:35:54.566 18:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:54.566 18:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:54.566 18:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1366764' 00:35:54.566 killing process with pid 1366764 00:35:54.566 18:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 1366764 00:35:54.566 Received shutdown signal, test time was about 10.000000 seconds 00:35:54.566 00:35:54.566 Latency(us) 00:35:54.566 [2024-10-08T16:46:23.103Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:54.566 [2024-10-08T16:46:23.103Z] =================================================================================================================== 00:35:54.566 [2024-10-08T16:46:23.103Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:54.566 18:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 1366764 00:35:55.133 18:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:55.701 18:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:56.268 18:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:35:56.268 18:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6dea63e9-dd88-4d68-a3f9-fe20c2083e09 00:35:57.205 18:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:35:57.205 18:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:35:57.205 18:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:35:57.772 [2024-10-08 18:46:26.031185] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:35:57.772 18:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6dea63e9-dd88-4d68-a3f9-fe20c2083e09 00:35:57.772 
18:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:35:57.772 18:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6dea63e9-dd88-4d68-a3f9-fe20c2083e09 00:35:57.772 18:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:57.772 18:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:57.772 18:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:57.772 18:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:57.772 18:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:57.772 18:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:57.772 18:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:57.772 18:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:35:57.772 18:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6dea63e9-dd88-4d68-a3f9-fe20c2083e09 00:35:58.339 request: 00:35:58.339 { 00:35:58.339 "uuid": "6dea63e9-dd88-4d68-a3f9-fe20c2083e09", 00:35:58.339 "method": "bdev_lvol_get_lvstores", 00:35:58.339 "req_id": 1 00:35:58.339 } 00:35:58.339 Got JSON-RPC error response 00:35:58.339 response: 00:35:58.339 { 00:35:58.339 "code": -19, 00:35:58.339 "message": "No such device" 00:35:58.339 } 00:35:58.339 18:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:35:58.339 18:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:58.339 18:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:58.339 18:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:58.339 18:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:35:58.908 aio_bdev 00:35:58.908 18:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
19c84537-ee19-4309-b3eb-475d905931a7 00:35:58.908 18:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=19c84537-ee19-4309-b3eb-475d905931a7 00:35:58.908 18:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:35:58.908 18:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:35:58.908 18:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:35:58.908 18:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:35:58.908 18:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:35:59.847 18:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 19c84537-ee19-4309-b3eb-475d905931a7 -t 2000 00:36:00.417 [ 00:36:00.417 { 00:36:00.417 "name": "19c84537-ee19-4309-b3eb-475d905931a7", 00:36:00.417 "aliases": [ 00:36:00.417 "lvs/lvol" 00:36:00.417 ], 00:36:00.417 "product_name": "Logical Volume", 00:36:00.417 "block_size": 4096, 00:36:00.417 "num_blocks": 38912, 00:36:00.417 "uuid": "19c84537-ee19-4309-b3eb-475d905931a7", 00:36:00.417 "assigned_rate_limits": { 00:36:00.417 "rw_ios_per_sec": 0, 00:36:00.417 "rw_mbytes_per_sec": 0, 00:36:00.417 "r_mbytes_per_sec": 0, 00:36:00.417 "w_mbytes_per_sec": 0 00:36:00.417 }, 00:36:00.417 "claimed": false, 00:36:00.417 "zoned": false, 00:36:00.417 "supported_io_types": { 00:36:00.417 "read": true, 00:36:00.417 "write": true, 00:36:00.417 "unmap": true, 00:36:00.417 "flush": false, 00:36:00.417 "reset": true, 00:36:00.417 "nvme_admin": false, 00:36:00.417 "nvme_io": false, 00:36:00.417 "nvme_io_md": false, 00:36:00.417 "write_zeroes": true, 00:36:00.417 "zcopy": false, 00:36:00.417 "get_zone_info": false, 00:36:00.417 "zone_management": false, 00:36:00.417 "zone_append": false, 00:36:00.417 "compare": false, 00:36:00.417 "compare_and_write": false, 00:36:00.417 "abort": false, 00:36:00.417 "seek_hole": true, 00:36:00.417 "seek_data": true, 00:36:00.417 "copy": false, 00:36:00.417 "nvme_iov_md": false 00:36:00.417 }, 00:36:00.417 "driver_specific": { 00:36:00.417 "lvol": { 00:36:00.417 "lvol_store_uuid": "6dea63e9-dd88-4d68-a3f9-fe20c2083e09", 00:36:00.417 "base_bdev": "aio_bdev", 00:36:00.417 "thin_provision": false, 00:36:00.417 "num_allocated_clusters": 38, 00:36:00.417 "snapshot": false, 00:36:00.417 "clone": false, 00:36:00.417 "esnap_clone": false 00:36:00.417 } 00:36:00.417 } 00:36:00.417 } 00:36:00.417 ] 00:36:00.417 18:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:36:00.417 18:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6dea63e9-dd88-4d68-a3f9-fe20c2083e09 00:36:00.417 18:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:36:00.987 18:46:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:36:00.987 18:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6dea63e9-dd88-4d68-a3f9-fe20c2083e09 00:36:00.987 18:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:36:01.925 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:36:01.925 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 19c84537-ee19-4309-b3eb-475d905931a7 00:36:02.183 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6dea63e9-dd88-4d68-a3f9-fe20c2083e09 00:36:02.442 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:36:02.702 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:36:02.702 00:36:02.702 real 0m25.738s 00:36:02.702 user 0m25.385s 00:36:02.702 sys 0m2.950s 00:36:02.702 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:02.702 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:36:02.702 ************************************ 00:36:02.702 END TEST lvs_grow_clean 00:36:02.702 ************************************ 00:36:02.702 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:36:02.702 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:36:02.702 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:02.702 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:36:02.702 ************************************ 00:36:02.702 START TEST lvs_grow_dirty 00:36:02.702 ************************************ 00:36:02.702 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:36:02.702 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:36:02.702 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:36:02.702 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:36:02.702 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:36:02.702 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:36:02.702 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:36:02.702 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:36:02.702 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:36:02.702 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:36:03.640 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:36:03.640 18:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:36:04.207 18:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=f9ad4ba2-86a3-48d5-aa47-82d60732f95d 00:36:04.207 18:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9ad4ba2-86a3-48d5-aa47-82d60732f95d 00:36:04.207 18:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:36:04.776 18:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:36:04.776 18:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:36:04.776 18:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f9ad4ba2-86a3-48d5-aa47-82d60732f95d lvol 150 00:36:05.712 18:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=92462dae-10aa-468a-9d9a-8ef785a63805 00:36:05.712 18:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:36:05.712 18:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:36:06.281 [2024-10-08 18:46:34.571258] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:36:06.281 [2024-10-08 18:46:34.571440] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:36:06.281 true 00:36:06.281 18:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9ad4ba2-86a3-48d5-aa47-82d60732f95d 00:36:06.281 18:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:36:06.849 18:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:36:06.849 18:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:36:07.416 18:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 92462dae-10aa-468a-9d9a-8ef785a63805 00:36:08.353 18:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:08.922 [2024-10-08 18:46:37.151880] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:08.922 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:09.491 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1369705 00:36:09.491 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:36:09.491 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:09.491 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1369705 /var/tmp/bdevperf.sock 00:36:09.491 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1369705 ']' 00:36:09.491 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:36:09.491 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:09.491 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:36:09.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
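The dirty variant exercises the same online-grow path as the clean run that just finished: enlarge the backing AIO file, rescan the AIO bdev so it picks up the new block count (51200 -> 102400 above), then let the lvstore claim the new clusters with bdev_lvol_grow_lvstore, which is issued two seconds into the bdevperf run below. A rough sketch of that sequence, with the workspace path shortened to $SPDK and the lvstore UUID as created above:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  AIO=$SPDK/test/nvmf/target/aio_bdev
  LVS=f9ad4ba2-86a3-48d5-aa47-82d60732f95d

  truncate -s 400M "$AIO"                        # grow the backing file from 200M to 400M
  $SPDK/scripts/rpc.py bdev_aio_rescan aio_bdev  # AIO bdev resizes in place
  $SPDK/scripts/rpc.py bdev_lvol_grow_lvstore -u $LVS
  $SPDK/scripts/rpc.py bdev_lvol_get_lvstores -u $LVS \
      | jq -r '.[0].total_data_clusters'         # expected to go from 49 to 99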
00:36:09.491 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:09.491 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:36:09.491 [2024-10-08 18:46:37.926126] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:36:09.491 [2024-10-08 18:46:37.926311] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1369705 ] 00:36:09.751 [2024-10-08 18:46:38.066293] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:09.751 [2024-10-08 18:46:38.281825] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:36:10.009 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:10.009 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:36:10.009 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:36:10.947 Nvme0n1 00:36:10.947 18:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:36:11.515 [ 00:36:11.515 { 00:36:11.515 "name": "Nvme0n1", 00:36:11.515 "aliases": [ 00:36:11.515 "92462dae-10aa-468a-9d9a-8ef785a63805" 00:36:11.515 ], 00:36:11.515 "product_name": "NVMe disk", 00:36:11.515 "block_size": 4096, 00:36:11.515 "num_blocks": 38912, 00:36:11.515 "uuid": "92462dae-10aa-468a-9d9a-8ef785a63805", 00:36:11.515 "numa_id": 1, 00:36:11.515 "assigned_rate_limits": { 00:36:11.515 "rw_ios_per_sec": 0, 00:36:11.515 "rw_mbytes_per_sec": 0, 00:36:11.515 "r_mbytes_per_sec": 0, 00:36:11.515 "w_mbytes_per_sec": 0 00:36:11.515 }, 00:36:11.515 "claimed": false, 00:36:11.515 "zoned": false, 00:36:11.515 "supported_io_types": { 00:36:11.515 "read": true, 00:36:11.515 "write": true, 00:36:11.515 "unmap": true, 00:36:11.515 "flush": true, 00:36:11.515 "reset": true, 00:36:11.515 "nvme_admin": true, 00:36:11.515 "nvme_io": true, 00:36:11.515 "nvme_io_md": false, 00:36:11.515 "write_zeroes": true, 00:36:11.515 "zcopy": false, 00:36:11.515 "get_zone_info": false, 00:36:11.515 "zone_management": false, 00:36:11.515 "zone_append": false, 00:36:11.515 "compare": true, 00:36:11.515 "compare_and_write": true, 00:36:11.515 "abort": true, 00:36:11.515 "seek_hole": false, 00:36:11.515 "seek_data": false, 00:36:11.515 "copy": true, 00:36:11.515 "nvme_iov_md": false 00:36:11.515 }, 00:36:11.515 "memory_domains": [ 00:36:11.515 { 00:36:11.515 "dma_device_id": "system", 00:36:11.515 "dma_device_type": 1 00:36:11.515 } 00:36:11.515 ], 00:36:11.515 "driver_specific": { 00:36:11.515 "nvme": [ 00:36:11.515 { 00:36:11.515 "trid": { 00:36:11.515 "trtype": "TCP", 00:36:11.515 "adrfam": "IPv4", 00:36:11.515 "traddr": "10.0.0.2", 00:36:11.515 "trsvcid": "4420", 00:36:11.515 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:36:11.515 }, 00:36:11.515 "ctrlr_data": 
{ 00:36:11.515 "cntlid": 1, 00:36:11.515 "vendor_id": "0x8086", 00:36:11.515 "model_number": "SPDK bdev Controller", 00:36:11.515 "serial_number": "SPDK0", 00:36:11.515 "firmware_revision": "25.01", 00:36:11.515 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:11.515 "oacs": { 00:36:11.515 "security": 0, 00:36:11.515 "format": 0, 00:36:11.515 "firmware": 0, 00:36:11.515 "ns_manage": 0 00:36:11.515 }, 00:36:11.515 "multi_ctrlr": true, 00:36:11.515 "ana_reporting": false 00:36:11.515 }, 00:36:11.515 "vs": { 00:36:11.515 "nvme_version": "1.3" 00:36:11.515 }, 00:36:11.516 "ns_data": { 00:36:11.516 "id": 1, 00:36:11.516 "can_share": true 00:36:11.516 } 00:36:11.516 } 00:36:11.516 ], 00:36:11.516 "mp_policy": "active_passive" 00:36:11.516 } 00:36:11.516 } 00:36:11.516 ] 00:36:11.516 18:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1369964 00:36:11.516 18:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:36:11.516 18:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:36:11.516 Running I/O for 10 seconds... 00:36:12.453 Latency(us) 00:36:12.453 [2024-10-08T16:46:40.990Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:12.453 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:12.453 Nvme0n1 : 1.00 14097.00 55.07 0.00 0.00 0.00 0.00 0.00 00:36:12.453 [2024-10-08T16:46:40.990Z] =================================================================================================================== 00:36:12.453 [2024-10-08T16:46:40.990Z] Total : 14097.00 55.07 0.00 0.00 0.00 0.00 0.00 00:36:12.453 00:36:13.406 18:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f9ad4ba2-86a3-48d5-aa47-82d60732f95d 00:36:13.683 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:13.683 Nvme0n1 : 2.00 10541.00 41.18 0.00 0.00 0.00 0.00 0.00 00:36:13.683 [2024-10-08T16:46:42.220Z] =================================================================================================================== 00:36:13.683 [2024-10-08T16:46:42.220Z] Total : 10541.00 41.18 0.00 0.00 0.00 0.00 0.00 00:36:13.683 00:36:13.978 true 00:36:13.978 18:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9ad4ba2-86a3-48d5-aa47-82d60732f95d 00:36:13.978 18:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:36:14.236 18:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:36:14.236 18:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:36:14.236 18:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1369964 00:36:14.495 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:14.495 Nvme0n1 : 
3.00 11790.00 46.05 0.00 0.00 0.00 0.00 0.00 00:36:14.495 [2024-10-08T16:46:43.032Z] =================================================================================================================== 00:36:14.495 [2024-10-08T16:46:43.032Z] Total : 11790.00 46.05 0.00 0.00 0.00 0.00 0.00 00:36:14.495 00:36:15.872 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:15.872 Nvme0n1 : 4.00 10867.00 42.45 0.00 0.00 0.00 0.00 0.00 00:36:15.872 [2024-10-08T16:46:44.409Z] =================================================================================================================== 00:36:15.872 [2024-10-08T16:46:44.410Z] Total : 10867.00 42.45 0.00 0.00 0.00 0.00 0.00 00:36:15.873 00:36:16.808 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:16.808 Nvme0n1 : 5.00 9970.40 38.95 0.00 0.00 0.00 0.00 0.00 00:36:16.808 [2024-10-08T16:46:45.345Z] =================================================================================================================== 00:36:16.808 [2024-10-08T16:46:45.345Z] Total : 9970.40 38.95 0.00 0.00 0.00 0.00 0.00 00:36:16.808 00:36:17.743 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:17.743 Nvme0n1 : 6.00 9393.83 36.69 0.00 0.00 0.00 0.00 0.00 00:36:17.743 [2024-10-08T16:46:46.280Z] =================================================================================================================== 00:36:17.743 [2024-10-08T16:46:46.280Z] Total : 9393.83 36.69 0.00 0.00 0.00 0.00 0.00 00:36:17.743 00:36:18.678 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:18.678 Nvme0n1 : 7.00 9412.57 36.77 0.00 0.00 0.00 0.00 0.00 00:36:18.678 [2024-10-08T16:46:47.215Z] =================================================================================================================== 00:36:18.678 [2024-10-08T16:46:47.215Z] Total : 9412.57 36.77 0.00 0.00 0.00 0.00 0.00 00:36:18.678 00:36:19.616 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:19.616 Nvme0n1 : 8.00 10014.00 39.12 0.00 0.00 0.00 0.00 0.00 00:36:19.616 [2024-10-08T16:46:48.153Z] =================================================================================================================== 00:36:19.616 [2024-10-08T16:46:48.153Z] Total : 10014.00 39.12 0.00 0.00 0.00 0.00 0.00 00:36:19.616 00:36:20.553 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:20.553 Nvme0n1 : 9.00 9663.33 37.75 0.00 0.00 0.00 0.00 0.00 00:36:20.553 [2024-10-08T16:46:49.090Z] =================================================================================================================== 00:36:20.553 [2024-10-08T16:46:49.090Z] Total : 9663.33 37.75 0.00 0.00 0.00 0.00 0.00 00:36:20.553 00:36:21.489 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:21.489 Nvme0n1 : 10.00 9363.80 36.58 0.00 0.00 0.00 0.00 0.00 00:36:21.489 [2024-10-08T16:46:50.026Z] =================================================================================================================== 00:36:21.489 [2024-10-08T16:46:50.026Z] Total : 9363.80 36.58 0.00 0.00 0.00 0.00 0.00 00:36:21.489 00:36:21.489 00:36:21.489 Latency(us) 00:36:21.489 [2024-10-08T16:46:50.026Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:21.489 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:21.489 Nvme0n1 : 10.01 9360.65 36.57 0.00 0.00 13664.50 6140.97 35729.26 00:36:21.489 
[2024-10-08T16:46:50.026Z] =================================================================================================================== 00:36:21.489 [2024-10-08T16:46:50.026Z] Total : 9360.65 36.57 0.00 0.00 13664.50 6140.97 35729.26 00:36:21.489 { 00:36:21.489 "results": [ 00:36:21.489 { 00:36:21.489 "job": "Nvme0n1", 00:36:21.489 "core_mask": "0x2", 00:36:21.489 "workload": "randwrite", 00:36:21.489 "status": "finished", 00:36:21.489 "queue_depth": 128, 00:36:21.489 "io_size": 4096, 00:36:21.489 "runtime": 10.010198, 00:36:21.489 "iops": 9360.654005045655, 00:36:21.489 "mibps": 36.56505470720959, 00:36:21.489 "io_failed": 0, 00:36:21.489 "io_timeout": 0, 00:36:21.489 "avg_latency_us": 13664.501596092263, 00:36:21.489 "min_latency_us": 6140.965925925926, 00:36:21.489 "max_latency_us": 35729.2562962963 00:36:21.489 } 00:36:21.489 ], 00:36:21.489 "core_count": 1 00:36:21.489 } 00:36:21.750 18:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1369705 00:36:21.750 18:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 1369705 ']' 00:36:21.750 18:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 1369705 00:36:21.750 18:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:36:21.750 18:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:21.750 18:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1369705 00:36:21.750 18:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:21.750 18:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:21.750 18:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1369705' 00:36:21.750 killing process with pid 1369705 00:36:21.750 18:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 1369705 00:36:21.750 Received shutdown signal, test time was about 10.000000 seconds 00:36:21.750 00:36:21.750 Latency(us) 00:36:21.750 [2024-10-08T16:46:50.287Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:21.750 [2024-10-08T16:46:50.287Z] =================================================================================================================== 00:36:21.750 [2024-10-08T16:46:50.287Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:21.750 18:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 1369705 00:36:22.010 18:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:22.581 18:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:36:23.515 18:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9ad4ba2-86a3-48d5-aa47-82d60732f95d 00:36:23.515 18:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:36:24.082 18:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:36:24.082 18:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:36:24.082 18:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1365943 00:36:24.082 18:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1365943 00:36:24.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1365943 Killed "${NVMF_APP[@]}" "$@" 00:36:24.082 18:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:36:24.082 18:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:36:24.082 18:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:36:24.082 18:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:24.082 18:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:36:24.082 18:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=1371413 00:36:24.082 18:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:36:24.082 18:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 1371413 00:36:24.082 18:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1371413 ']' 00:36:24.082 18:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:24.082 18:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:24.082 18:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:24.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:36:24.082 18:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:24.082 18:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:36:24.082 [2024-10-08 18:46:52.550611] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:24.082 [2024-10-08 18:46:52.552856] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:36:24.082 [2024-10-08 18:46:52.552945] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:24.343 [2024-10-08 18:46:52.714509] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:24.604 [2024-10-08 18:46:52.932028] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:24.604 [2024-10-08 18:46:52.932142] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:24.604 [2024-10-08 18:46:52.932180] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:24.604 [2024-10-08 18:46:52.932210] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:24.604 [2024-10-08 18:46:52.932236] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:24.604 [2024-10-08 18:46:52.933223] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:36:24.604 [2024-10-08 18:46:53.102108] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:24.604 [2024-10-08 18:46:53.102815] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
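What follows is the recovery half of the dirty test: the target that owned the lvstore was killed with SIGKILL so the lvstore is left dirty, a fresh nvmf_tgt was started in interrupt mode, and re-registering the AIO file is enough to make the blobstore replay its metadata (the "Performing recovery on blobstore" notices just below). Sketched roughly, with the workspace path shortened to $SPDK and the pid/UUIDs taken from this trace:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  kill -9 1365943    # hard-kill the original target; no clean lvstore shutdown

  # restart the target inside the test netns, this time in interrupt mode
  ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &

  # reloading the AIO bdev reloads the blobstore and triggers recovery
  $SPDK/scripts/rpc.py bdev_aio_create $SPDK/test/nvmf/target/aio_bdev aio_bdev 4096
  $SPDK/scripts/rpc.py bdev_wait_for_examine
  $SPDK/scripts/rpc.py bdev_get_bdevs -b 92462dae-10aa-468a-9d9a-8ef785a63805 -t 2000
  $SPDK/scripts/rpc.py bdev_lvol_get_lvstores -u f9ad4ba2-86a3-48d5-aa47-82d60732f95d \
      | jq -r '.[0].free_clusters'   # expected to still read 61 after recovery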
00:36:25.173 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:25.173 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:36:25.173 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:36:25.173 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:25.173 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:36:25.173 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:25.173 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:36:25.738 [2024-10-08 18:46:54.016034] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:36:25.738 [2024-10-08 18:46:54.016182] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:36:25.738 [2024-10-08 18:46:54.016231] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:36:25.738 18:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:36:25.738 18:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 92462dae-10aa-468a-9d9a-8ef785a63805 00:36:25.738 18:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=92462dae-10aa-468a-9d9a-8ef785a63805 00:36:25.738 18:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:36:25.738 18:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:36:25.738 18:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:36:25.738 18:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:36:25.738 18:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:36:25.998 18:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 92462dae-10aa-468a-9d9a-8ef785a63805 -t 2000 00:36:26.568 [ 00:36:26.568 { 00:36:26.568 "name": "92462dae-10aa-468a-9d9a-8ef785a63805", 00:36:26.568 "aliases": [ 00:36:26.568 "lvs/lvol" 00:36:26.568 ], 00:36:26.568 "product_name": "Logical Volume", 00:36:26.568 "block_size": 4096, 00:36:26.568 "num_blocks": 38912, 00:36:26.568 "uuid": "92462dae-10aa-468a-9d9a-8ef785a63805", 00:36:26.568 "assigned_rate_limits": { 00:36:26.568 "rw_ios_per_sec": 0, 00:36:26.568 "rw_mbytes_per_sec": 0, 00:36:26.568 
"r_mbytes_per_sec": 0, 00:36:26.568 "w_mbytes_per_sec": 0 00:36:26.568 }, 00:36:26.568 "claimed": false, 00:36:26.568 "zoned": false, 00:36:26.568 "supported_io_types": { 00:36:26.568 "read": true, 00:36:26.568 "write": true, 00:36:26.568 "unmap": true, 00:36:26.568 "flush": false, 00:36:26.568 "reset": true, 00:36:26.568 "nvme_admin": false, 00:36:26.568 "nvme_io": false, 00:36:26.568 "nvme_io_md": false, 00:36:26.568 "write_zeroes": true, 00:36:26.568 "zcopy": false, 00:36:26.568 "get_zone_info": false, 00:36:26.568 "zone_management": false, 00:36:26.568 "zone_append": false, 00:36:26.568 "compare": false, 00:36:26.568 "compare_and_write": false, 00:36:26.568 "abort": false, 00:36:26.568 "seek_hole": true, 00:36:26.568 "seek_data": true, 00:36:26.568 "copy": false, 00:36:26.568 "nvme_iov_md": false 00:36:26.568 }, 00:36:26.568 "driver_specific": { 00:36:26.568 "lvol": { 00:36:26.568 "lvol_store_uuid": "f9ad4ba2-86a3-48d5-aa47-82d60732f95d", 00:36:26.568 "base_bdev": "aio_bdev", 00:36:26.568 "thin_provision": false, 00:36:26.568 "num_allocated_clusters": 38, 00:36:26.568 "snapshot": false, 00:36:26.568 "clone": false, 00:36:26.568 "esnap_clone": false 00:36:26.568 } 00:36:26.568 } 00:36:26.568 } 00:36:26.568 ] 00:36:26.568 18:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:36:26.568 18:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9ad4ba2-86a3-48d5-aa47-82d60732f95d 00:36:26.568 18:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:36:27.139 18:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:36:27.139 18:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9ad4ba2-86a3-48d5-aa47-82d60732f95d 00:36:27.139 18:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:36:28.080 18:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:36:28.080 18:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:36:28.648 [2024-10-08 18:46:56.901959] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:36:28.648 18:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9ad4ba2-86a3-48d5-aa47-82d60732f95d 00:36:28.648 18:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:36:28.648 18:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9ad4ba2-86a3-48d5-aa47-82d60732f95d 00:36:28.648 18:46:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:28.648 18:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:28.648 18:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:28.648 18:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:28.648 18:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:28.648 18:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:28.648 18:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:28.648 18:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:36:28.648 18:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9ad4ba2-86a3-48d5-aa47-82d60732f95d 00:36:29.218 request: 00:36:29.218 { 00:36:29.218 "uuid": "f9ad4ba2-86a3-48d5-aa47-82d60732f95d", 00:36:29.218 "method": "bdev_lvol_get_lvstores", 00:36:29.218 "req_id": 1 00:36:29.218 } 00:36:29.218 Got JSON-RPC error response 00:36:29.218 response: 00:36:29.218 { 00:36:29.218 "code": -19, 00:36:29.218 "message": "No such device" 00:36:29.218 } 00:36:29.218 18:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:36:29.218 18:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:29.218 18:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:29.218 18:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:29.218 18:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:36:30.157 aio_bdev 00:36:30.157 18:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 92462dae-10aa-468a-9d9a-8ef785a63805 00:36:30.157 18:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=92462dae-10aa-468a-9d9a-8ef785a63805 00:36:30.157 18:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:36:30.157 18:46:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:36:30.157 18:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:36:30.157 18:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:36:30.157 18:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:36:30.727 18:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 92462dae-10aa-468a-9d9a-8ef785a63805 -t 2000 00:36:31.298 [ 00:36:31.298 { 00:36:31.298 "name": "92462dae-10aa-468a-9d9a-8ef785a63805", 00:36:31.298 "aliases": [ 00:36:31.298 "lvs/lvol" 00:36:31.298 ], 00:36:31.298 "product_name": "Logical Volume", 00:36:31.298 "block_size": 4096, 00:36:31.298 "num_blocks": 38912, 00:36:31.298 "uuid": "92462dae-10aa-468a-9d9a-8ef785a63805", 00:36:31.298 "assigned_rate_limits": { 00:36:31.298 "rw_ios_per_sec": 0, 00:36:31.298 "rw_mbytes_per_sec": 0, 00:36:31.298 "r_mbytes_per_sec": 0, 00:36:31.298 "w_mbytes_per_sec": 0 00:36:31.298 }, 00:36:31.298 "claimed": false, 00:36:31.298 "zoned": false, 00:36:31.298 "supported_io_types": { 00:36:31.298 "read": true, 00:36:31.298 "write": true, 00:36:31.298 "unmap": true, 00:36:31.298 "flush": false, 00:36:31.298 "reset": true, 00:36:31.298 "nvme_admin": false, 00:36:31.298 "nvme_io": false, 00:36:31.298 "nvme_io_md": false, 00:36:31.298 "write_zeroes": true, 00:36:31.298 "zcopy": false, 00:36:31.298 "get_zone_info": false, 00:36:31.298 "zone_management": false, 00:36:31.298 "zone_append": false, 00:36:31.298 "compare": false, 00:36:31.298 "compare_and_write": false, 00:36:31.298 "abort": false, 00:36:31.298 "seek_hole": true, 00:36:31.298 "seek_data": true, 00:36:31.298 "copy": false, 00:36:31.298 "nvme_iov_md": false 00:36:31.298 }, 00:36:31.298 "driver_specific": { 00:36:31.298 "lvol": { 00:36:31.298 "lvol_store_uuid": "f9ad4ba2-86a3-48d5-aa47-82d60732f95d", 00:36:31.298 "base_bdev": "aio_bdev", 00:36:31.298 "thin_provision": false, 00:36:31.298 "num_allocated_clusters": 38, 00:36:31.298 "snapshot": false, 00:36:31.298 "clone": false, 00:36:31.298 "esnap_clone": false 00:36:31.298 } 00:36:31.298 } 00:36:31.298 } 00:36:31.298 ] 00:36:31.298 18:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:36:31.298 18:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9ad4ba2-86a3-48d5-aa47-82d60732f95d 00:36:31.298 18:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:36:32.237 18:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:36:32.237 18:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9ad4ba2-86a3-48d5-aa47-82d60732f95d 00:36:32.237 18:47:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:36:32.808 18:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:36:32.808 18:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 92462dae-10aa-468a-9d9a-8ef785a63805 00:36:33.377 18:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f9ad4ba2-86a3-48d5-aa47-82d60732f95d 00:36:34.317 18:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:36:34.577 18:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:36:34.577 00:36:34.577 real 0m31.712s 00:36:34.577 user 0m48.183s 00:36:34.577 sys 0m6.510s 00:36:34.577 18:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:34.577 18:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:36:34.577 ************************************ 00:36:34.577 END TEST lvs_grow_dirty 00:36:34.577 ************************************ 00:36:34.577 18:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:36:34.577 18:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:36:34.577 18:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:36:34.577 18:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:36:34.577 18:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:36:34.577 18:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:36:34.577 18:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:36:34.577 18:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:36:34.577 18:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:36:34.577 nvmf_trace.0 00:36:34.577 18:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:36:34.578 18:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:36:34.578 18:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:36:34.578 18:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
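Condensed from the trace, the lvs_grow_dirty case that just finished boils down to the following RPC sequence (a sketch reconstructed from the log, not the test script itself; the rpc.py path, UUIDs and expected cluster counts are taken verbatim from the trace above, and the leading "!" merely mirrors the expected-failure wrapper):

RPC_PY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
AIO_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev

# 1) With the AIO base bdev gone, the store must be unreachable: the call
#    returns -19 "No such device" and the wrapper only asserts that it failed.
! "$RPC_PY" bdev_lvol_get_lvstores -u f9ad4ba2-86a3-48d5-aa47-82d60732f95d

# 2) Recreate the AIO base bdev (4096-byte blocks) so the persisted lvstore is
#    examined again, then wait up to the default 2000 ms for the lvol bdev.
"$RPC_PY" bdev_aio_create "$AIO_FILE" aio_bdev 4096
"$RPC_PY" bdev_wait_for_examine
"$RPC_PY" bdev_get_bdevs -b 92462dae-10aa-468a-9d9a-8ef785a63805 -t 2000

# 3) Verify the grown, dirty store: 61 free out of 99 total data clusters.
lvs_json=$("$RPC_PY" bdev_lvol_get_lvstores -u f9ad4ba2-86a3-48d5-aa47-82d60732f95d)
(( $(jq -r '.[0].free_clusters' <<<"$lvs_json") == 61 ))
(( $(jq -r '.[0].total_data_clusters' <<<"$lvs_json") == 99 ))

# 4) Tear down: lvol, lvstore, AIO bdev, backing file.
"$RPC_PY" bdev_lvol_delete 92462dae-10aa-468a-9d9a-8ef785a63805
"$RPC_PY" bdev_lvol_delete_lvstore -u f9ad4ba2-86a3-48d5-aa47-82d60732f95d
"$RPC_PY" bdev_aio_delete aio_bdev
rm -f "$AIO_FILE"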
00:36:34.578 18:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:34.578 18:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:36:34.578 18:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:34.578 18:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:34.578 rmmod nvme_tcp 00:36:34.578 rmmod nvme_fabrics 00:36:34.578 rmmod nvme_keyring 00:36:34.578 18:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:34.578 18:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:36:34.578 18:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:36:34.578 18:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 1371413 ']' 00:36:34.578 18:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 1371413 00:36:34.578 18:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 1371413 ']' 00:36:34.578 18:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 1371413 00:36:34.578 18:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:36:34.578 18:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:34.578 18:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1371413 00:36:34.578 18:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:34.578 18:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:34.578 18:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1371413' 00:36:34.578 killing process with pid 1371413 00:36:34.578 18:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 1371413 00:36:34.578 18:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 1371413 00:36:35.149 18:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:36:35.149 18:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:36:35.149 18:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:36:35.149 18:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:36:35.149 18:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:36:35.149 18:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:36:35.149 18:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:36:35.149 18:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:35.149 18:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:35.149 18:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:35.149 18:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:35.149 18:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:37.058 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:37.058 00:36:37.058 real 1m4.903s 00:36:37.058 user 1m16.200s 00:36:37.058 sys 0m12.958s 00:36:37.058 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:37.058 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:36:37.058 ************************************ 00:36:37.058 END TEST nvmf_lvs_grow 00:36:37.058 ************************************ 00:36:37.319 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:36:37.319 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:36:37.319 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:37.319 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:37.319 ************************************ 00:36:37.319 START TEST nvmf_bdev_io_wait 00:36:37.319 ************************************ 00:36:37.319 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:36:37.319 * Looking for test storage... 
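At this point the suite crosses a test boundary: nvmftestfini has torn down the previous target and bdev_io_wait.sh has just been launched through run_test with the same suite-wide flags. Pieced together from the trace above, the hand-off looks roughly like this (a sketch; the pid and interface names are the ones recorded in the log, and the netns deletion is an assumption about what the _remove_spdk_ns helper does):

# Unload the initiator-side NVMe modules (nvme_fabrics/nvme_keyring go with them).
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

# Stop the previous nvmf_tgt instance.
kill 1371413 && wait 1371413

# Drop only the SPDK_NVMF-tagged firewall rules, keep everything else intact.
iptables-save | grep -v SPDK_NVMF | iptables-restore

# Remove the target namespace (assumed behaviour of _remove_spdk_ns) and flush
# the initiator-side address.
ip netns delete cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1

# Next test in the suite, same transport and interrupt-mode flags.
run_test nvmf_bdev_io_wait \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh \
    --transport=tcp --interrupt-mode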
00:36:37.319 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:37.319 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:36:37.319 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:36:37.319 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:36:37.319 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:36:37.319 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:37.319 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:37.319 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:37.319 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:36:37.319 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:36:37.319 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:36:37.319 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:36:37.319 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:36:37.319 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:36:37.320 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:36:37.320 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:37.320 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:36:37.320 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:36:37.320 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:37.320 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:37.320 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:36:37.320 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:36:37.320 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:37.320 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:36:37.320 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:36:37.581 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:36:37.581 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:36:37.581 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:37.581 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:36:37.581 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:36:37.581 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:37.581 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:37.581 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:36:37.581 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:37.582 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:36:37.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:37.582 --rc genhtml_branch_coverage=1 00:36:37.582 --rc genhtml_function_coverage=1 00:36:37.582 --rc genhtml_legend=1 00:36:37.582 --rc geninfo_all_blocks=1 00:36:37.582 --rc geninfo_unexecuted_blocks=1 00:36:37.582 00:36:37.582 ' 00:36:37.582 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:36:37.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:37.582 --rc genhtml_branch_coverage=1 00:36:37.582 --rc genhtml_function_coverage=1 00:36:37.582 --rc genhtml_legend=1 00:36:37.582 --rc geninfo_all_blocks=1 00:36:37.582 --rc geninfo_unexecuted_blocks=1 00:36:37.582 00:36:37.582 ' 00:36:37.582 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:36:37.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:37.582 --rc genhtml_branch_coverage=1 00:36:37.582 --rc genhtml_function_coverage=1 00:36:37.582 --rc genhtml_legend=1 00:36:37.582 --rc geninfo_all_blocks=1 00:36:37.582 --rc geninfo_unexecuted_blocks=1 00:36:37.582 00:36:37.582 ' 00:36:37.582 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:36:37.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:37.582 --rc genhtml_branch_coverage=1 00:36:37.582 --rc genhtml_function_coverage=1 00:36:37.582 --rc genhtml_legend=1 00:36:37.582 --rc geninfo_all_blocks=1 00:36:37.582 --rc 
geninfo_unexecuted_blocks=1 00:36:37.582 00:36:37.582 ' 00:36:37.582 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:37.582 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:36:37.582 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:37.582 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:37.582 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:37.582 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:37.582 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:37.582 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:37.582 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:37.582 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:37.582 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:37.582 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:37.582 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:36:37.582 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:36:37.582 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:37.582 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:37.582 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:37.582 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:37.582 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:37.582 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:36:37.582 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:37.582 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:37.582 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:37.582 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:37.582 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:37.582 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:37.582 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:36:37.582 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:37.582 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:36:37.582 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:37.582 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:37.582 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:37.582 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:37.582 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:36:37.582 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:37.582 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:37.582 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:37.582 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:37.582 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:37.582 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:37.582 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:37.582 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:36:37.582 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:36:37.582 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:37.582 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:36:37.582 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:36:37.582 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:36:37.582 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:37.582 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:37.582 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:37.582 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:36:37.582 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:36:37.582 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:36:37.582 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:36:40.161 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:40.161 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:36:40.161 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:40.161 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:40.161 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:40.161 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:40.161 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:36:40.161 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:36:40.161 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:40.161 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:36:40.161 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:36:40.161 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:36:40.161 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:36:40.161 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:36:40.161 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:36:40.161 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:40.161 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:40.161 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:40.161 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:40.161 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:40.161 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:40.161 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:40.161 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:40.161 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:40.161 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:40.161 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:40.161 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:40.161 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:40.161 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:40.161 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
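The discovery traced here works purely off PCI vendor/device IDs; since this run targets E810 NICs over TCP, only the two E810 IDs end up in pci_devs. A condensed sketch (IDs copied from the trace; in the real helper the arrays hold the resolved PCI addresses from pci_bus_cache rather than the raw IDs):

intel=0x8086 mellanox=0x15b3
e810=("$intel:0x1592" "$intel:0x159b")
x722=("$intel:0x37d2")
mlx=("$mellanox:0xa2dc" "$mellanox:0x1021" "$mellanox:0xa2d6" "$mellanox:0x101d"
     "$mellanox:0x101b" "$mellanox:0x1017" "$mellanox:0x1019" "$mellanox:0x1015"
     "$mellanox:0x1013")

pci_devs=("${e810[@]}")   # e810 over tcp: keep only the E810 entries (two found on this host)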
00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:36:40.162 Found 0000:84:00.0 (0x8086 - 0x159b) 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:36:40.162 Found 0000:84:00.1 (0x8086 - 0x159b) 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:36:40.162 Found net devices under 0000:84:00.0: cvl_0_0 00:36:40.162 
18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:36:40.162 Found net devices under 0000:84:00.1: cvl_0_1 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:40.162 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:40.162 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.304 ms 00:36:40.162 00:36:40.162 --- 10.0.0.2 ping statistics --- 00:36:40.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:40.162 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:40.162 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:40.162 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:36:40.162 00:36:40.162 --- 10.0.0.1 ping statistics --- 00:36:40.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:40.162 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=1374661 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 1374661 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 1374661 ']' 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:40.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
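Those two ping replies confirm the point-to-point layout that nvmf_tcp_init set up just above: the first E810 port (cvl_0_0) is moved into a private namespace and becomes the target side at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator side at 10.0.0.1, and a tagged iptables rule opens the NVMe/TCP port. Reconstructed from the trace (a sketch, not the helper itself):

ip netns add cvl_0_0_ns_spdk                        # target namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it

ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Allow the NVMe/TCP listener port; the comment tag lets nvmftestfini strip
# exactly this rule again later.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator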
00:36:40.162 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:40.163 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:36:40.421 [2024-10-08 18:47:08.705621] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:40.421 [2024-10-08 18:47:08.706923] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:36:40.421 [2024-10-08 18:47:08.706992] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:40.421 [2024-10-08 18:47:08.790406] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:40.421 [2024-10-08 18:47:08.930080] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:40.421 [2024-10-08 18:47:08.930150] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:40.421 [2024-10-08 18:47:08.930176] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:40.421 [2024-10-08 18:47:08.930193] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:40.421 [2024-10-08 18:47:08.930208] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:40.421 [2024-10-08 18:47:08.932309] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:36:40.421 [2024-10-08 18:47:08.932372] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:36:40.421 [2024-10-08 18:47:08.932445] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:36:40.421 [2024-10-08 18:47:08.932449] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:36:40.421 [2024-10-08 18:47:08.932993] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
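The notices above are the target coming up under nvmfappstart: nvmf_tgt is launched inside the target namespace, in interrupt mode, pinned to four cores (-m 0xF), and held at --wait-for-rpc so the test can tune bdev options before subsystems initialize. Condensed from the trace (arguments and pid as recorded above):

ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &
nvmfpid=$!                # 1374661 in this run

waitforlisten "$nvmfpid"  # blocks until the app answers on /var/tmp/spdk.sock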
00:36:40.680 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:40.680 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:36:40.680 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:36:40.680 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:40.680 18:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:36:40.680 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:40.680 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:36:40.680 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:40.680 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:36:40.680 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:40.680 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:36:40.680 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:40.680 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:36:40.680 [2024-10-08 18:47:09.102611] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:40.680 [2024-10-08 18:47:09.102882] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:40.680 [2024-10-08 18:47:09.103960] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:40.680 [2024-10-08 18:47:09.104969] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
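This is the step that gives the test its name: because the target is still parked at --wait-for-rpc, the script can shrink the bdev_io pool before framework initialization, so the queue-depth-128 bdevperf workloads later run out of bdev_io buffers and exercise the wait-for-buffer path. In effect (the reading of -p/-c as bdev_io pool and per-thread cache size is an interpretation, not something the trace spells out):

rpc_cmd bdev_set_options -p 5 -c 1   # tiny bdev_io pool (5) and cache (1)
rpc_cmd framework_start_init         # now let subsystem initialization proceed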
00:36:40.680 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:40.680 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:40.680 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:40.680 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:36:40.680 [2024-10-08 18:47:09.113238] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:40.680 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:40.680 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:40.680 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:40.680 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:36:40.680 Malloc0 00:36:40.680 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:40.680 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:40.680 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:40.680 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:36:40.680 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:40.680 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:40.680 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:40.680 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:36:40.680 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:40.680 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:40.681 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:40.681 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:36:40.681 [2024-10-08 18:47:09.189400] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:40.681 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:40.681 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1374756 00:36:40.681 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:36:40.681 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:36:40.681 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1374758 00:36:40.681 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:36:40.681 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:36:40.681 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:36:40.681 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:36:40.681 { 00:36:40.681 "params": { 00:36:40.681 "name": "Nvme$subsystem", 00:36:40.681 "trtype": "$TEST_TRANSPORT", 00:36:40.681 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:40.681 "adrfam": "ipv4", 00:36:40.681 "trsvcid": "$NVMF_PORT", 00:36:40.681 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:40.681 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:40.681 "hdgst": ${hdgst:-false}, 00:36:40.681 "ddgst": ${ddgst:-false} 00:36:40.681 }, 00:36:40.681 "method": "bdev_nvme_attach_controller" 00:36:40.681 } 00:36:40.681 EOF 00:36:40.681 )") 00:36:40.681 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:36:40.681 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:36:40.681 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1374760 00:36:40.681 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:36:40.681 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:36:40.681 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:36:40.681 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:36:40.681 { 00:36:40.681 "params": { 00:36:40.681 "name": "Nvme$subsystem", 00:36:40.681 "trtype": "$TEST_TRANSPORT", 00:36:40.681 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:40.681 "adrfam": "ipv4", 00:36:40.681 "trsvcid": "$NVMF_PORT", 00:36:40.681 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:40.681 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:40.681 "hdgst": ${hdgst:-false}, 00:36:40.681 "ddgst": ${ddgst:-false} 00:36:40.681 }, 00:36:40.681 "method": "bdev_nvme_attach_controller" 00:36:40.681 } 00:36:40.681 EOF 00:36:40.681 )") 00:36:40.681 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:36:40.681 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:36:40.681 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@34 -- # UNMAP_PID=1374763 00:36:40.681 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:36:40.681 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:36:40.681 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:36:40.681 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:36:40.681 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:36:40.681 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:36:40.681 { 00:36:40.681 "params": { 00:36:40.681 "name": "Nvme$subsystem", 00:36:40.681 "trtype": "$TEST_TRANSPORT", 00:36:40.681 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:40.681 "adrfam": "ipv4", 00:36:40.681 "trsvcid": "$NVMF_PORT", 00:36:40.681 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:40.681 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:40.681 "hdgst": ${hdgst:-false}, 00:36:40.681 "ddgst": ${ddgst:-false} 00:36:40.681 }, 00:36:40.681 "method": "bdev_nvme_attach_controller" 00:36:40.681 } 00:36:40.681 EOF 00:36:40.681 )") 00:36:40.681 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:36:40.681 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:36:40.681 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:36:40.681 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:36:40.681 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:36:40.681 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:36:40.681 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:36:40.681 { 00:36:40.681 "params": { 00:36:40.681 "name": "Nvme$subsystem", 00:36:40.681 "trtype": "$TEST_TRANSPORT", 00:36:40.681 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:40.681 "adrfam": "ipv4", 00:36:40.681 "trsvcid": "$NVMF_PORT", 00:36:40.681 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:40.681 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:40.681 "hdgst": ${hdgst:-false}, 00:36:40.681 "ddgst": ${ddgst:-false} 00:36:40.681 }, 00:36:40.681 "method": "bdev_nvme_attach_controller" 00:36:40.681 } 00:36:40.681 EOF 00:36:40.681 )") 00:36:40.681 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:36:40.681 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1374756 00:36:40.681 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:36:40.681 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:36:40.681 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
00:36:40.681 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:36:40.681 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:36:40.681 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:36:40.681 "params": { 00:36:40.681 "name": "Nvme1", 00:36:40.681 "trtype": "tcp", 00:36:40.681 "traddr": "10.0.0.2", 00:36:40.681 "adrfam": "ipv4", 00:36:40.681 "trsvcid": "4420", 00:36:40.681 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:40.681 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:40.681 "hdgst": false, 00:36:40.681 "ddgst": false 00:36:40.681 }, 00:36:40.681 "method": "bdev_nvme_attach_controller" 00:36:40.681 }' 00:36:40.681 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:36:40.681 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:36:40.681 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:36:40.681 "params": { 00:36:40.681 "name": "Nvme1", 00:36:40.681 "trtype": "tcp", 00:36:40.681 "traddr": "10.0.0.2", 00:36:40.681 "adrfam": "ipv4", 00:36:40.681 "trsvcid": "4420", 00:36:40.681 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:40.681 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:40.681 "hdgst": false, 00:36:40.681 "ddgst": false 00:36:40.681 }, 00:36:40.681 "method": "bdev_nvme_attach_controller" 00:36:40.681 }' 00:36:40.681 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:36:40.681 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:36:40.681 "params": { 00:36:40.681 "name": "Nvme1", 00:36:40.681 "trtype": "tcp", 00:36:40.681 "traddr": "10.0.0.2", 00:36:40.681 "adrfam": "ipv4", 00:36:40.681 "trsvcid": "4420", 00:36:40.681 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:40.681 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:40.681 "hdgst": false, 00:36:40.681 "ddgst": false 00:36:40.681 }, 00:36:40.681 "method": "bdev_nvme_attach_controller" 00:36:40.681 }' 00:36:40.681 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:36:40.681 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:36:40.681 "params": { 00:36:40.681 "name": "Nvme1", 00:36:40.681 "trtype": "tcp", 00:36:40.681 "traddr": "10.0.0.2", 00:36:40.682 "adrfam": "ipv4", 00:36:40.682 "trsvcid": "4420", 00:36:40.682 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:40.682 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:40.682 "hdgst": false, 00:36:40.682 "ddgst": false 00:36:40.682 }, 00:36:40.682 "method": "bdev_nvme_attach_controller" 00:36:40.682 }' 00:36:40.940 [2024-10-08 18:47:09.243310] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:36:40.940 [2024-10-08 18:47:09.243310] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:36:40.940 [2024-10-08 18:47:09.243310] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
00:36:40.940 [2024-10-08 18:47:09.243398] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ]
00:36:40.940 [2024-10-08 18:47:09.243398] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ]
00:36:40.940 [2024-10-08 18:47:09.243398] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ]
00:36:40.940 [2024-10-08 18:47:09.244144] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization...
00:36:40.940 [2024-10-08 18:47:09.244216] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ]
00:36:40.940 [2024-10-08 18:47:09.425464] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:36:41.198 [2024-10-08 18:47:09.526432] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5
00:36:41.198 [2024-10-08 18:47:09.546101] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:36:41.198 [2024-10-08 18:47:09.647317] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4
00:36:41.198 [2024-10-08 18:47:09.680249] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:36:41.456 [2024-10-08 18:47:09.737743] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:36:41.456 [2024-10-08 18:47:09.788491] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6
00:36:41.456 [2024-10-08 18:47:09.833454] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 7
00:36:41.715 Running I/O for 1 seconds... 00:36:41.715 Running I/O for 1 seconds... 00:36:41.715 Running I/O for 1 seconds... 00:36:41.973 Running I/O for 1 seconds...
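The four bdevperf jobs launched above (write on core mask 0x10, read on 0x20, flush on 0x40, unmap on 0x80) each read an NVMe-oF attach config over /dev/fd/63, i.e. through bash process substitution. A minimal sketch of that pattern for the write job follows; the paths, NQNs and 10.0.0.2:4420 come from the trace, while the JSON wrapper around the bdev_nvme_attach_controller call is an assumption standing in for the harness's gen_nvmf_target_json helper, which is not shown in this excerpt.

#!/usr/bin/env bash
# Sketch only: reproduces the bdevperf invocation pattern seen in the trace above.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Approximation of gen_nvmf_target_json: wrap the attach call printed by the
# trace in the usual SPDK JSON-config "subsystems"/"bdev" shape (assumed here).
gen_target_json() {
cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
}

# Core mask 0x10, shm id 1, queue depth 128, 4 KiB I/O, 1 s run, 256 MB hugepage memory;
# the <(...) process substitution is what appears as --json /dev/fd/63 in the trace.
"$SPDK/build/examples/bdevperf" -m 0x10 -i 1 --json <(gen_target_json) \
  -q 128 -o 4096 -w write -t 1 -s 256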
00:36:42.797 10300.00 IOPS, 40.23 MiB/s 00:36:42.797 Latency(us) 00:36:42.797 [2024-10-08T16:47:11.334Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:42.797 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:36:42.797 Nvme1n1 : 1.01 10362.13 40.48 0.00 0.00 12304.86 5388.52 14563.56 00:36:42.797 [2024-10-08T16:47:11.334Z] =================================================================================================================== 00:36:42.797 [2024-10-08T16:47:11.334Z] Total : 10362.13 40.48 0.00 0.00 12304.86 5388.52 14563.56 00:36:42.797 8981.00 IOPS, 35.08 MiB/s 00:36:42.797 Latency(us) 00:36:42.797 [2024-10-08T16:47:11.334Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:42.797 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:36:42.797 Nvme1n1 : 1.01 9032.16 35.28 0.00 0.00 14106.58 5170.06 18544.26 00:36:42.797 [2024-10-08T16:47:11.334Z] =================================================================================================================== 00:36:42.797 [2024-10-08T16:47:11.334Z] Total : 9032.16 35.28 0.00 0.00 14106.58 5170.06 18544.26 00:36:42.797 200944.00 IOPS, 784.94 MiB/s 00:36:42.797 Latency(us) 00:36:42.797 [2024-10-08T16:47:11.334Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:42.797 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:36:42.797 Nvme1n1 : 1.00 200569.11 783.47 0.00 0.00 634.73 306.44 1868.99 00:36:42.797 [2024-10-08T16:47:11.334Z] =================================================================================================================== 00:36:42.797 [2024-10-08T16:47:11.334Z] Total : 200569.11 783.47 0.00 0.00 634.73 306.44 1868.99 00:36:42.797 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1374758 00:36:43.056 9966.00 IOPS, 38.93 MiB/s 00:36:43.056 Latency(us) 00:36:43.056 [2024-10-08T16:47:11.593Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:43.056 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:36:43.056 Nvme1n1 : 1.01 10045.79 39.24 0.00 0.00 12699.66 2609.30 19126.80 00:36:43.056 [2024-10-08T16:47:11.593Z] =================================================================================================================== 00:36:43.056 [2024-10-08T16:47:11.593Z] Total : 10045.79 39.24 0.00 0.00 12699.66 2609.30 19126.80 00:36:43.056 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1374760 00:36:43.056 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1374763 00:36:43.316 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:43.317 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:43.317 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:36:43.317 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:43.317 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:36:43.317 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:36:43.317 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # nvmfcleanup 00:36:43.317 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:36:43.317 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:43.317 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:36:43.317 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:43.317 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:43.317 rmmod nvme_tcp 00:36:43.317 rmmod nvme_fabrics 00:36:43.317 rmmod nvme_keyring 00:36:43.317 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:43.317 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:36:43.317 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:36:43.317 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 1374661 ']' 00:36:43.317 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 1374661 00:36:43.317 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 1374661 ']' 00:36:43.317 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 1374661 00:36:43.317 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:36:43.317 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:43.317 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1374661 00:36:43.317 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:43.317 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:43.317 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1374661' 00:36:43.317 killing process with pid 1374661 00:36:43.317 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 1374661 00:36:43.317 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 1374661 00:36:43.888 18:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:36:43.888 18:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:36:43.888 18:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:36:43.888 18:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:36:43.888 18:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 
00:36:43.888 18:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:36:43.888 18:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore 00:36:43.888 18:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:43.888 18:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:43.888 18:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:43.888 18:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:43.888 18:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:45.795 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:45.795 00:36:45.795 real 0m8.589s 00:36:45.795 user 0m17.083s 00:36:45.795 sys 0m5.173s 00:36:45.795 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:45.795 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:36:45.795 ************************************ 00:36:45.795 END TEST nvmf_bdev_io_wait 00:36:45.795 ************************************ 00:36:45.795 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:36:45.795 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:36:45.795 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:45.795 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:45.795 ************************************ 00:36:45.795 START TEST nvmf_queue_depth 00:36:45.795 ************************************ 00:36:45.795 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:36:46.055 * Looking for test storage... 
00:36:46.055 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:46.055 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:36:46.055 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:36:46.055 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:36:46.055 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:36:46.055 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:46.055 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:46.055 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:46.055 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:36:46.055 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:36:46.055 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:36:46.055 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:36:46.055 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:36:46.055 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:36:46.055 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:36:46.055 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:46.055 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:36:46.055 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:36:46.055 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:36:46.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:46.056 --rc genhtml_branch_coverage=1 00:36:46.056 --rc genhtml_function_coverage=1 00:36:46.056 --rc genhtml_legend=1 00:36:46.056 --rc geninfo_all_blocks=1 00:36:46.056 --rc geninfo_unexecuted_blocks=1 00:36:46.056 00:36:46.056 ' 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:36:46.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:46.056 --rc genhtml_branch_coverage=1 00:36:46.056 --rc genhtml_function_coverage=1 00:36:46.056 --rc genhtml_legend=1 00:36:46.056 --rc geninfo_all_blocks=1 00:36:46.056 --rc geninfo_unexecuted_blocks=1 00:36:46.056 00:36:46.056 ' 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:36:46.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:46.056 --rc genhtml_branch_coverage=1 00:36:46.056 --rc genhtml_function_coverage=1 00:36:46.056 --rc genhtml_legend=1 00:36:46.056 --rc geninfo_all_blocks=1 00:36:46.056 --rc geninfo_unexecuted_blocks=1 00:36:46.056 00:36:46.056 ' 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:36:46.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:46.056 --rc genhtml_branch_coverage=1 00:36:46.056 --rc genhtml_function_coverage=1 00:36:46.056 --rc genhtml_legend=1 00:36:46.056 --rc geninfo_all_blocks=1 00:36:46.056 --rc 
geninfo_unexecuted_blocks=1 00:36:46.056 00:36:46.056 ' 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:36:46.056 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:49.352 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:49.352 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:36:49.352 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:49.352 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:49.352 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:49.352 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:49.353 18:47:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:36:49.353 Found 0000:84:00.0 (0x8086 - 0x159b) 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:36:49.353 Found 0000:84:00.1 (0x8086 - 0x159b) 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 
00:36:49.353 Found net devices under 0000:84:00.0: cvl_0_0 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:36:49.353 Found net devices under 0000:84:00.1: cvl_0_1 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:49.353 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:49.353 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.313 ms 00:36:49.353 00:36:49.353 --- 10.0.0.2 ping statistics --- 00:36:49.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:49.353 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:36:49.353 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:49.353 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:49.353 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:36:49.353 00:36:49.353 --- 10.0.0.1 ping statistics --- 00:36:49.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:49.354 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:36:49.354 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:49.354 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0 00:36:49.354 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:36:49.354 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:49.354 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:36:49.354 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:36:49.354 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:49.354 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:36:49.354 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:36:49.354 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:36:49.354 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:36:49.354 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:49.354 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:49.354 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=1377123 00:36:49.354 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 1377123 00:36:49.354 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1377123 ']' 00:36:49.354 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:49.354 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:49.354 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:36:49.354 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:49.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:36:49.354 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:49.354 18:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:49.354 [2024-10-08 18:47:17.629340] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:49.354 [2024-10-08 18:47:17.630665] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:36:49.354 [2024-10-08 18:47:17.630750] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:49.354 [2024-10-08 18:47:17.747801] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:49.612 [2024-10-08 18:47:17.964136] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:49.612 [2024-10-08 18:47:17.964248] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:49.612 [2024-10-08 18:47:17.964301] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:49.612 [2024-10-08 18:47:17.964332] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:49.612 [2024-10-08 18:47:17.964359] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:49.612 [2024-10-08 18:47:17.965621] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:36:49.612 [2024-10-08 18:47:18.126703] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:49.612 [2024-10-08 18:47:18.127415] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
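Stripped of the xtrace noise, the target for the queue-depth test is brought up by moving one of the two e810 ports into a private network namespace and starting nvmf_tgt there in interrupt mode. A condensed sketch of the commands traced above, with interface names, addresses and flags taken verbatim from the log (run as root, flushes and error handling omitted):

# Target-side port goes into its own namespace; the initiator-side port stays in the host.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Initiator gets 10.0.0.1, target gets 10.0.0.2, both /24, plus loopback in the namespace.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Allow NVMe/TCP (port 4420) in on the initiator-facing interface.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Launch the NVMe-oF target inside the namespace: interrupt mode, core mask 0x2 (core 1).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &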
00:36:50.993 18:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:50.993 18:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:36:50.993 18:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:36:50.993 18:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:50.993 18:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:50.993 18:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:50.993 18:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:50.993 18:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:50.993 18:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:50.993 [2024-10-08 18:47:19.142819] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:50.993 18:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:50.993 18:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:50.993 18:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:50.993 18:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:50.993 Malloc0 00:36:50.993 18:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:50.993 18:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:50.993 18:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:50.993 18:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:50.993 18:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:50.993 18:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:50.993 18:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:50.993 18:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:50.993 18:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:50.993 18:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:50.993 18:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 
00:36:50.993 18:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:50.993 [2024-10-08 18:47:19.251099] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:50.993 18:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:50.993 18:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:36:50.993 18:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1377286 00:36:50.993 18:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:50.993 18:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1377286 /var/tmp/bdevperf.sock 00:36:50.993 18:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1377286 ']' 00:36:50.993 18:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:36:50.993 18:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:50.993 18:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:36:50.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:36:50.993 18:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:50.993 18:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:50.993 [2024-10-08 18:47:19.314137] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
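With the target listening for RPCs, the test creates the TCP transport, a 64 MiB malloc bdev with 512-byte blocks, and a subsystem that exports it on 10.0.0.2:4420, then starts bdevperf in RPC-server mode (-z) so the verify job can be driven over /var/tmp/bdevperf.sock. A sketch of that sequence using the RPC names and arguments from the trace above; rpc.py is the stock SPDK script that the harness's rpc_cmd wraps, and its path here is an assumption.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"   # assumed location; the excerpt only shows the rpc_cmd wrapper

# Target side: transport, backing bdev, subsystem, namespace, listener.
"$RPC" nvmf_create_transport -t tcp -o -u 8192
"$RPC" bdev_malloc_create 64 512 -b Malloc0
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: bdevperf idles on its own RPC socket (-z) with queue depth 1024,
# 4 KiB I/O, verify workload, 10 s run; the controller attach and perform_tests call
# are then issued against /var/tmp/bdevperf.sock, as the trace that follows shows.
"$SPDK/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &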
00:36:50.993 [2024-10-08 18:47:19.314247] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1377286 ] 00:36:50.993 [2024-10-08 18:47:19.418115] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:51.252 [2024-10-08 18:47:19.624562] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:36:52.631 18:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:52.631 18:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:36:52.631 18:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:36:52.631 18:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:52.631 18:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:52.631 NVMe0n1 00:36:52.631 18:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:52.631 18:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:36:52.631 Running I/O for 10 seconds... 00:36:54.505 3213.00 IOPS, 12.55 MiB/s [2024-10-08T16:47:23.978Z] 3584.00 IOPS, 14.00 MiB/s [2024-10-08T16:47:25.354Z] 4100.67 IOPS, 16.02 MiB/s [2024-10-08T16:47:26.290Z] 4743.00 IOPS, 18.53 MiB/s [2024-10-08T16:47:27.225Z] 4529.20 IOPS, 17.69 MiB/s [2024-10-08T16:47:28.162Z] 4862.33 IOPS, 18.99 MiB/s [2024-10-08T16:47:29.097Z] 4821.57 IOPS, 18.83 MiB/s [2024-10-08T16:47:30.034Z] 4626.12 IOPS, 18.07 MiB/s [2024-10-08T16:47:31.412Z] 4551.33 IOPS, 17.78 MiB/s [2024-10-08T16:47:31.412Z] 4493.70 IOPS, 17.55 MiB/s 00:37:02.875 Latency(us) 00:37:02.875 [2024-10-08T16:47:31.412Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:02.875 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:37:02.875 Verification LBA range: start 0x0 length 0x4000 00:37:02.875 NVMe0n1 : 10.24 4483.93 17.52 0.00 0.00 226032.33 51652.08 166218.71 00:37:02.875 [2024-10-08T16:47:31.412Z] =================================================================================================================== 00:37:02.875 [2024-10-08T16:47:31.412Z] Total : 4483.93 17.52 0.00 0.00 226032.33 51652.08 166218.71 00:37:02.875 { 00:37:02.875 "results": [ 00:37:02.875 { 00:37:02.875 "job": "NVMe0n1", 00:37:02.875 "core_mask": "0x1", 00:37:02.875 "workload": "verify", 00:37:02.875 "status": "finished", 00:37:02.875 "verify_range": { 00:37:02.875 "start": 0, 00:37:02.875 "length": 16384 00:37:02.875 }, 00:37:02.875 "queue_depth": 1024, 00:37:02.875 "io_size": 4096, 00:37:02.875 "runtime": 10.236322, 00:37:02.875 "iops": 4483.934757034802, 00:37:02.875 "mibps": 17.515370144667195, 00:37:02.875 "io_failed": 0, 00:37:02.875 "io_timeout": 0, 00:37:02.875 "avg_latency_us": 226032.3346896447, 00:37:02.875 "min_latency_us": 51652.07703703704, 00:37:02.875 "max_latency_us": 166218.7140740741 00:37:02.875 } 00:37:02.875 
], 00:37:02.875 "core_count": 1 00:37:02.875 } 00:37:02.875 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1377286 00:37:02.875 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1377286 ']' 00:37:02.875 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1377286 00:37:02.875 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:37:02.875 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:02.875 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1377286 00:37:02.875 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:02.875 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:02.875 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1377286' 00:37:02.875 killing process with pid 1377286 00:37:02.875 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1377286 00:37:02.875 Received shutdown signal, test time was about 10.000000 seconds 00:37:02.875 00:37:02.875 Latency(us) 00:37:02.875 [2024-10-08T16:47:31.412Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:02.875 [2024-10-08T16:47:31.412Z] =================================================================================================================== 00:37:02.875 [2024-10-08T16:47:31.412Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:02.875 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1377286 00:37:03.134 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:37:03.134 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:37:03.134 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:37:03.134 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:37:03.134 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:03.134 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:37:03.134 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:03.134 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:03.134 rmmod nvme_tcp 00:37:03.393 rmmod nvme_fabrics 00:37:03.393 rmmod nvme_keyring 00:37:03.393 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:03.393 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:37:03.393 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:37:03.393 18:47:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 1377123 ']' 00:37:03.393 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 1377123 00:37:03.393 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1377123 ']' 00:37:03.393 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1377123 00:37:03.393 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:37:03.393 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:03.393 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1377123 00:37:03.393 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:03.393 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:03.393 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1377123' 00:37:03.393 killing process with pid 1377123 00:37:03.393 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1377123 00:37:03.393 18:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1377123 00:37:03.652 18:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:37:03.652 18:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:37:03.652 18:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:37:03.652 18:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:37:03.652 18:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:37:03.652 18:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 00:37:03.652 18:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:37:03.652 18:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:03.652 18:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:03.653 18:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:03.653 18:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:03.653 18:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:06.186 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:06.186 00:37:06.186 real 0m19.803s 00:37:06.186 user 0m26.258s 00:37:06.186 sys 0m4.932s 00:37:06.186 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:37:06.186 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:37:06.186 ************************************ 00:37:06.186 END TEST nvmf_queue_depth 00:37:06.186 ************************************ 00:37:06.186 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:37:06.186 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:37:06.186 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:06.186 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:06.186 ************************************ 00:37:06.186 START TEST nvmf_target_multipath 00:37:06.186 ************************************ 00:37:06.186 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:37:06.186 * Looking for test storage... 00:37:06.186 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:06.186 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:06.186 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:06.186 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:37:06.186 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:06.186 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:06.186 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:06.186 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:06.186 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:37:06.186 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:37:06.186 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:37:06.186 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:37:06.186 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:37:06.186 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:37:06.186 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:37:06.186 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:06.186 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:37:06.186 18:47:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:37:06.186 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:06.187 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:06.187 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:37:06.187 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:37:06.187 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:06.187 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:37:06.187 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:37:06.187 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:37:06.187 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:37:06.187 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:06.187 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:37:06.187 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:37:06.187 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:06.187 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:06.187 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:37:06.187 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:06.187 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:06.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:06.187 --rc genhtml_branch_coverage=1 00:37:06.187 --rc genhtml_function_coverage=1 00:37:06.187 --rc genhtml_legend=1 00:37:06.187 --rc geninfo_all_blocks=1 00:37:06.187 --rc geninfo_unexecuted_blocks=1 00:37:06.187 00:37:06.187 ' 00:37:06.187 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:37:06.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:06.187 --rc genhtml_branch_coverage=1 00:37:06.187 --rc genhtml_function_coverage=1 00:37:06.187 --rc genhtml_legend=1 00:37:06.187 --rc geninfo_all_blocks=1 00:37:06.187 --rc geninfo_unexecuted_blocks=1 00:37:06.187 00:37:06.187 ' 00:37:06.187 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:37:06.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:06.187 --rc genhtml_branch_coverage=1 00:37:06.187 --rc genhtml_function_coverage=1 00:37:06.187 --rc genhtml_legend=1 00:37:06.187 --rc geninfo_all_blocks=1 00:37:06.187 --rc 
geninfo_unexecuted_blocks=1 00:37:06.187 00:37:06.187 ' 00:37:06.187 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:06.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:06.187 --rc genhtml_branch_coverage=1 00:37:06.187 --rc genhtml_function_coverage=1 00:37:06.187 --rc genhtml_legend=1 00:37:06.187 --rc geninfo_all_blocks=1 00:37:06.187 --rc geninfo_unexecuted_blocks=1 00:37:06.187 00:37:06.187 ' 00:37:06.187 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:06.187 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:37:06.187 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:06.187 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:06.187 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:06.187 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:06.187 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:06.187 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:06.187 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:06.187 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:06.187 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:06.187 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:06.187 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:37:06.187 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:37:06.187 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:06.187 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:06.187 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:06.187 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:06.187 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:06.187 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:37:06.187 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
00:37:06.187 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:06.187 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:06.187 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:06.187 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:06.187 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:06.187 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:37:06.187 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:06.187 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:37:06.187 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:06.187 18:47:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:06.187 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:06.187 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:06.187 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:06.187 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:06.187 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:06.187 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:06.187 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:06.187 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:06.187 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:06.187 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:06.187 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:37:06.187 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:06.187 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:37:06.187 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:37:06.187 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:06.187 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:37:06.187 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:37:06.187 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:37:06.187 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:06.187 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:06.187 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:06.187 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:37:06.187 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:37:06.188 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:37:06.188 18:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 
00:37:09.478 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:09.478 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:37:09.478 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:09.478 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:09.478 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:09.478 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:09.478 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:09.478 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:37:09.478 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:09.478 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:37:09.478 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:37:09.478 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:37:09.478 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:37:09.478 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:37:09.478 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:37:09.478 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:09.478 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:09.478 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:09.478 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:09.478 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:09.478 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:09.478 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:09.478 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:09.478 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:09.478 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:09.478 18:47:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:09.478 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:09.478 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:09.478 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:09.478 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:09.478 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:09.478 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:09.478 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:37:09.479 Found 0000:84:00.0 (0x8086 - 0x159b) 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:37:09.479 Found 0000:84:00.1 (0x8086 - 0x159b) 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:09.479 18:47:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:37:09.479 Found net devices under 0000:84:00.0: cvl_0_0 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:37:09.479 Found net devices under 0000:84:00.1: cvl_0_1 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
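The "Found net devices under ..." lines above come from the harness globbing sysfs to map each e810 PCI function to its kernel netdev. A rough equivalent of that lookup, hard-coding the 0000:84:00.0/0000:84:00.1 functions seen in this run:

    for pci in 0000:84:00.0 0000:84:00.1; do
        # each PCI function lists its bound net devices under /sys/bus/pci/devices/<bdf>/net/
        for dev in /sys/bus/pci/devices/$pci/net/*; do
            echo "Found net devices under $pci: ${dev##*/}"
        done
    done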
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:09.479 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:37:09.479 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.290 ms 00:37:09.479 00:37:09.479 --- 10.0.0.2 ping statistics --- 00:37:09.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:09.479 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:09.479 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:09.479 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:37:09.479 00:37:09.479 --- 10.0.0.1 ping statistics --- 00:37:09.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:09.479 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:37:09.479 only one NIC for nvmf test 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:09.479 rmmod nvme_tcp 00:37:09.479 rmmod nvme_fabrics 00:37:09.479 rmmod nvme_keyring 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:37:09.479 18:47:37 
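The nvmf_tcp_init sequence above (namespace creation, address assignment, iptables rule, ping checks) can be read as the following standalone sketch, using the interface names and addresses from this run: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the 10.0.0.2 target side, while cvl_0_1 stays in the root namespace as the 10.0.0.1 initiator side. The iptables comment tag is shortened here; the harness records the full rule text in the comment so cleanup can strip it later.

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target-side NIC lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # let the NVMe/TCP listener port through on the initiator interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.2                                    # root namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target namespace -> root namespace

The multipath test then bails out with "only one NIC for nvmf test" because it needs a second initiator interface, which this rig does not provide.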
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:37:09.479 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:37:09.480 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:37:09.480 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:37:09.480 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:37:09.480 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:09.480 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:09.480 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:09.480 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:09.480 18:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:11.382 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:11.382 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:37:11.382 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:37:11.382 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:37:11.382 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:37:11.382 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:11.382 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:37:11.382 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:11.382 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:11.382 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:11.382 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:37:11.382 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:37:11.382 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:37:11.382 18:47:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:37:11.382 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:37:11.382 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:37:11.382 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:37:11.382 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:37:11.383 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:37:11.383 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:37:11.383 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:11.383 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:11.383 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:11.383 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:11.383 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:11.383 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:11.383 00:37:11.383 real 0m5.527s 00:37:11.383 user 0m1.155s 00:37:11.383 sys 0m2.384s 00:37:11.383 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:11.383 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:37:11.383 ************************************ 00:37:11.383 END TEST nvmf_target_multipath 00:37:11.383 ************************************ 00:37:11.383 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:37:11.383 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:37:11.383 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:11.383 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:11.383 ************************************ 00:37:11.383 START TEST nvmf_zcopy 00:37:11.383 ************************************ 00:37:11.383 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:37:11.383 * Looking for test storage... 
00:37:11.383 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:11.383 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:11.383 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:37:11.383 18:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:11.643 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:11.643 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:11.643 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:11.643 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:11.643 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:37:11.643 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:37:11.643 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:37:11.643 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:37:11.643 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:37:11.643 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:37:11.643 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:37:11.643 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:11.643 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:37:11.643 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:37:11.643 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:11.643 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:11.643 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:37:11.643 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:37:11.643 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:11.643 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:37:11.643 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:37:11.643 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:37:11.643 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:37:11.643 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:11.643 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:37:11.643 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:37:11.643 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:11.643 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:11.643 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:37:11.643 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:11.643 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:11.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:11.643 --rc genhtml_branch_coverage=1 00:37:11.643 --rc genhtml_function_coverage=1 00:37:11.643 --rc genhtml_legend=1 00:37:11.643 --rc geninfo_all_blocks=1 00:37:11.643 --rc geninfo_unexecuted_blocks=1 00:37:11.643 00:37:11.643 ' 00:37:11.643 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:37:11.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:11.643 --rc genhtml_branch_coverage=1 00:37:11.643 --rc genhtml_function_coverage=1 00:37:11.643 --rc genhtml_legend=1 00:37:11.643 --rc geninfo_all_blocks=1 00:37:11.643 --rc geninfo_unexecuted_blocks=1 00:37:11.643 00:37:11.643 ' 00:37:11.643 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:37:11.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:11.643 --rc genhtml_branch_coverage=1 00:37:11.643 --rc genhtml_function_coverage=1 00:37:11.643 --rc genhtml_legend=1 00:37:11.643 --rc geninfo_all_blocks=1 00:37:11.643 --rc geninfo_unexecuted_blocks=1 00:37:11.643 00:37:11.643 ' 00:37:11.643 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:11.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:11.643 --rc genhtml_branch_coverage=1 00:37:11.643 --rc genhtml_function_coverage=1 00:37:11.643 --rc genhtml_legend=1 00:37:11.643 --rc geninfo_all_blocks=1 00:37:11.643 --rc geninfo_unexecuted_blocks=1 00:37:11.643 00:37:11.643 ' 00:37:11.643 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:11.643 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:37:11.643 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:11.643 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:11.643 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:11.643 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:11.643 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:11.643 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:11.643 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:11.643 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:11.643 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:11.643 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:11.643 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:37:11.643 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:37:11.643 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:11.643 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:11.643 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:11.643 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:11.643 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:11.643 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:37:11.643 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:11.643 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:11.643 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:11.643 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:11.643 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:11.643 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:11.643 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:37:11.643 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:11.643 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:37:11.643 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:11.643 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:11.643 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:11.644 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:11.644 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:11.644 18:47:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:11.644 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:11.644 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:11.644 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:11.644 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:11.644 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:37:11.644 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:37:11.644 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:11.644 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:37:11.644 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:37:11.644 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:37:11.644 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:11.644 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:11.644 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:11.644 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:37:11.644 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:37:11.644 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:37:11.644 18:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:37:14.933 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:14.933 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:37:14.933 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:14.933 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:14.933 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:14.933 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:14.933 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:14.933 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:37:14.933 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:14.933 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:37:14.933 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:37:14.933 18:47:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:37:14.933 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:37:14.933 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:37:14.933 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:37:14.933 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:14.933 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:14.933 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:14.933 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:14.933 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:14.933 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:14.933 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:14.933 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:14.933 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:14.933 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:14.933 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:14.933 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:14.933 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:14.933 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:14.933 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:14.933 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:14.933 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:37:14.934 Found 0000:84:00.0 (0x8086 - 0x159b) 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:37:14.934 Found 0000:84:00.1 (0x8086 - 0x159b) 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:37:14.934 Found net devices under 0000:84:00.0: cvl_0_0 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:37:14.934 Found net devices under 0000:84:00.1: cvl_0_1 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:14.934 18:47:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:14.934 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:14.934 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:37:14.934 00:37:14.934 --- 10.0.0.2 ping statistics --- 00:37:14.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:14.934 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:14.934 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:14.934 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:37:14.934 00:37:14.934 --- 10.0.0.1 ping statistics --- 00:37:14.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:14.934 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=1382808 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF --interrupt-mode -m 0x2 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 1382808 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 1382808 ']' 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:14.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:14.934 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:37:14.934 [2024-10-08 18:47:43.374390] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:14.934 [2024-10-08 18:47:43.375772] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:37:14.934 [2024-10-08 18:47:43.375836] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:14.934 [2024-10-08 18:47:43.452587] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:15.193 [2024-10-08 18:47:43.577812] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:15.193 [2024-10-08 18:47:43.577895] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:15.193 [2024-10-08 18:47:43.577911] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:15.193 [2024-10-08 18:47:43.577925] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:15.193 [2024-10-08 18:47:43.577937] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:15.193 [2024-10-08 18:47:43.578635] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:37:15.193 [2024-10-08 18:47:43.721017] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:15.193 [2024-10-08 18:47:43.721741] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
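Condensed for anyone replaying this bring-up by hand: the trace above moves the target-side E810 port (cvl_0_0) into its own network namespace, keeps cvl_0_1 in the default namespace as the initiator, assigns the 10.0.0.0/24 test addresses, opens TCP port 4420, and then launches nvmf_tgt inside the namespace in interrupt mode pinned to core 1 (-m 0x2). The commands below are a condensed sketch taken from that trace; paths are relative to the SPDK build tree and the iptables comment tag used by the ipts helper is omitted:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target NIC into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # allow NVMe/TCP traffic in
    modprobe nvme-tcp
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2

The two ping checks recorded above (10.0.0.2 from the default namespace, 10.0.0.1 from inside cvl_0_0_ns_spdk) confirm the cross-namespace path before the target is started.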
00:37:15.452 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:15.453 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:37:15.453 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:37:15.453 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:15.453 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:37:15.453 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:15.453 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:37:15.453 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:37:15.453 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:15.453 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:37:15.453 [2024-10-08 18:47:43.787529] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:15.453 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:15.453 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:37:15.453 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:15.453 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:37:15.453 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:15.453 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:15.453 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:15.453 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:37:15.453 [2024-10-08 18:47:43.815901] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:15.453 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:15.453 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:15.453 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:15.453 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:37:15.453 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:15.453 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:37:15.453 18:47:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:15.453 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:37:15.453 malloc0 00:37:15.453 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:15.453 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:37:15.453 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:15.453 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:37:15.453 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:15.453 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:37:15.453 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:37:15.453 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:37:15.453 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:37:15.453 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:37:15.453 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:37:15.453 { 00:37:15.453 "params": { 00:37:15.453 "name": "Nvme$subsystem", 00:37:15.453 "trtype": "$TEST_TRANSPORT", 00:37:15.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:15.453 "adrfam": "ipv4", 00:37:15.453 "trsvcid": "$NVMF_PORT", 00:37:15.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:15.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:15.453 "hdgst": ${hdgst:-false}, 00:37:15.453 "ddgst": ${ddgst:-false} 00:37:15.453 }, 00:37:15.453 "method": "bdev_nvme_attach_controller" 00:37:15.453 } 00:37:15.453 EOF 00:37:15.453 )") 00:37:15.453 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:37:15.453 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:37:15.453 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:37:15.453 18:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:37:15.453 "params": { 00:37:15.453 "name": "Nvme1", 00:37:15.453 "trtype": "tcp", 00:37:15.453 "traddr": "10.0.0.2", 00:37:15.453 "adrfam": "ipv4", 00:37:15.453 "trsvcid": "4420", 00:37:15.453 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:15.453 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:15.453 "hdgst": false, 00:37:15.453 "ddgst": false 00:37:15.453 }, 00:37:15.453 "method": "bdev_nvme_attach_controller" 00:37:15.453 }' 00:37:15.453 [2024-10-08 18:47:43.944735] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
00:37:15.453 [2024-10-08 18:47:43.944834] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1382895 ] 00:37:15.714 [2024-10-08 18:47:44.073622] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:15.974 [2024-10-08 18:47:44.270847] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:37:16.232 Running I/O for 10 seconds... 00:37:18.101 2755.00 IOPS, 21.52 MiB/s [2024-10-08T16:47:47.622Z] 2585.00 IOPS, 20.20 MiB/s [2024-10-08T16:47:48.562Z] 3049.00 IOPS, 23.82 MiB/s [2024-10-08T16:47:49.938Z] 3389.00 IOPS, 26.48 MiB/s [2024-10-08T16:47:50.873Z] 3187.40 IOPS, 24.90 MiB/s [2024-10-08T16:47:51.808Z] 3294.67 IOPS, 25.74 MiB/s [2024-10-08T16:47:52.742Z] 3261.86 IOPS, 25.48 MiB/s [2024-10-08T16:47:53.677Z] 3149.00 IOPS, 24.60 MiB/s [2024-10-08T16:47:54.612Z] 3079.67 IOPS, 24.06 MiB/s [2024-10-08T16:47:54.612Z] 3016.40 IOPS, 23.57 MiB/s 00:37:26.075 Latency(us) 00:37:26.075 [2024-10-08T16:47:54.612Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:26.075 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:37:26.075 Verification LBA range: start 0x0 length 0x1000 00:37:26.075 Nvme1n1 : 10.04 3017.09 23.57 0.00 0.00 42268.78 5437.06 57089.14 00:37:26.075 [2024-10-08T16:47:54.612Z] =================================================================================================================== 00:37:26.075 [2024-10-08T16:47:54.612Z] Total : 3017.09 23.57 0.00 0.00 42268.78 5437.06 57089.14 00:37:26.643 18:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1384110 00:37:26.643 18:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:37:26.643 18:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:37:26.643 18:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:37:26.643 18:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:37:26.643 18:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:37:26.643 18:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:37:26.643 18:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:37:26.643 18:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:37:26.643 { 00:37:26.643 "params": { 00:37:26.643 "name": "Nvme$subsystem", 00:37:26.643 "trtype": "$TEST_TRANSPORT", 00:37:26.643 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:26.643 "adrfam": "ipv4", 00:37:26.643 "trsvcid": "$NVMF_PORT", 00:37:26.643 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:26.643 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:26.643 "hdgst": ${hdgst:-false}, 00:37:26.643 "ddgst": ${ddgst:-false} 00:37:26.643 }, 00:37:26.643 "method": "bdev_nvme_attach_controller" 00:37:26.643 } 00:37:26.643 EOF 00:37:26.643 )") 00:37:26.643 18:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:37:26.643 
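The target objects in the trace above are created purely through rpc_cmd, the autotest helper that issues SPDK JSON-RPC calls to the application listening on /var/tmp/spdk.sock, and the first 10-second verify workload is driven by bdevperf, which receives its NVMe-oF controller definition as JSON on an anonymous descriptor (/dev/fd/62). Below is a rough standalone equivalent: the RPC arguments and the bdev_nvme_attach_controller parameters are copied from the trace, while scripts/rpc.py standing in for rpc_cmd, the temporary file standing in for the anonymous descriptor, and the outer "subsystems"/"bdev" wrapper are assumptions about what gen_nvmf_target_json emits rather than something shown verbatim in this log:

    # From the SPDK tree, against the nvmf_tgt started above (same /var/tmp/spdk.sock socket):
    scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

    # Controller definition for bdevperf; the inner params block is the JSON printed in the trace,
    # the surrounding bdev-subsystem wrapper is the assumed shape produced by gen_nvmf_target_json.
    cat > /tmp/bdevperf.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              },
              "method": "bdev_nvme_attach_controller"
            }
          ]
        }
      ]
    }
    EOF

    # First pass as recorded above: 10 s verify workload, queue depth 128, 8192-byte I/O
    build/examples/bdevperf --json /tmp/bdevperf.json -t 10 -q 128 -w verify -o 8192

The second bdevperf invocation that begins in the trace below reuses the same controller definition but switches to a 5-second 50/50 randrw workload (-t 5 -w randrw -M 50) while the script exercises namespace add/pause RPCs against the live subsystem, which is why the repeated "Requested NSID 1 already in use" / "Unable to add namespace" messages follow.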
[2024-10-08 18:47:54.963241] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:26.643 [2024-10-08 18:47:54.963288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:26.643 18:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:37:26.643 18:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:37:26.643 18:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:37:26.643 "params": { 00:37:26.643 "name": "Nvme1", 00:37:26.643 "trtype": "tcp", 00:37:26.643 "traddr": "10.0.0.2", 00:37:26.643 "adrfam": "ipv4", 00:37:26.643 "trsvcid": "4420", 00:37:26.643 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:26.643 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:26.643 "hdgst": false, 00:37:26.643 "ddgst": false 00:37:26.643 }, 00:37:26.643 "method": "bdev_nvme_attach_controller" 00:37:26.643 }' 00:37:26.643 [2024-10-08 18:47:54.971165] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:26.643 [2024-10-08 18:47:54.971192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:26.643 [2024-10-08 18:47:54.979162] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:26.643 [2024-10-08 18:47:54.979186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:26.643 [2024-10-08 18:47:54.987161] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:26.643 [2024-10-08 18:47:54.987194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:26.643 [2024-10-08 18:47:54.995162] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:26.643 [2024-10-08 18:47:54.995185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:26.643 [2024-10-08 18:47:55.003161] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:26.643 [2024-10-08 18:47:55.003185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:26.643 [2024-10-08 18:47:55.011161] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:26.643 [2024-10-08 18:47:55.011185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:26.643 [2024-10-08 18:47:55.011747] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
00:37:26.643 [2024-10-08 18:47:55.011834] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1384110 ] 00:37:26.643 [2024-10-08 18:47:55.019161] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:26.643 [2024-10-08 18:47:55.019186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:26.643 [2024-10-08 18:47:55.027161] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:26.643 [2024-10-08 18:47:55.027185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:26.643 [2024-10-08 18:47:55.035162] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:26.643 [2024-10-08 18:47:55.035185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:26.643 [2024-10-08 18:47:55.043160] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:26.643 [2024-10-08 18:47:55.043183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:26.643 [2024-10-08 18:47:55.051161] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:26.643 [2024-10-08 18:47:55.051184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:26.643 [2024-10-08 18:47:55.059160] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:26.643 [2024-10-08 18:47:55.059183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:26.643 [2024-10-08 18:47:55.067161] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:26.643 [2024-10-08 18:47:55.067183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:26.643 [2024-10-08 18:47:55.075161] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:26.643 [2024-10-08 18:47:55.075184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:26.643 [2024-10-08 18:47:55.083160] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:26.643 [2024-10-08 18:47:55.083182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:26.643 [2024-10-08 18:47:55.085392] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:26.643 [2024-10-08 18:47:55.091178] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:26.643 [2024-10-08 18:47:55.091208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:26.643 [2024-10-08 18:47:55.099193] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:26.643 [2024-10-08 18:47:55.099231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:26.643 [2024-10-08 18:47:55.107163] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:26.643 [2024-10-08 18:47:55.107186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:26.643 [2024-10-08 18:47:55.115168] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:26.643 [2024-10-08 18:47:55.115193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:37:26.643 [2024-10-08 18:47:55.123162] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:26.643 [2024-10-08 18:47:55.123186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:26.643 [2024-10-08 18:47:55.131162] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:26.643 [2024-10-08 18:47:55.131185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:26.643 [2024-10-08 18:47:55.139147] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:26.643 [2024-10-08 18:47:55.139165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:26.643 [2024-10-08 18:47:55.147146] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:26.643 [2024-10-08 18:47:55.147166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:26.643 [2024-10-08 18:47:55.155178] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:26.643 [2024-10-08 18:47:55.155212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:26.643 [2024-10-08 18:47:55.163167] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:26.643 [2024-10-08 18:47:55.163196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:26.643 [2024-10-08 18:47:55.171147] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:26.643 [2024-10-08 18:47:55.171166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:26.643 [2024-10-08 18:47:55.179147] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:26.643 [2024-10-08 18:47:55.179166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:26.902 [2024-10-08 18:47:55.187146] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:26.902 [2024-10-08 18:47:55.187166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:26.902 [2024-10-08 18:47:55.195146] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:26.902 [2024-10-08 18:47:55.195166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:26.902 [2024-10-08 18:47:55.203148] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:26.902 [2024-10-08 18:47:55.203167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:26.902 [2024-10-08 18:47:55.206784] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:37:26.902 [2024-10-08 18:47:55.211147] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:26.902 [2024-10-08 18:47:55.211167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:26.902 [2024-10-08 18:47:55.219149] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:26.902 [2024-10-08 18:47:55.219168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:26.902 [2024-10-08 18:47:55.227180] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:26.902 [2024-10-08 18:47:55.227210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:26.902 [2024-10-08 
18:47:55.235178] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:26.902 [2024-10-08 18:47:55.235212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:26.902 [2024-10-08 18:47:55.243182] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:26.902 [2024-10-08 18:47:55.243219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:26.902 [2024-10-08 18:47:55.251180] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:26.902 [2024-10-08 18:47:55.251216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:26.902 [2024-10-08 18:47:55.259175] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:26.902 [2024-10-08 18:47:55.259207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:26.902 [2024-10-08 18:47:55.267183] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:26.902 [2024-10-08 18:47:55.267228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:26.902 [2024-10-08 18:47:55.275155] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:26.902 [2024-10-08 18:47:55.275179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:26.902 [2024-10-08 18:47:55.283171] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:26.902 [2024-10-08 18:47:55.283202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:26.902 [2024-10-08 18:47:55.291182] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:26.902 [2024-10-08 18:47:55.291215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:26.902 [2024-10-08 18:47:55.299182] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:26.902 [2024-10-08 18:47:55.299217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:26.902 [2024-10-08 18:47:55.307151] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:26.902 [2024-10-08 18:47:55.307173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:26.902 [2024-10-08 18:47:55.315152] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:26.902 [2024-10-08 18:47:55.315173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:26.902 [2024-10-08 18:47:55.323154] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:26.902 [2024-10-08 18:47:55.323177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:26.902 [2024-10-08 18:47:55.331154] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:26.902 [2024-10-08 18:47:55.331178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:26.902 [2024-10-08 18:47:55.339152] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:26.902 [2024-10-08 18:47:55.339174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:26.902 [2024-10-08 18:47:55.347152] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:26.902 [2024-10-08 18:47:55.347174] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:26.902 [2024-10-08 18:47:55.355158] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:26.902 [2024-10-08 18:47:55.355192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:26.902 [2024-10-08 18:47:55.363154] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:26.902 [2024-10-08 18:47:55.363184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:26.902 [2024-10-08 18:47:55.371148] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:26.902 [2024-10-08 18:47:55.371168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:26.902 [2024-10-08 18:47:55.379146] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:26.902 [2024-10-08 18:47:55.379165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:26.902 [2024-10-08 18:47:55.387146] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:26.902 [2024-10-08 18:47:55.387166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:26.902 [2024-10-08 18:47:55.395146] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:26.902 [2024-10-08 18:47:55.395165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:26.902 [2024-10-08 18:47:55.403150] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:26.902 [2024-10-08 18:47:55.403170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:26.902 [2024-10-08 18:47:55.411149] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:26.902 [2024-10-08 18:47:55.411171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:26.902 [2024-10-08 18:47:55.419146] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:26.902 [2024-10-08 18:47:55.419173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:26.902 [2024-10-08 18:47:55.427147] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:26.902 [2024-10-08 18:47:55.427166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:26.902 [2024-10-08 18:47:55.435147] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:26.902 [2024-10-08 18:47:55.435167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.161 [2024-10-08 18:47:55.443146] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.161 [2024-10-08 18:47:55.443166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.161 [2024-10-08 18:47:55.451151] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.161 [2024-10-08 18:47:55.451173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.161 [2024-10-08 18:47:55.459147] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.161 [2024-10-08 18:47:55.459167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.161 [2024-10-08 18:47:55.467146] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.161 [2024-10-08 18:47:55.467165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.161 [2024-10-08 18:47:55.475147] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.161 [2024-10-08 18:47:55.475166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.161 [2024-10-08 18:47:55.483145] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.161 [2024-10-08 18:47:55.483164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.161 [2024-10-08 18:47:55.491149] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.161 [2024-10-08 18:47:55.491170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.161 [2024-10-08 18:47:55.499153] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.161 [2024-10-08 18:47:55.499175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.161 [2024-10-08 18:47:55.507153] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.161 [2024-10-08 18:47:55.507177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.161 [2024-10-08 18:47:55.515152] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.161 [2024-10-08 18:47:55.515175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.161 Running I/O for 5 seconds... 00:37:27.161 [2024-10-08 18:47:55.529477] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.161 [2024-10-08 18:47:55.529505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.161 [2024-10-08 18:47:55.545119] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.161 [2024-10-08 18:47:55.545144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.161 [2024-10-08 18:47:55.554723] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.161 [2024-10-08 18:47:55.554749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.161 [2024-10-08 18:47:55.566596] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.161 [2024-10-08 18:47:55.566621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.161 [2024-10-08 18:47:55.577264] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.161 [2024-10-08 18:47:55.577288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.161 [2024-10-08 18:47:55.592348] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.161 [2024-10-08 18:47:55.592372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.161 [2024-10-08 18:47:55.601701] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.161 [2024-10-08 18:47:55.601727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.161 [2024-10-08 18:47:55.613981] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.161 
[2024-10-08 18:47:55.614006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.161 [2024-10-08 18:47:55.628247] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.161 [2024-10-08 18:47:55.628272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.161 [2024-10-08 18:47:55.638379] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.161 [2024-10-08 18:47:55.638404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.161 [2024-10-08 18:47:55.650163] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.161 [2024-10-08 18:47:55.650187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.161 [2024-10-08 18:47:55.661007] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.162 [2024-10-08 18:47:55.661032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.162 [2024-10-08 18:47:55.676543] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.162 [2024-10-08 18:47:55.676575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.162 [2024-10-08 18:47:55.686134] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.162 [2024-10-08 18:47:55.686158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.420 [2024-10-08 18:47:55.699996] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.420 [2024-10-08 18:47:55.700021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.420 [2024-10-08 18:47:55.709546] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.420 [2024-10-08 18:47:55.709571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.420 [2024-10-08 18:47:55.720950] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.420 [2024-10-08 18:47:55.720975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.420 [2024-10-08 18:47:55.731927] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.420 [2024-10-08 18:47:55.731967] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.420 [2024-10-08 18:47:55.743242] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.420 [2024-10-08 18:47:55.743272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.420 [2024-10-08 18:47:55.753824] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.420 [2024-10-08 18:47:55.753849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.420 [2024-10-08 18:47:55.767866] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.420 [2024-10-08 18:47:55.767894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.420 [2024-10-08 18:47:55.777205] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.420 [2024-10-08 18:47:55.777230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.420 [2024-10-08 18:47:55.788973] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.420 [2024-10-08 18:47:55.788997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.420 [2024-10-08 18:47:55.804916] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.420 [2024-10-08 18:47:55.804957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.420 [2024-10-08 18:47:55.814695] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.420 [2024-10-08 18:47:55.814721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.420 [2024-10-08 18:47:55.826187] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.420 [2024-10-08 18:47:55.826211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.420 [2024-10-08 18:47:55.838680] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.420 [2024-10-08 18:47:55.838722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.420 [2024-10-08 18:47:55.848240] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.420 [2024-10-08 18:47:55.848265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.420 [2024-10-08 18:47:55.860488] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.420 [2024-10-08 18:47:55.860513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.420 [2024-10-08 18:47:55.871732] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.420 [2024-10-08 18:47:55.871758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.420 [2024-10-08 18:47:55.882835] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.420 [2024-10-08 18:47:55.882861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.420 [2024-10-08 18:47:55.893837] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.420 [2024-10-08 18:47:55.893864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.420 [2024-10-08 18:47:55.908963] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.420 [2024-10-08 18:47:55.908989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.420 [2024-10-08 18:47:55.918625] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.420 [2024-10-08 18:47:55.918673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.420 [2024-10-08 18:47:55.930293] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.420 [2024-10-08 18:47:55.930317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.420 [2024-10-08 18:47:55.943983] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.420 [2024-10-08 18:47:55.944012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.420 [2024-10-08 18:47:55.953299] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.420 [2024-10-08 18:47:55.953323] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.679 [2024-10-08 18:47:55.965048] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.679 [2024-10-08 18:47:55.965074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.679 [2024-10-08 18:47:55.979901] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.679 [2024-10-08 18:47:55.979952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.679 [2024-10-08 18:47:55.994166] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.679 [2024-10-08 18:47:55.994234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.679 [2024-10-08 18:47:56.015874] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.679 [2024-10-08 18:47:56.015900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.679 [2024-10-08 18:47:56.035800] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.679 [2024-10-08 18:47:56.035826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.679 [2024-10-08 18:47:56.054853] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.679 [2024-10-08 18:47:56.054879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.679 [2024-10-08 18:47:56.074825] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.679 [2024-10-08 18:47:56.074851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.679 [2024-10-08 18:47:56.093231] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.679 [2024-10-08 18:47:56.093256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.679 [2024-10-08 18:47:56.110899] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.679 [2024-10-08 18:47:56.110927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.679 [2024-10-08 18:47:56.127136] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.679 [2024-10-08 18:47:56.127213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.679 [2024-10-08 18:47:56.146939] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.679 [2024-10-08 18:47:56.146982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.679 [2024-10-08 18:47:56.164786] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.679 [2024-10-08 18:47:56.164813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.679 [2024-10-08 18:47:56.185741] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.679 [2024-10-08 18:47:56.185766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.679 [2024-10-08 18:47:56.205097] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.679 [2024-10-08 18:47:56.205164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.937 [2024-10-08 18:47:56.223808] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.937 [2024-10-08 18:47:56.223834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.937 [2024-10-08 18:47:56.243354] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.937 [2024-10-08 18:47:56.243421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.937 [2024-10-08 18:47:56.261859] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.937 [2024-10-08 18:47:56.261884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.937 [2024-10-08 18:47:56.280114] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.937 [2024-10-08 18:47:56.280181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.937 [2024-10-08 18:47:56.298857] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.937 [2024-10-08 18:47:56.298882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.937 [2024-10-08 18:47:56.319541] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.937 [2024-10-08 18:47:56.319609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.937 [2024-10-08 18:47:56.338766] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.937 [2024-10-08 18:47:56.338792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.937 [2024-10-08 18:47:56.358834] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.937 [2024-10-08 18:47:56.358860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.937 [2024-10-08 18:47:56.377937] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.937 [2024-10-08 18:47:56.377975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.937 [2024-10-08 18:47:56.397593] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.937 [2024-10-08 18:47:56.397678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.937 [2024-10-08 18:47:56.417919] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.937 [2024-10-08 18:47:56.417959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.937 [2024-10-08 18:47:56.437819] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.937 [2024-10-08 18:47:56.437845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:27.937 [2024-10-08 18:47:56.457622] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:27.937 [2024-10-08 18:47:56.457709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:28.196 [2024-10-08 18:47:56.476524] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:28.196 [2024-10-08 18:47:56.476594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:28.196 [2024-10-08 18:47:56.501149] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:28.196 [2024-10-08 18:47:56.501218] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:28.196 [2024-10-08 18:47:56.518851] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:28.196 [2024-10-08 18:47:56.518876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:28.196 8952.00 IOPS, 69.94 MiB/s [2024-10-08T16:47:56.733Z] [2024-10-08 18:47:56.539316] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:28.196 [2024-10-08 18:47:56.539382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:28.196 [2024-10-08 18:47:56.557994] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:28.196 [2024-10-08 18:47:56.558059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:28.196 [2024-10-08 18:47:56.576753] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:28.196 [2024-10-08 18:47:56.576778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:28.196 [2024-10-08 18:47:56.597465] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:28.196 [2024-10-08 18:47:56.597532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:28.196 [2024-10-08 18:47:56.616583] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:28.196 [2024-10-08 18:47:56.616666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:28.196 [2024-10-08 18:47:56.635064] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:28.196 [2024-10-08 18:47:56.635135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:28.196 [2024-10-08 18:47:56.655870] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:28.196 [2024-10-08 18:47:56.655896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:28.196 [2024-10-08 18:47:56.675236] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:28.196 [2024-10-08 18:47:56.675304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:28.196 [2024-10-08 18:47:56.693889] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:28.196 [2024-10-08 18:47:56.693914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:28.196 [2024-10-08 18:47:56.713042] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:28.196 [2024-10-08 18:47:56.713108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:28.196 [2024-10-08 18:47:56.732054] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:28.196 [2024-10-08 18:47:56.732079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:28.454 [2024-10-08 18:47:56.748885] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:28.454 [2024-10-08 18:47:56.748915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:28.454 [2024-10-08 18:47:56.772826] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:28.454 [2024-10-08 18:47:56.772856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:28.454 [2024-10-08 
18:47:56.796785] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:28.454 [2024-10-08 18:47:56.796816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:28.454 [2024-10-08 18:47:56.817787] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:28.454 [2024-10-08 18:47:56.817826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:28.454 [2024-10-08 18:47:56.839950] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:28.454 [2024-10-08 18:47:56.840018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:28.454 [2024-10-08 18:47:56.862160] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:28.454 [2024-10-08 18:47:56.862227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:28.454 [2024-10-08 18:47:56.884613] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:28.454 [2024-10-08 18:47:56.884705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:28.454 [2024-10-08 18:47:56.906433] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:28.454 [2024-10-08 18:47:56.906499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:28.454 [2024-10-08 18:47:56.928170] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:28.454 [2024-10-08 18:47:56.928236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:28.454 [2024-10-08 18:47:56.950151] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:28.454 [2024-10-08 18:47:56.950218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:28.454 [2024-10-08 18:47:56.972526] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:28.454 [2024-10-08 18:47:56.972592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:28.713 [2024-10-08 18:47:56.994822] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:28.713 [2024-10-08 18:47:56.994851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:28.713 [2024-10-08 18:47:57.016829] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:28.713 [2024-10-08 18:47:57.016859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:28.713 [2024-10-08 18:47:57.038044] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:28.713 [2024-10-08 18:47:57.038111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:28.713 [2024-10-08 18:47:57.059774] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:28.713 [2024-10-08 18:47:57.059805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:28.713 [2024-10-08 18:47:57.081600] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:28.713 [2024-10-08 18:47:57.081684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:28.713 [2024-10-08 18:47:57.104867] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:28.713 [2024-10-08 18:47:57.104897] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:28.713 [2024-10-08 18:47:57.126155] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:28.713 [2024-10-08 18:47:57.126221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:28.713 [2024-10-08 18:47:57.147894] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:28.713 [2024-10-08 18:47:57.147972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:28.713 [2024-10-08 18:47:57.170332] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:28.713 [2024-10-08 18:47:57.170400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:28.713 [2024-10-08 18:47:57.190977] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:28.713 [2024-10-08 18:47:57.191044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:28.713 [2024-10-08 18:47:57.212741] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:28.713 [2024-10-08 18:47:57.212771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:28.713 [2024-10-08 18:47:57.233676] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:28.713 [2024-10-08 18:47:57.233735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:28.971 [2024-10-08 18:47:57.252948] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:28.971 [2024-10-08 18:47:57.252979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:28.971 [2024-10-08 18:47:57.274093] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:28.971 [2024-10-08 18:47:57.274161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:28.971 [2024-10-08 18:47:57.294437] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:28.971 [2024-10-08 18:47:57.294467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:28.971 [2024-10-08 18:47:57.315272] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:28.971 [2024-10-08 18:47:57.315302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:28.971 [2024-10-08 18:47:57.333677] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:28.971 [2024-10-08 18:47:57.333720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:28.971 [2024-10-08 18:47:57.353364] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:28.971 [2024-10-08 18:47:57.353432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:28.971 [2024-10-08 18:47:57.373329] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:28.971 [2024-10-08 18:47:57.373399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:28.971 [2024-10-08 18:47:57.394143] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:28.971 [2024-10-08 18:47:57.394212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:28.971 [2024-10-08 18:47:57.416489] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:28.971 [2024-10-08 18:47:57.416562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:28.971 [2024-10-08 18:47:57.438471] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:28.972 [2024-10-08 18:47:57.438538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:28.972 [2024-10-08 18:47:57.460592] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:28.972 [2024-10-08 18:47:57.460676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:28.972 [2024-10-08 18:47:57.481845] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:28.972 [2024-10-08 18:47:57.481875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:28.972 [2024-10-08 18:47:57.503774] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:28.972 [2024-10-08 18:47:57.503804] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:29.230 7531.50 IOPS, 58.84 MiB/s [2024-10-08T16:47:57.767Z] [2024-10-08 18:47:57.525882] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:29.230 [2024-10-08 18:47:57.525912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:29.230 [2024-10-08 18:47:57.548275] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:29.230 [2024-10-08 18:47:57.548341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:29.230 [2024-10-08 18:47:57.569031] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:29.230 [2024-10-08 18:47:57.569097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:29.230 [2024-10-08 18:47:57.591056] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:29.230 [2024-10-08 18:47:57.591128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:29.230 [2024-10-08 18:47:57.613763] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:29.230 [2024-10-08 18:47:57.613794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:29.230 [2024-10-08 18:47:57.636726] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:29.230 [2024-10-08 18:47:57.636756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:29.230 [2024-10-08 18:47:57.658518] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:29.230 [2024-10-08 18:47:57.658586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:29.230 [2024-10-08 18:47:57.680751] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:29.230 [2024-10-08 18:47:57.680782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:29.230 [2024-10-08 18:47:57.702421] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:29.230 [2024-10-08 18:47:57.702490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:29.230 [2024-10-08 18:47:57.724159] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:37:29.230 [2024-10-08 18:47:57.724227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:29.230 [2024-10-08 18:47:57.746494] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:29.230 [2024-10-08 18:47:57.746560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:29.488 [2024-10-08 18:47:57.769524] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:29.488 [2024-10-08 18:47:57.769593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:29.488 [2024-10-08 18:47:57.791041] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:29.488 [2024-10-08 18:47:57.791113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:29.488 [2024-10-08 18:47:57.809807] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:29.488 [2024-10-08 18:47:57.809837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:29.488 [2024-10-08 18:47:57.831224] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:29.488 [2024-10-08 18:47:57.831291] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:29.488 [2024-10-08 18:47:57.852777] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:29.488 [2024-10-08 18:47:57.852807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:29.488 [2024-10-08 18:47:57.874258] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:29.488 [2024-10-08 18:47:57.874326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:29.488 [2024-10-08 18:47:57.896909] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:29.488 [2024-10-08 18:47:57.896953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:29.488 [2024-10-08 18:47:57.918741] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:29.488 [2024-10-08 18:47:57.918771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:29.488 [2024-10-08 18:47:57.941397] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:29.488 [2024-10-08 18:47:57.941465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:29.488 [2024-10-08 18:47:57.965819] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:29.488 [2024-10-08 18:47:57.965849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:29.488 [2024-10-08 18:47:57.987711] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:29.488 [2024-10-08 18:47:57.987741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:29.488 [2024-10-08 18:47:58.009927] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:29.488 [2024-10-08 18:47:58.009956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:29.746 [2024-10-08 18:47:58.032841] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:29.746 [2024-10-08 18:47:58.032881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:29.746 [2024-10-08 18:47:58.056845] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:29.746 [2024-10-08 18:47:58.056875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:29.746 [2024-10-08 18:47:58.079783] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:29.746 [2024-10-08 18:47:58.079813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:29.746 [2024-10-08 18:47:58.101831] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:29.746 [2024-10-08 18:47:58.101861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:29.746 [2024-10-08 18:47:58.125584] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:29.746 [2024-10-08 18:47:58.125671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:29.746 [2024-10-08 18:47:58.147513] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:29.746 [2024-10-08 18:47:58.147587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:29.746 [2024-10-08 18:47:58.169826] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:29.746 [2024-10-08 18:47:58.169856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:29.746 [2024-10-08 18:47:58.191331] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:29.746 [2024-10-08 18:47:58.191398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:29.746 [2024-10-08 18:47:58.213759] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:29.746 [2024-10-08 18:47:58.213790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:29.746 [2024-10-08 18:47:58.234851] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:29.746 [2024-10-08 18:47:58.234882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:29.746 [2024-10-08 18:47:58.256399] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:29.746 [2024-10-08 18:47:58.256467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:29.746 [2024-10-08 18:47:58.278373] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:29.746 [2024-10-08 18:47:58.278441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:30.007 [2024-10-08 18:47:58.299800] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:30.007 [2024-10-08 18:47:58.299830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:30.007 [2024-10-08 18:47:58.321841] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:30.007 [2024-10-08 18:47:58.321871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:30.007 [2024-10-08 18:47:58.343763] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:30.007 [2024-10-08 18:47:58.343793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:30.007 [2024-10-08 18:47:58.365917] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:30.007 [2024-10-08 18:47:58.365947] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:30.007 [2024-10-08 18:47:58.388099] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:30.007 [2024-10-08 18:47:58.388167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:30.007 [2024-10-08 18:47:58.409381] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:30.007 [2024-10-08 18:47:58.409448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:30.007 [2024-10-08 18:47:58.430635] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:30.007 [2024-10-08 18:47:58.430723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:30.007 [2024-10-08 18:47:58.454739] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:30.007 [2024-10-08 18:47:58.454771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:30.007 [2024-10-08 18:47:58.471712] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:30.007 [2024-10-08 18:47:58.471742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:30.007 [2024-10-08 18:47:58.490156] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:30.007 [2024-10-08 18:47:58.490223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:30.007 [2024-10-08 18:47:58.510005] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:30.007 [2024-10-08 18:47:58.510035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:30.007 6968.00 IOPS, 54.44 MiB/s [2024-10-08T16:47:58.544Z] [2024-10-08 18:47:58.528916] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:30.007 [2024-10-08 18:47:58.528947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:30.265 [2024-10-08 18:47:58.547007] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:30.265 [2024-10-08 18:47:58.547079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:30.265 [2024-10-08 18:47:58.567405] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:30.265 [2024-10-08 18:47:58.567472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:30.265 [2024-10-08 18:47:58.588058] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:30.265 [2024-10-08 18:47:58.588125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:30.265 [2024-10-08 18:47:58.608303] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:30.265 [2024-10-08 18:47:58.608369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:30.265 [2024-10-08 18:47:58.629778] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:30.265 [2024-10-08 18:47:58.629809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:30.265 [2024-10-08 18:47:58.649831] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:30.265 [2024-10-08 18:47:58.649861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:30.265 [2024-10-08 
18:47:58.671376] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:30.265 [2024-10-08 18:47:58.671444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:30.265 [2024-10-08 18:47:58.690675] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:30.265 [2024-10-08 18:47:58.690723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:30.265 [2024-10-08 18:47:58.712225] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:30.265 [2024-10-08 18:47:58.712292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:30.265 [2024-10-08 18:47:58.732143] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:30.265 [2024-10-08 18:47:58.732209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:30.265 [2024-10-08 18:47:58.752777] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:30.265 [2024-10-08 18:47:58.752806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:30.265 [2024-10-08 18:47:58.773757] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:30.265 [2024-10-08 18:47:58.773787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:30.265 [2024-10-08 18:47:58.794144] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:30.265 [2024-10-08 18:47:58.794211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:30.524 [2024-10-08 18:47:58.814304] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:30.524 [2024-10-08 18:47:58.814372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:30.524 [2024-10-08 18:47:58.833763] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:30.524 [2024-10-08 18:47:58.833803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:30.524 [2024-10-08 18:47:58.854339] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:30.524 [2024-10-08 18:47:58.854406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:30.524 [2024-10-08 18:47:58.874943] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:30.524 [2024-10-08 18:47:58.874998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:30.524 [2024-10-08 18:47:58.895019] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:30.524 [2024-10-08 18:47:58.895092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:30.524 [2024-10-08 18:47:58.913790] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:30.524 [2024-10-08 18:47:58.913820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:30.524 [2024-10-08 18:47:58.934748] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:30.524 [2024-10-08 18:47:58.934778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:30.524 [2024-10-08 18:47:58.953501] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:30.524 [2024-10-08 18:47:58.953567] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:30.524 [2024-10-08 18:47:58.974815] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:30.524 [2024-10-08 18:47:58.974845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:30.524 [2024-10-08 18:47:58.995292] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:30.524 [2024-10-08 18:47:58.995359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:30.524 [2024-10-08 18:47:59.016617] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:30.524 [2024-10-08 18:47:59.016703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:30.524 [2024-10-08 18:47:59.038332] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:30.524 [2024-10-08 18:47:59.038401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:30.524 [2024-10-08 18:47:59.058258] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:30.524 [2024-10-08 18:47:59.058326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:30.782 [2024-10-08 18:47:59.075852] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:30.782 [2024-10-08 18:47:59.075882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:30.782 [2024-10-08 18:47:59.098002] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:30.782 [2024-10-08 18:47:59.098069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:30.782 [2024-10-08 18:47:59.117852] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:30.782 [2024-10-08 18:47:59.117882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:30.782 [2024-10-08 18:47:59.136234] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:30.782 [2024-10-08 18:47:59.136301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:30.782 [2024-10-08 18:47:59.158669] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:30.782 [2024-10-08 18:47:59.158749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:30.782 [2024-10-08 18:47:59.178763] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:30.782 [2024-10-08 18:47:59.178793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:30.782 [2024-10-08 18:47:59.198760] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:30.782 [2024-10-08 18:47:59.198790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:30.782 [2024-10-08 18:47:59.218836] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:30.782 [2024-10-08 18:47:59.218875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:30.782 [2024-10-08 18:47:59.239025] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:30.782 [2024-10-08 18:47:59.239098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:30.782 [2024-10-08 18:47:59.261387] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:30.782 [2024-10-08 18:47:59.261454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:30.782 [2024-10-08 18:47:59.282962] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:30.782 [2024-10-08 18:47:59.283029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:30.782 [2024-10-08 18:47:59.304205] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:30.782 [2024-10-08 18:47:59.304273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:31.041 [2024-10-08 18:47:59.324717] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:31.041 [2024-10-08 18:47:59.324748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:31.041 [2024-10-08 18:47:59.345780] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:31.041 [2024-10-08 18:47:59.345810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:31.041 [2024-10-08 18:47:59.366756] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:31.041 [2024-10-08 18:47:59.366787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:31.041 [2024-10-08 18:47:59.387814] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:31.041 [2024-10-08 18:47:59.387881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:31.041 [2024-10-08 18:47:59.408623] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:31.041 [2024-10-08 18:47:59.408714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:31.041 [2024-10-08 18:47:59.431132] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:31.041 [2024-10-08 18:47:59.431199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:31.041 [2024-10-08 18:47:59.451056] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:31.041 [2024-10-08 18:47:59.451129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:31.041 [2024-10-08 18:47:59.471169] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:31.041 [2024-10-08 18:47:59.471236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:31.041 [2024-10-08 18:47:59.492475] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:31.041 [2024-10-08 18:47:59.492542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:31.041 [2024-10-08 18:47:59.514335] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:31.041 [2024-10-08 18:47:59.514401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:31.041 6775.25 IOPS, 52.93 MiB/s [2024-10-08T16:47:59.578Z] [2024-10-08 18:47:59.535212] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:31.041 [2024-10-08 18:47:59.535279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:31.041 [2024-10-08 18:47:59.555750] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:37:31.041 [2024-10-08 18:47:59.555780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:31.041 [2024-10-08 18:47:59.575879] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:31.041 [2024-10-08 18:47:59.575909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:31.299 [2024-10-08 18:47:59.595817] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:31.299 [2024-10-08 18:47:59.595847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:31.299 [2024-10-08 18:47:59.616633] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:31.299 [2024-10-08 18:47:59.616722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:31.299 [2024-10-08 18:47:59.638932] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:31.299 [2024-10-08 18:47:59.638995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:31.299 [2024-10-08 18:47:59.659038] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:31.299 [2024-10-08 18:47:59.659114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:31.299 [2024-10-08 18:47:59.678047] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:31.299 [2024-10-08 18:47:59.678114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:31.299 [2024-10-08 18:47:59.698416] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:31.299 [2024-10-08 18:47:59.698482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:31.299 [2024-10-08 18:47:59.719500] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:31.299 [2024-10-08 18:47:59.719577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:31.299 [2024-10-08 18:47:59.739891] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:31.299 [2024-10-08 18:47:59.739928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:31.299 [2024-10-08 18:47:59.760474] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:31.299 [2024-10-08 18:47:59.760541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:31.299 [2024-10-08 18:47:59.780412] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:31.300 [2024-10-08 18:47:59.780479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:31.300 [2024-10-08 18:47:59.800444] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:31.300 [2024-10-08 18:47:59.800513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:31.300 [2024-10-08 18:47:59.820861] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:31.300 [2024-10-08 18:47:59.820891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:31.559 [2024-10-08 18:47:59.841850] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:31.559 [2024-10-08 18:47:59.841881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:31.559 [2024-10-08 18:47:59.862512] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:31.559 [2024-10-08 18:47:59.862581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:31.559 [2024-10-08 18:47:59.886368] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:31.559 [2024-10-08 18:47:59.886434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:31.559 [2024-10-08 18:47:59.908927] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:31.559 [2024-10-08 18:47:59.908993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:31.559 [2024-10-08 18:47:59.929792] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:31.559 [2024-10-08 18:47:59.929822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:31.559 [2024-10-08 18:47:59.950901] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:31.559 [2024-10-08 18:47:59.950982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:31.559 [2024-10-08 18:47:59.971470] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:31.559 [2024-10-08 18:47:59.971537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:31.559 [2024-10-08 18:47:59.991553] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:31.559 [2024-10-08 18:47:59.991619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:31.559 [2024-10-08 18:48:00.008566] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:31.559 [2024-10-08 18:48:00.008604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:31.559 [2024-10-08 18:48:00.023204] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:31.559 [2024-10-08 18:48:00.023250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:31.559 [2024-10-08 18:48:00.036271] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:31.559 [2024-10-08 18:48:00.036309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:31.559 [2024-10-08 18:48:00.053225] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:31.559 [2024-10-08 18:48:00.053261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:31.559 [2024-10-08 18:48:00.067247] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:31.559 [2024-10-08 18:48:00.067313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:31.559 [2024-10-08 18:48:00.085784] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:31.559 [2024-10-08 18:48:00.085814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:31.818 [2024-10-08 18:48:00.109394] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:31.818 [2024-10-08 18:48:00.109462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:31.818 [2024-10-08 18:48:00.130995] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:31.818 [2024-10-08 18:48:00.131065] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:31.818 [2024-10-08 18:48:00.152038] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:31.818 [2024-10-08 18:48:00.152106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:31.818 [2024-10-08 18:48:00.172410] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:31.818 [2024-10-08 18:48:00.172441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:31.818 [2024-10-08 18:48:00.189816] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:31.818 [2024-10-08 18:48:00.189847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:31.818 [2024-10-08 18:48:00.211764] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:31.818 [2024-10-08 18:48:00.211794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:31.818 [2024-10-08 18:48:00.233138] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:31.818 [2024-10-08 18:48:00.233207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:31.818 [2024-10-08 18:48:00.253966] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:31.818 [2024-10-08 18:48:00.254047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:31.818 [2024-10-08 18:48:00.275792] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:31.818 [2024-10-08 18:48:00.275823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:31.818 [2024-10-08 18:48:00.297177] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:31.818 [2024-10-08 18:48:00.297245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:31.818 [2024-10-08 18:48:00.318862] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:31.818 [2024-10-08 18:48:00.318893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:31.818 [2024-10-08 18:48:00.340489] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:31.818 [2024-10-08 18:48:00.340556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:32.078 [2024-10-08 18:48:00.362123] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:32.078 [2024-10-08 18:48:00.362190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:32.078 [2024-10-08 18:48:00.383903] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:32.078 [2024-10-08 18:48:00.383933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:32.078 [2024-10-08 18:48:00.405329] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:32.078 [2024-10-08 18:48:00.405397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:32.078 [2024-10-08 18:48:00.426806] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:32.078 [2024-10-08 18:48:00.426836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:32.078 [2024-10-08 18:48:00.447305] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:32.078 [2024-10-08 18:48:00.447372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:32.078 [2024-10-08 18:48:00.468838] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:32.078 [2024-10-08 18:48:00.468868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:32.078 [2024-10-08 18:48:00.490300] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:32.078 [2024-10-08 18:48:00.490368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:32.078 [2024-10-08 18:48:00.510905] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:32.078 [2024-10-08 18:48:00.510935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:32.078 [2024-10-08 18:48:00.530762] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:32.078 [2024-10-08 18:48:00.530793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:32.078 6683.40 IOPS, 52.21 MiB/s [2024-10-08T16:48:00.615Z] [2024-10-08 18:48:00.548073] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:32.078 [2024-10-08 18:48:00.548139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:32.078 00:37:32.078 Latency(us) 00:37:32.078 [2024-10-08T16:48:00.615Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:32.078 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:37:32.078 Nvme1n1 : 5.02 6682.20 52.20 0.00 0.00 19116.24 2912.71 35535.08 00:37:32.078 [2024-10-08T16:48:00.615Z] =================================================================================================================== 00:37:32.078 [2024-10-08T16:48:00.615Z] Total : 6682.20 52.20 0.00 0.00 19116.24 2912.71 35535.08 00:37:32.078 [2024-10-08 18:48:00.559494] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:32.078 [2024-10-08 18:48:00.559559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:32.078 [2024-10-08 18:48:00.571299] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:32.078 [2024-10-08 18:48:00.571361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:32.078 [2024-10-08 18:48:00.583299] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:32.078 [2024-10-08 18:48:00.583361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:32.078 [2024-10-08 18:48:00.595231] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:32.078 [2024-10-08 18:48:00.595285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:32.078 [2024-10-08 18:48:00.603206] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:32.078 [2024-10-08 18:48:00.603253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:32.078 [2024-10-08 18:48:00.611202] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:32.078 [2024-10-08 18:48:00.611244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:32.337 [2024-10-08 18:48:00.619211] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:32.337 [2024-10-08 18:48:00.619282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:32.337 [2024-10-08 18:48:00.627309] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:32.337 [2024-10-08 18:48:00.627382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:32.337 [2024-10-08 18:48:00.635194] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:32.337 [2024-10-08 18:48:00.635231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:32.337 [2024-10-08 18:48:00.647228] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:32.337 [2024-10-08 18:48:00.647281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:32.337 [2024-10-08 18:48:00.659355] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:32.337 [2024-10-08 18:48:00.659440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:32.337 [2024-10-08 18:48:00.667205] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:32.337 [2024-10-08 18:48:00.667249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:32.337 [2024-10-08 18:48:00.679240] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:32.337 [2024-10-08 18:48:00.679296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:32.337 [2024-10-08 18:48:00.687216] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:32.337 [2024-10-08 18:48:00.687264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:32.337 [2024-10-08 18:48:00.695210] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:32.337 [2024-10-08 18:48:00.695258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:32.337 [2024-10-08 18:48:00.703209] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:32.337 [2024-10-08 18:48:00.703258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:32.337 [2024-10-08 18:48:00.711207] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:32.337 [2024-10-08 18:48:00.711255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:32.337 [2024-10-08 18:48:00.719204] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:32.337 [2024-10-08 18:48:00.719247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:32.337 [2024-10-08 18:48:00.727317] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:32.337 [2024-10-08 18:48:00.727392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:32.337 [2024-10-08 18:48:00.739277] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:32.337 [2024-10-08 18:48:00.739332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:32.337 [2024-10-08 18:48:00.751277] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:32.337 [2024-10-08 18:48:00.751330] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:32.337 [2024-10-08 18:48:00.763274] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:32.337 [2024-10-08 18:48:00.763326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:32.337 [2024-10-08 18:48:00.775276] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:32.337 [2024-10-08 18:48:00.775330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:32.337 [2024-10-08 18:48:00.787284] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:32.337 [2024-10-08 18:48:00.787342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:32.337 [2024-10-08 18:48:00.795207] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:32.337 [2024-10-08 18:48:00.795252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:32.337 [2024-10-08 18:48:00.803204] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:32.337 [2024-10-08 18:48:00.803259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:32.337 [2024-10-08 18:48:00.815300] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:32.337 [2024-10-08 18:48:00.815365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:32.337 [2024-10-08 18:48:00.823279] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:32.337 [2024-10-08 18:48:00.823333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:32.337 [2024-10-08 18:48:00.831283] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:32.337 [2024-10-08 18:48:00.831338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:32.337 [2024-10-08 18:48:00.839280] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:32.337 [2024-10-08 18:48:00.839332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:32.337 [2024-10-08 18:48:00.847162] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:32.337 [2024-10-08 18:48:00.847186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:32.337 [2024-10-08 18:48:00.855286] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:32.337 [2024-10-08 18:48:00.855338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:32.337 [2024-10-08 18:48:00.863303] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:32.337 [2024-10-08 18:48:00.863366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:32.337 [2024-10-08 18:48:00.871242] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:32.337 [2024-10-08 18:48:00.871288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:32.595 [2024-10-08 18:48:00.879217] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:32.595 [2024-10-08 18:48:00.879265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:32.595 [2024-10-08 18:48:00.887288] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:32.595 [2024-10-08 18:48:00.887342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:32.595 [2024-10-08 18:48:00.895279] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:32.595 [2024-10-08 18:48:00.895333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:32.595 [2024-10-08 18:48:00.903281] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:32.595 [2024-10-08 18:48:00.903334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:32.595 [2024-10-08 18:48:00.911279] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:32.595 [2024-10-08 18:48:00.911331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:32.595 [2024-10-08 18:48:00.919276] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:32.595 [2024-10-08 18:48:00.919332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:32.595 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1384110) - No such process 00:37:32.595 18:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1384110 00:37:32.595 18:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:32.595 18:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:32.595 18:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:37:32.595 18:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:32.595 18:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:37:32.595 18:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:32.595 18:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:37:32.595 delay0 00:37:32.595 18:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:32.595 18:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:37:32.595 18:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:32.595 18:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:37:32.595 18:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:32.595 18:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:37:32.595 [2024-10-08 18:48:01.041260] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:37:40.707 
Initializing NVMe Controllers 00:37:40.707 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:40.707 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:37:40.707 Initialization complete. Launching workers. 00:37:40.707 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 225, failed: 13562 00:37:40.707 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 13637, failed to submit 150 00:37:40.707 success 13574, unsuccessful 63, failed 0 00:37:40.707 18:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:37:40.707 18:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:37:40.707 18:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup 00:37:40.707 18:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:37:40.707 18:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:40.707 18:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:37:40.707 18:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:40.707 18:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:40.707 rmmod nvme_tcp 00:37:40.707 rmmod nvme_fabrics 00:37:40.707 rmmod nvme_keyring 00:37:40.707 18:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:40.707 18:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:37:40.707 18:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:37:40.707 18:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 1382808 ']' 00:37:40.707 18:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 1382808 00:37:40.707 18:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 1382808 ']' 00:37:40.707 18:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 1382808 00:37:40.707 18:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:37:40.707 18:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:40.707 18:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1382808 00:37:40.707 18:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:40.707 18:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:40.707 18:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1382808' 00:37:40.707 killing process with pid 1382808 00:37:40.707 18:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 1382808 00:37:40.707 18:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 
1382808 00:37:40.707 18:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:37:40.707 18:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:37:40.707 18:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:37:40.707 18:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:37:40.707 18:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save 00:37:40.707 18:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:37:40.707 18:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore 00:37:40.707 18:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:40.707 18:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:40.707 18:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:40.707 18:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:40.707 18:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:42.612 18:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:42.612 00:37:42.612 real 0m30.954s 00:37:42.612 user 0m41.770s 00:37:42.612 sys 0m11.842s 00:37:42.612 18:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:42.612 18:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:37:42.612 ************************************ 00:37:42.612 END TEST nvmf_zcopy 00:37:42.612 ************************************ 00:37:42.612 18:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:37:42.612 18:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:37:42.612 18:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:42.612 18:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:42.612 ************************************ 00:37:42.612 START TEST nvmf_nmic 00:37:42.612 ************************************ 00:37:42.612 18:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:37:42.612 * Looking for test storage... 
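The wall of "Requested NSID 1 already in use" / "Unable to add namespace" errors in the nvmf_zcopy output above is expected: zcopy.sh keeps re-issuing nvmf_subsystem_add_ns for NSID 1 on nqn.2016-06.io.spdk:cnode1 while the bdevperf job is still running, and every attempt is rejected because that namespace ID is already claimed. A minimal sketch of the same collision, assuming a running nvmf_tgt and SPDK's scripts/rpc.py (the harness rpc_cmd wrapper resolves to roughly these calls; this is a generic reproduction, not the exact sequence zcopy.sh runs):

    ./scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # the second add fails with "Requested NSID 1 already in use", matching the errors logged above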
00:37:42.612 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:42.612 18:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:42.612 18:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version 00:37:42.612 18:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:42.612 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:42.612 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:42.612 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:42.612 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:42.612 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:37:42.612 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:37:42.612 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:37:42.612 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:37:42.612 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:37:42.612 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:37:42.612 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:37:42.612 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:42.612 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:37:42.612 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:37:42.612 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:42.612 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:42.612 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:37:42.612 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:37:42.612 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:42.612 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:37:42.612 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:37:42.612 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:37:42.612 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:37:42.612 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:42.612 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:37:42.612 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:37:42.612 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:42.612 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:42.612 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:37:42.612 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:42.612 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:42.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:42.612 --rc genhtml_branch_coverage=1 00:37:42.612 --rc genhtml_function_coverage=1 00:37:42.612 --rc genhtml_legend=1 00:37:42.612 --rc geninfo_all_blocks=1 00:37:42.612 --rc geninfo_unexecuted_blocks=1 00:37:42.612 00:37:42.612 ' 00:37:42.612 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:37:42.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:42.612 --rc genhtml_branch_coverage=1 00:37:42.612 --rc genhtml_function_coverage=1 00:37:42.612 --rc genhtml_legend=1 00:37:42.612 --rc geninfo_all_blocks=1 00:37:42.612 --rc geninfo_unexecuted_blocks=1 00:37:42.612 00:37:42.612 ' 00:37:42.612 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:37:42.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:42.612 --rc genhtml_branch_coverage=1 00:37:42.612 --rc genhtml_function_coverage=1 00:37:42.612 --rc genhtml_legend=1 00:37:42.612 --rc geninfo_all_blocks=1 00:37:42.612 --rc geninfo_unexecuted_blocks=1 00:37:42.612 00:37:42.612 ' 00:37:42.612 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:42.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:42.612 --rc genhtml_branch_coverage=1 00:37:42.612 --rc genhtml_function_coverage=1 00:37:42.612 --rc genhtml_legend=1 00:37:42.612 --rc geninfo_all_blocks=1 00:37:42.612 --rc geninfo_unexecuted_blocks=1 00:37:42.612 00:37:42.612 ' 00:37:42.612 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:42.612 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:37:42.612 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:42.612 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:42.612 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:42.612 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:42.612 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:42.612 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:42.612 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:42.612 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:42.612 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:42.612 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:42.612 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:37:42.612 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:37:42.612 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:42.612 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:42.612 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:42.612 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:42.612 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:42.612 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:37:42.612 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:42.612 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:42.612 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:42.613 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:42.613 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:42.613 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:42.613 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:37:42.613 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:42.613 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:37:42.613 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:42.613 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:42.613 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:42.613 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:42.613 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:42.613 18:48:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:42.613 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:42.613 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:42.613 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:42.613 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:42.613 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:42.613 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:42.613 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:37:42.613 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:37:42.613 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:42.613 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:37:42.613 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:37:42.613 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:37:42.613 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:42.613 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:42.613 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:42.613 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:37:42.613 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:37:42.613 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:37:42.613 18:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:37:45.145 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:45.145 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:37:45.145 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:45.145 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:45.145 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:45.145 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:45.145 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:45.145 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:37:45.145 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:45.145 18:48:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:37:45.145 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:37:45.145 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:37:45.145 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:37:45.145 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:37:45.145 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:37:45.145 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:45.145 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:45.145 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:45.145 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:45.145 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:45.145 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:45.145 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:45.145 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:45.145 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:45.145 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:45.145 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:45.145 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:45.145 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:45.145 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:45.145 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:45.145 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:45.145 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:45.145 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:45.145 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:45.145 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:37:45.145 Found 0000:84:00.0 (0x8086 - 0x159b) 00:37:45.145 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:45.145 18:48:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:45.145 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:45.145 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:45.145 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:45.145 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:45.145 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:37:45.145 Found 0000:84:00.1 (0x8086 - 0x159b) 00:37:45.145 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:45.145 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:45.145 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:45.145 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:45.145 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:45.145 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:45.146 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:45.146 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:45.146 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:45.146 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:45.146 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:45.146 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:45.146 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:45.146 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:45.146 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:45.146 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:37:45.146 Found net devices under 0000:84:00.0: cvl_0_0 00:37:45.146 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:45.146 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:45.146 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:45.146 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:45.146 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:45.146 
18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:45.146 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:45.146 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:45.146 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:37:45.146 Found net devices under 0000:84:00.1: cvl_0_1 00:37:45.146 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:45.146 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:37:45.146 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes 00:37:45.146 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:37:45.146 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:37:45.146 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:37:45.146 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:45.146 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:45.146 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:45.146 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:45.146 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:45.146 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:45.146 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:45.146 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:45.146 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:45.146 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:45.146 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:45.146 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:45.146 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:45.146 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:45.146 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:45.146 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:45.146 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
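The network bring-up that nvmf_tcp_init traces above and finishes just below amounts to moving the target-side e810 port (cvl_0_0) into its own namespace and leaving the initiator-side port (cvl_0_1) in the root namespace. A condensed sketch using the interface names and addresses from this job; the link-up, iptables and ping steps appear immediately below:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP listen port
    ping -c 1 10.0.0.2                                                   # reachability check toward the target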
00:37:45.405 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:45.405 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:45.405 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:45.405 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:45.405 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:45.405 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:45.405 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:45.405 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.297 ms 00:37:45.405 00:37:45.405 --- 10.0.0.2 ping statistics --- 00:37:45.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:45.405 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:37:45.405 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:45.405 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:45.405 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:37:45.405 00:37:45.405 --- 10.0.0.1 ping statistics --- 00:37:45.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:45.405 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:37:45.405 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:45.405 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@448 -- # return 0 00:37:45.405 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:37:45.405 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:45.405 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:37:45.405 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:37:45.405 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:45.405 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:37:45.405 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:37:45.405 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:37:45.405 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:37:45.405 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:45.405 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:37:45.405 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=1388245 00:37:45.405 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:37:45.405 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # waitforlisten 1388245 00:37:45.405 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 1388245 ']' 00:37:45.405 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:45.405 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:45.405 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:45.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:45.405 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:45.405 18:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:37:45.405 [2024-10-08 18:48:13.869156] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:45.405 [2024-10-08 18:48:13.870440] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:37:45.405 [2024-10-08 18:48:13.870512] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:45.664 [2024-10-08 18:48:13.976513] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:45.664 [2024-10-08 18:48:14.198368] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:45.664 [2024-10-08 18:48:14.198490] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:45.664 [2024-10-08 18:48:14.198528] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:45.664 [2024-10-08 18:48:14.198559] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:45.664 [2024-10-08 18:48:14.198587] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:45.922 [2024-10-08 18:48:14.201916] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:37:45.922 [2024-10-08 18:48:14.201993] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:37:45.922 [2024-10-08 18:48:14.202083] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:37:45.922 [2024-10-08 18:48:14.202088] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:37:45.922 [2024-10-08 18:48:14.395511] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:45.922 [2024-10-08 18:48:14.395915] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:45.922 [2024-10-08 18:48:14.396174] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
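The nvmf_tgt launch traced just above runs inside the cvl_0_0_ns_spdk namespace with a four-core mask and --interrupt-mode, which is why the reactor and poll-group notices around this point all report coming up in interrupt mode. Stripped of the workspace path and harness wrappers, the launch is essentially the following (a sketch; the shm id -i 0 is this job's choice and the pid 1388245 will differ on another run):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
    # nmic.sh then configures the target over RPC: a TCP transport (nvmf_create_transport -t tcp -o -u 8192),
    # a Malloc0 bdev, and subsystems nqn.2016-06.io.spdk:cnode1/cnode2, as the rpc_cmd calls below show.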
00:37:45.922 [2024-10-08 18:48:14.396870] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:45.922 [2024-10-08 18:48:14.397407] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:37:45.922 18:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:45.922 18:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:37:45.922 18:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:37:45.922 18:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:45.922 18:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:37:46.180 18:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:46.180 18:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:46.181 18:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:46.181 18:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:37:46.181 [2024-10-08 18:48:14.495125] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:46.181 18:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:46.181 18:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:46.181 18:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:46.181 18:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:37:46.181 Malloc0 00:37:46.181 18:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:46.181 18:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:37:46.181 18:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:46.181 18:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:37:46.181 18:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:46.181 18:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:46.181 18:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:46.181 18:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:37:46.181 18:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:46.181 18:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:46.181 
18:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:46.181 18:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:37:46.181 [2024-10-08 18:48:14.575234] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:46.181 18:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:46.181 18:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:37:46.181 test case1: single bdev can't be used in multiple subsystems 00:37:46.181 18:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:37:46.181 18:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:46.181 18:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:37:46.181 18:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:46.181 18:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:37:46.181 18:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:46.181 18:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:37:46.181 18:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:46.181 18:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:37:46.181 18:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:37:46.181 18:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:46.181 18:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:37:46.181 [2024-10-08 18:48:14.598954] bdev.c:8202:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:37:46.181 [2024-10-08 18:48:14.598989] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:37:46.181 [2024-10-08 18:48:14.599006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:46.181 request: 00:37:46.181 { 00:37:46.181 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:37:46.181 "namespace": { 00:37:46.181 "bdev_name": "Malloc0", 00:37:46.181 "no_auto_visible": false 00:37:46.181 }, 00:37:46.181 "method": "nvmf_subsystem_add_ns", 00:37:46.181 "req_id": 1 00:37:46.181 } 00:37:46.181 Got JSON-RPC error response 00:37:46.181 response: 00:37:46.181 { 00:37:46.181 "code": -32602, 00:37:46.181 "message": "Invalid parameters" 00:37:46.181 } 00:37:46.181 18:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:37:46.181 18:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:37:46.181 18:48:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:37:46.181 18:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:37:46.181 Adding namespace failed - expected result. 00:37:46.181 18:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:37:46.181 test case2: host connect to nvmf target in multiple paths 00:37:46.181 18:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:37:46.181 18:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:46.181 18:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:37:46.181 [2024-10-08 18:48:14.607048] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:37:46.181 18:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:46.181 18:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:37:46.439 18:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:37:46.699 18:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:37:46.699 18:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:37:46.699 18:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:37:46.699 18:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:37:46.699 18:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:37:48.607 18:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:37:48.607 18:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:37:48.607 18:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:37:48.607 18:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:37:48.607 18:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:37:48.607 18:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:37:48.607 18:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:37:48.607 [global] 00:37:48.607 thread=1 00:37:48.607 invalidate=1 
00:37:48.607 rw=write 00:37:48.607 time_based=1 00:37:48.607 runtime=1 00:37:48.607 ioengine=libaio 00:37:48.607 direct=1 00:37:48.607 bs=4096 00:37:48.607 iodepth=1 00:37:48.607 norandommap=0 00:37:48.607 numjobs=1 00:37:48.607 00:37:48.607 verify_dump=1 00:37:48.607 verify_backlog=512 00:37:48.607 verify_state_save=0 00:37:48.607 do_verify=1 00:37:48.607 verify=crc32c-intel 00:37:48.607 [job0] 00:37:48.607 filename=/dev/nvme0n1 00:37:48.607 Could not set queue depth (nvme0n1) 00:37:48.866 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:48.866 fio-3.35 00:37:48.866 Starting 1 thread 00:37:50.243 00:37:50.243 job0: (groupid=0, jobs=1): err= 0: pid=1388717: Tue Oct 8 18:48:18 2024 00:37:50.243 read: IOPS=21, BW=86.1KiB/s (88.2kB/s)(88.0KiB/1022msec) 00:37:50.243 slat (nsec): min=9110, max=20748, avg=14310.09, stdev=1840.49 00:37:50.243 clat (usec): min=40587, max=41177, avg=40968.18, stdev=100.14 00:37:50.243 lat (usec): min=40596, max=41192, avg=40982.49, stdev=101.23 00:37:50.243 clat percentiles (usec): 00:37:50.243 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:37:50.243 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:37:50.243 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:37:50.243 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:37:50.243 | 99.99th=[41157] 00:37:50.243 write: IOPS=500, BW=2004KiB/s (2052kB/s)(2048KiB/1022msec); 0 zone resets 00:37:50.243 slat (usec): min=10, max=28107, avg=65.94, stdev=1241.71 00:37:50.243 clat (usec): min=148, max=268, avg=163.78, stdev=14.84 00:37:50.243 lat (usec): min=158, max=28375, avg=229.72, stdev=1246.42 00:37:50.243 clat percentiles (usec): 00:37:50.243 | 1.00th=[ 149], 5.00th=[ 151], 10.00th=[ 153], 20.00th=[ 153], 00:37:50.243 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 157], 60.00th=[ 161], 00:37:50.243 | 70.00th=[ 169], 80.00th=[ 176], 90.00th=[ 182], 95.00th=[ 194], 00:37:50.243 | 99.00th=[ 217], 99.50th=[ 229], 99.90th=[ 269], 99.95th=[ 269], 00:37:50.243 | 99.99th=[ 269] 00:37:50.243 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:37:50.243 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:37:50.243 lat (usec) : 250=95.69%, 500=0.19% 00:37:50.243 lat (msec) : 50=4.12% 00:37:50.243 cpu : usr=0.29%, sys=0.69%, ctx=536, majf=0, minf=1 00:37:50.243 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:50.243 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.243 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.243 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:50.243 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:50.243 00:37:50.243 Run status group 0 (all jobs): 00:37:50.243 READ: bw=86.1KiB/s (88.2kB/s), 86.1KiB/s-86.1KiB/s (88.2kB/s-88.2kB/s), io=88.0KiB (90.1kB), run=1022-1022msec 00:37:50.243 WRITE: bw=2004KiB/s (2052kB/s), 2004KiB/s-2004KiB/s (2052kB/s-2052kB/s), io=2048KiB (2097kB), run=1022-1022msec 00:37:50.243 00:37:50.243 Disk stats (read/write): 00:37:50.244 nvme0n1: ios=45/512, merge=0/0, ticks=1763/81, in_queue=1844, util=98.60% 00:37:50.244 18:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:37:50.244 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:37:50.244 18:48:18 
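On the host side, test case2 connects to the same subsystem through both listeners and then runs the short write/verify job whose fio output appears above. Roughly, using the hostnqn/hostid generated for this run and paths relative to the SPDK checkout:

    # multipath: one controller per listener, same subsystem NQN
    for port in 4420 4421; do
        nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
                     --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 \
                     -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s "$port"
    done
    # waitforserial: poll until a block device carrying the subsystem serial shows up
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME
    # 4 KiB blocks, queue depth 1, 1 s time-based write job with crc32c verify
    scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
    # drop both paths again ("disconnected 2 controller(s)" above)
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1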
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:37:50.244 18:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:37:50.244 18:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:37:50.244 18:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:50.244 18:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:37:50.244 18:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:50.244 18:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:37:50.244 18:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:37:50.244 18:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:37:50.244 18:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:37:50.244 18:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:37:50.244 18:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:50.244 18:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:37:50.244 18:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:50.244 18:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:50.244 rmmod nvme_tcp 00:37:50.244 rmmod nvme_fabrics 00:37:50.244 rmmod nvme_keyring 00:37:50.244 18:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:50.244 18:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:37:50.244 18:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:37:50.244 18:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 1388245 ']' 00:37:50.244 18:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 1388245 00:37:50.244 18:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 1388245 ']' 00:37:50.244 18:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 1388245 00:37:50.244 18:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:37:50.244 18:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:50.244 18:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1388245 00:37:50.244 18:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:50.244 18:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:50.244 18:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@968 -- # 
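The nvmftestfini teardown traced here amounts to little more than unloading the host-side modules and stopping the target; a minimal sketch, with the pid being the nvmf_tgt instance started for this test:

    modprobe -v -r nvme-tcp       # the rmmod lines above show nvme_tcp, nvme_fabrics and nvme_keyring going away
    modprobe -v -r nvme-fabrics
    kill 1388245                  # killprocess: stop the nvmf_tgt started for nmic; the framework then waits for the pid to exit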
echo 'killing process with pid 1388245' 00:37:50.244 killing process with pid 1388245 00:37:50.244 18:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 1388245 00:37:50.244 18:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 1388245 00:37:50.814 18:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:37:50.814 18:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:37:50.814 18:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:37:50.814 18:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:37:50.814 18:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:37:50.814 18:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:37:50.814 18:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:37:50.814 18:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:50.814 18:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:50.814 18:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:50.814 18:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:50.814 18:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:52.722 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:52.722 00:37:52.722 real 0m10.325s 00:37:52.722 user 0m18.144s 00:37:52.722 sys 0m3.997s 00:37:52.722 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:52.722 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:37:52.722 ************************************ 00:37:52.722 END TEST nvmf_nmic 00:37:52.722 ************************************ 00:37:52.722 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:37:52.722 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:37:52.722 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:52.722 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:52.722 ************************************ 00:37:52.722 START TEST nvmf_fio_target 00:37:52.722 ************************************ 00:37:52.722 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:37:52.981 * Looking for test storage... 
00:37:52.981 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:52.981 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:52.981 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:37:52.981 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:52.981 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:52.981 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:52.981 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:52.981 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:52.981 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:37:52.981 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:37:52.981 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:37:52.981 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:37:52.981 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:37:52.981 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:37:52.981 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:37:52.981 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:52.981 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:37:52.981 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:37:52.981 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:52.981 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:52.981 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:37:52.981 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:37:52.981 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:52.981 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:37:52.981 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:37:52.981 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:37:52.981 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:37:52.981 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:52.981 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:37:52.981 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:37:52.981 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:52.981 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:52.981 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:37:52.981 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:52.981 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:52.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:52.981 --rc genhtml_branch_coverage=1 00:37:52.981 --rc genhtml_function_coverage=1 00:37:52.981 --rc genhtml_legend=1 00:37:52.981 --rc geninfo_all_blocks=1 00:37:52.981 --rc geninfo_unexecuted_blocks=1 00:37:52.981 00:37:52.981 ' 00:37:52.981 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:37:52.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:52.981 --rc genhtml_branch_coverage=1 00:37:52.981 --rc genhtml_function_coverage=1 00:37:52.981 --rc genhtml_legend=1 00:37:52.981 --rc geninfo_all_blocks=1 00:37:52.981 --rc geninfo_unexecuted_blocks=1 00:37:52.981 00:37:52.981 ' 00:37:52.981 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:37:52.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:52.981 --rc genhtml_branch_coverage=1 00:37:52.981 --rc genhtml_function_coverage=1 00:37:52.981 --rc genhtml_legend=1 00:37:52.981 --rc geninfo_all_blocks=1 00:37:52.981 --rc geninfo_unexecuted_blocks=1 00:37:52.981 00:37:52.981 ' 00:37:52.981 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:52.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:52.981 --rc genhtml_branch_coverage=1 00:37:52.981 --rc genhtml_function_coverage=1 00:37:52.981 --rc genhtml_legend=1 00:37:52.981 --rc geninfo_all_blocks=1 00:37:52.981 --rc geninfo_unexecuted_blocks=1 00:37:52.981 
00:37:52.981 ' 00:37:52.981 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:52.981 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:37:52.981 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:52.981 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:52.981 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:52.981 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:52.981 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:52.981 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:52.981 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:52.981 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:52.981 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:52.981 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:52.981 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:37:52.981 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:37:52.981 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:52.981 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:52.981 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:52.981 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:52.981 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:52.981 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:37:52.981 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:52.981 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:52.981 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:52.981 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:52.981 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:52.982 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:52.982 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:37:52.982 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:52.982 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:37:52.982 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:52.982 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:52.982 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:52.982 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:52.982 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:37:52.982 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:52.982 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:52.982 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:52.982 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:52.982 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:52.982 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:52.982 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:52.982 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:52.982 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:37:52.982 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:37:52.982 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:52.982 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:37:52.982 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:37:52.982 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:37:52.982 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:52.982 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:52.982 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:52.982 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:37:52.982 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:37:52.982 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:37:52.982 18:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:37:56.273 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:56.273 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:37:56.273 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:56.273 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:56.273 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:56.273 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:56.273 18:48:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:56.273 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:37:56.273 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:56.273 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:37:56.273 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:37:56.273 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:37:56.273 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:37:56.273 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:37:56.273 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:37:56.273 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:56.273 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:56.273 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:56.273 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:56.273 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:56.273 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:56.274 18:48:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:37:56.274 Found 0000:84:00.0 (0x8086 - 0x159b) 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:37:56.274 Found 0000:84:00.1 (0x8086 - 0x159b) 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:37:56.274 Found net 
devices under 0000:84:00.0: cvl_0_0 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:37:56.274 Found net devices under 0000:84:00.1: cvl_0_1 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:56.274 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:56.274 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:37:56.274 00:37:56.274 --- 10.0.0.2 ping statistics --- 00:37:56.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:56.274 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:56.274 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:56.274 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:37:56.274 00:37:56.274 --- 10.0.0.1 ping statistics --- 00:37:56.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:56.274 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=1391047 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 1391047 00:37:56.274 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 1391047 ']' 00:37:56.275 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:56.275 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:56.275 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:56.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:37:56.275 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:56.275 18:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:37:56.275 [2024-10-08 18:48:24.703129] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:56.275 [2024-10-08 18:48:24.705894] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:37:56.275 [2024-10-08 18:48:24.706012] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:56.578 [2024-10-08 18:48:24.865923] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:56.851 [2024-10-08 18:48:25.088508] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:56.851 [2024-10-08 18:48:25.088616] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:56.851 [2024-10-08 18:48:25.088671] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:56.851 [2024-10-08 18:48:25.088706] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:56.851 [2024-10-08 18:48:25.088734] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:56.851 [2024-10-08 18:48:25.092031] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:37:56.851 [2024-10-08 18:48:25.092132] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:37:56.851 [2024-10-08 18:48:25.092226] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:37:56.851 [2024-10-08 18:48:25.092229] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:37:56.851 [2024-10-08 18:48:25.251769] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:56.851 [2024-10-08 18:48:25.252072] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:56.851 [2024-10-08 18:48:25.252413] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:56.851 [2024-10-08 18:48:25.253558] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:56.851 [2024-10-08 18:48:25.253940] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
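The target above runs inside a private network namespace that nvmf/common.sh set up a few lines earlier, so initiator traffic and the SPDK target are kept on separate network stacks. Minus the xtrace prefixes, that setup plus the launch is roughly the following; the interface names and addresses are the ones discovered and assigned in this run, and the nvmf_tgt path is relative to the SPDK build tree.

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side e810 port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # the log additionally tags the rule with an SPDK_NVMF comment
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
    # start the target inside the namespace: core mask 0xF (4 cores), trace mask 0xFFFF, interrupt mode
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF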
00:37:56.851 18:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:56.851 18:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:37:56.851 18:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:37:56.851 18:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:56.851 18:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:37:56.851 18:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:56.851 18:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:57.109 [2024-10-08 18:48:25.641146] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:57.367 18:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:57.932 18:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:37:57.932 18:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:58.498 18:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:37:58.498 18:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:59.063 18:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:37:59.063 18:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:59.321 18:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:37:59.321 18:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:37:59.886 18:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:00.819 18:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:38:00.819 18:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:01.385 18:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:38:01.385 18:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:01.952 18:48:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:38:01.952 18:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:38:02.522 18:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:38:03.460 18:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:38:03.460 18:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:04.028 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:38:04.028 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:38:04.286 18:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:04.546 [2024-10-08 18:48:32.985334] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:04.546 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:38:05.115 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:38:05.685 18:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:38:05.685 18:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:38:05.685 18:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:38:05.685 18:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:38:05.685 18:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:38:05.685 18:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:38:05.685 18:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:38:08.220 18:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:38:08.220 18:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o 
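Taken together, the fio.sh setup traced above gives subsystem cnode1 four namespaces: two plain malloc bdevs, a RAID-0 over two more, and a concat bdev over three, which is why the serial check that follows counts four devices and the fio job file lists nvme0n1 through nvme0n4. Written as plain rpc.py calls (a sketch; paths are relative to the SPDK checkout and the Malloc names are the ones auto-assigned in the log):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512               # Malloc0
    scripts/rpc.py bdev_malloc_create 64 512               # Malloc1
    scripts/rpc.py bdev_malloc_create 64 512               # Malloc2
    scripts/rpc.py bdev_malloc_create 64 512               # Malloc3
    scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    scripts/rpc.py bdev_malloc_create 64 512               # Malloc4
    scripts/rpc.py bdev_malloc_create 64 512               # Malloc5
    scripts/rpc.py bdev_malloc_create 64 512               # Malloc6
    scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0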
NAME,SERIAL 00:38:08.220 18:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:38:08.220 18:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:38:08.220 18:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:38:08.220 18:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:38:08.220 18:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:38:08.220 [global] 00:38:08.220 thread=1 00:38:08.220 invalidate=1 00:38:08.220 rw=write 00:38:08.220 time_based=1 00:38:08.220 runtime=1 00:38:08.220 ioengine=libaio 00:38:08.220 direct=1 00:38:08.220 bs=4096 00:38:08.220 iodepth=1 00:38:08.220 norandommap=0 00:38:08.220 numjobs=1 00:38:08.220 00:38:08.220 verify_dump=1 00:38:08.220 verify_backlog=512 00:38:08.220 verify_state_save=0 00:38:08.220 do_verify=1 00:38:08.220 verify=crc32c-intel 00:38:08.220 [job0] 00:38:08.220 filename=/dev/nvme0n1 00:38:08.220 [job1] 00:38:08.220 filename=/dev/nvme0n2 00:38:08.220 [job2] 00:38:08.220 filename=/dev/nvme0n3 00:38:08.220 [job3] 00:38:08.220 filename=/dev/nvme0n4 00:38:08.220 Could not set queue depth (nvme0n1) 00:38:08.220 Could not set queue depth (nvme0n2) 00:38:08.220 Could not set queue depth (nvme0n3) 00:38:08.220 Could not set queue depth (nvme0n4) 00:38:08.220 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:08.220 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:08.220 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:08.220 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:08.220 fio-3.35 00:38:08.220 Starting 4 threads 00:38:09.159 00:38:09.159 job0: (groupid=0, jobs=1): err= 0: pid=1392508: Tue Oct 8 18:48:37 2024 00:38:09.159 read: IOPS=21, BW=85.7KiB/s (87.7kB/s)(88.0KiB/1027msec) 00:38:09.159 slat (nsec): min=7079, max=38825, avg=21978.36, stdev=9334.31 00:38:09.159 clat (usec): min=34973, max=42006, avg=40725.27, stdev=1308.01 00:38:09.159 lat (usec): min=34990, max=42023, avg=40747.25, stdev=1308.92 00:38:09.159 clat percentiles (usec): 00:38:09.159 | 1.00th=[34866], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:38:09.159 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:38:09.159 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:38:09.159 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:38:09.159 | 99.99th=[42206] 00:38:09.159 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:38:09.159 slat (nsec): min=7737, max=71929, avg=10144.24, stdev=3747.38 00:38:09.159 clat (usec): min=134, max=417, avg=240.47, stdev=49.54 00:38:09.159 lat (usec): min=142, max=427, avg=250.61, stdev=49.99 00:38:09.159 clat percentiles (usec): 00:38:09.159 | 1.00th=[ 145], 5.00th=[ 163], 10.00th=[ 208], 20.00th=[ 217], 00:38:09.159 | 30.00th=[ 221], 40.00th=[ 227], 50.00th=[ 233], 60.00th=[ 237], 00:38:09.159 | 70.00th=[ 241], 80.00th=[ 249], 90.00th=[ 310], 95.00th=[ 367], 00:38:09.159 
| 99.00th=[ 412], 99.50th=[ 416], 99.90th=[ 416], 99.95th=[ 416], 00:38:09.159 | 99.99th=[ 416] 00:38:09.159 bw ( KiB/s): min= 4096, max= 4096, per=51.35%, avg=4096.00, stdev= 0.00, samples=1 00:38:09.159 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:38:09.159 lat (usec) : 250=78.28%, 500=17.60% 00:38:09.159 lat (msec) : 50=4.12% 00:38:09.159 cpu : usr=0.19%, sys=0.78%, ctx=534, majf=0, minf=2 00:38:09.159 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:09.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:09.159 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:09.159 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:09.159 latency : target=0, window=0, percentile=100.00%, depth=1 00:38:09.159 job1: (groupid=0, jobs=1): err= 0: pid=1392509: Tue Oct 8 18:48:37 2024 00:38:09.159 read: IOPS=23, BW=93.7KiB/s (95.9kB/s)(96.0KiB/1025msec) 00:38:09.159 slat (nsec): min=9826, max=34362, avg=17847.92, stdev=5830.96 00:38:09.159 clat (usec): min=455, max=41101, avg=37551.13, stdev=11424.08 00:38:09.159 lat (usec): min=472, max=41118, avg=37568.98, stdev=11424.36 00:38:09.159 clat percentiles (usec): 00:38:09.159 | 1.00th=[ 457], 5.00th=[ 469], 10.00th=[40633], 20.00th=[40633], 00:38:09.159 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:38:09.159 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:38:09.159 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:38:09.159 | 99.99th=[41157] 00:38:09.159 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone resets 00:38:09.159 slat (nsec): min=6520, max=25622, avg=9909.64, stdev=2105.68 00:38:09.159 clat (usec): min=157, max=375, avg=228.79, stdev=15.62 00:38:09.159 lat (usec): min=166, max=384, avg=238.70, stdev=15.84 00:38:09.159 clat percentiles (usec): 00:38:09.159 | 1.00th=[ 198], 5.00th=[ 210], 10.00th=[ 215], 20.00th=[ 217], 00:38:09.159 | 30.00th=[ 221], 40.00th=[ 223], 50.00th=[ 229], 60.00th=[ 233], 00:38:09.159 | 70.00th=[ 237], 80.00th=[ 241], 90.00th=[ 247], 95.00th=[ 249], 00:38:09.159 | 99.00th=[ 265], 99.50th=[ 281], 99.90th=[ 375], 99.95th=[ 375], 00:38:09.159 | 99.99th=[ 375] 00:38:09.159 bw ( KiB/s): min= 4096, max= 4096, per=51.35%, avg=4096.00, stdev= 0.00, samples=1 00:38:09.159 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:38:09.159 lat (usec) : 250=91.23%, 500=4.66% 00:38:09.159 lat (msec) : 50=4.10% 00:38:09.159 cpu : usr=0.00%, sys=0.68%, ctx=536, majf=0, minf=2 00:38:09.159 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:09.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:09.159 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:09.159 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:09.159 latency : target=0, window=0, percentile=100.00%, depth=1 00:38:09.159 job2: (groupid=0, jobs=1): err= 0: pid=1392510: Tue Oct 8 18:48:37 2024 00:38:09.159 read: IOPS=21, BW=85.8KiB/s (87.8kB/s)(88.0KiB/1026msec) 00:38:09.159 slat (nsec): min=8555, max=25784, avg=15269.36, stdev=3557.92 00:38:09.159 clat (usec): min=40568, max=41080, avg=40956.11, stdev=98.33 00:38:09.159 lat (usec): min=40576, max=41096, avg=40971.38, stdev=99.45 00:38:09.159 clat percentiles (usec): 00:38:09.159 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:38:09.159 | 30.00th=[41157], 40.00th=[41157], 
50.00th=[41157], 60.00th=[41157], 00:38:09.159 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:38:09.159 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:38:09.159 | 99.99th=[41157] 00:38:09.159 write: IOPS=499, BW=1996KiB/s (2044kB/s)(2048KiB/1026msec); 0 zone resets 00:38:09.159 slat (nsec): min=9776, max=26976, avg=10634.63, stdev=1512.12 00:38:09.159 clat (usec): min=196, max=384, avg=228.24, stdev=16.00 00:38:09.159 lat (usec): min=206, max=395, avg=238.88, stdev=16.17 00:38:09.159 clat percentiles (usec): 00:38:09.159 | 1.00th=[ 202], 5.00th=[ 210], 10.00th=[ 212], 20.00th=[ 215], 00:38:09.159 | 30.00th=[ 219], 40.00th=[ 223], 50.00th=[ 227], 60.00th=[ 231], 00:38:09.159 | 70.00th=[ 237], 80.00th=[ 241], 90.00th=[ 247], 95.00th=[ 251], 00:38:09.159 | 99.00th=[ 269], 99.50th=[ 281], 99.90th=[ 383], 99.95th=[ 383], 00:38:09.159 | 99.99th=[ 383] 00:38:09.159 bw ( KiB/s): min= 4096, max= 4096, per=51.35%, avg=4096.00, stdev= 0.00, samples=1 00:38:09.159 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:38:09.159 lat (usec) : 250=90.82%, 500=5.06% 00:38:09.159 lat (msec) : 50=4.12% 00:38:09.159 cpu : usr=0.29%, sys=0.78%, ctx=535, majf=0, minf=1 00:38:09.159 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:09.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:09.159 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:09.159 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:09.159 latency : target=0, window=0, percentile=100.00%, depth=1 00:38:09.159 job3: (groupid=0, jobs=1): err= 0: pid=1392511: Tue Oct 8 18:48:37 2024 00:38:09.159 read: IOPS=21, BW=85.7KiB/s (87.7kB/s)(88.0KiB/1027msec) 00:38:09.159 slat (nsec): min=8930, max=16221, avg=14191.23, stdev=1474.83 00:38:09.159 clat (usec): min=40682, max=41065, avg=40969.81, stdev=74.01 00:38:09.159 lat (usec): min=40691, max=41080, avg=40984.00, stdev=74.96 00:38:09.159 clat percentiles (usec): 00:38:09.159 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:38:09.159 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:38:09.159 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:38:09.159 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:38:09.159 | 99.99th=[41157] 00:38:09.159 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:38:09.159 slat (nsec): min=9927, max=42978, avg=11327.64, stdev=2045.31 00:38:09.159 clat (usec): min=164, max=1114, avg=229.02, stdev=45.30 00:38:09.159 lat (usec): min=174, max=1127, avg=240.35, stdev=45.55 00:38:09.159 clat percentiles (usec): 00:38:09.159 | 1.00th=[ 174], 5.00th=[ 196], 10.00th=[ 206], 20.00th=[ 212], 00:38:09.159 | 30.00th=[ 217], 40.00th=[ 223], 50.00th=[ 227], 60.00th=[ 231], 00:38:09.160 | 70.00th=[ 237], 80.00th=[ 241], 90.00th=[ 249], 95.00th=[ 260], 00:38:09.160 | 99.00th=[ 310], 99.50th=[ 347], 99.90th=[ 1123], 99.95th=[ 1123], 00:38:09.160 | 99.99th=[ 1123] 00:38:09.160 bw ( KiB/s): min= 4096, max= 4096, per=51.35%, avg=4096.00, stdev= 0.00, samples=1 00:38:09.160 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:38:09.160 lat (usec) : 250=86.89%, 500=8.80% 00:38:09.160 lat (msec) : 2=0.19%, 50=4.12% 00:38:09.160 cpu : usr=0.49%, sys=0.58%, ctx=536, majf=0, minf=1 00:38:09.160 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:09.160 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:09.160 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:09.160 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:09.160 latency : target=0, window=0, percentile=100.00%, depth=1 00:38:09.160 00:38:09.160 Run status group 0 (all jobs): 00:38:09.160 READ: bw=351KiB/s (359kB/s), 85.7KiB/s-93.7KiB/s (87.7kB/s-95.9kB/s), io=360KiB (369kB), run=1025-1027msec 00:38:09.160 WRITE: bw=7977KiB/s (8168kB/s), 1994KiB/s-1998KiB/s (2042kB/s-2046kB/s), io=8192KiB (8389kB), run=1025-1027msec 00:38:09.160 00:38:09.160 Disk stats (read/write): 00:38:09.160 nvme0n1: ios=67/512, merge=0/0, ticks=724/124, in_queue=848, util=86.87% 00:38:09.160 nvme0n2: ios=68/512, merge=0/0, ticks=758/117, in_queue=875, util=90.85% 00:38:09.160 nvme0n3: ios=43/512, merge=0/0, ticks=1602/116, in_queue=1718, util=93.74% 00:38:09.160 nvme0n4: ios=74/512, merge=0/0, ticks=862/113, in_queue=975, util=94.43% 00:38:09.160 18:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:38:09.418 [global] 00:38:09.418 thread=1 00:38:09.418 invalidate=1 00:38:09.418 rw=randwrite 00:38:09.418 time_based=1 00:38:09.418 runtime=1 00:38:09.418 ioengine=libaio 00:38:09.418 direct=1 00:38:09.418 bs=4096 00:38:09.418 iodepth=1 00:38:09.418 norandommap=0 00:38:09.418 numjobs=1 00:38:09.418 00:38:09.418 verify_dump=1 00:38:09.418 verify_backlog=512 00:38:09.418 verify_state_save=0 00:38:09.418 do_verify=1 00:38:09.418 verify=crc32c-intel 00:38:09.418 [job0] 00:38:09.418 filename=/dev/nvme0n1 00:38:09.418 [job1] 00:38:09.418 filename=/dev/nvme0n2 00:38:09.418 [job2] 00:38:09.418 filename=/dev/nvme0n3 00:38:09.418 [job3] 00:38:09.418 filename=/dev/nvme0n4 00:38:09.418 Could not set queue depth (nvme0n1) 00:38:09.418 Could not set queue depth (nvme0n2) 00:38:09.418 Could not set queue depth (nvme0n3) 00:38:09.418 Could not set queue depth (nvme0n4) 00:38:09.418 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:09.418 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:09.418 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:09.418 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:09.418 fio-3.35 00:38:09.418 Starting 4 threads 00:38:10.790 00:38:10.790 job0: (groupid=0, jobs=1): err= 0: pid=1392738: Tue Oct 8 18:48:39 2024 00:38:10.790 read: IOPS=22, BW=91.4KiB/s (93.6kB/s)(92.0KiB/1007msec) 00:38:10.790 slat (nsec): min=8863, max=16663, avg=12572.09, stdev=1866.74 00:38:10.790 clat (usec): min=377, max=42041, avg=38299.40, stdev=9486.23 00:38:10.790 lat (usec): min=388, max=42051, avg=38311.97, stdev=9486.55 00:38:10.790 clat percentiles (usec): 00:38:10.790 | 1.00th=[ 379], 5.00th=[18744], 10.00th=[40633], 20.00th=[41157], 00:38:10.790 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:38:10.790 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:38:10.790 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:38:10.790 | 99.99th=[42206] 00:38:10.790 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:38:10.790 slat (nsec): min=9055, max=42644, avg=11605.93, stdev=3780.32 
00:38:10.790 clat (usec): min=151, max=459, avg=230.35, stdev=32.83 00:38:10.790 lat (usec): min=161, max=489, avg=241.96, stdev=33.34 00:38:10.790 clat percentiles (usec): 00:38:10.790 | 1.00th=[ 161], 5.00th=[ 178], 10.00th=[ 186], 20.00th=[ 206], 00:38:10.790 | 30.00th=[ 223], 40.00th=[ 231], 50.00th=[ 235], 60.00th=[ 239], 00:38:10.790 | 70.00th=[ 241], 80.00th=[ 245], 90.00th=[ 255], 95.00th=[ 273], 00:38:10.790 | 99.00th=[ 351], 99.50th=[ 375], 99.90th=[ 461], 99.95th=[ 461], 00:38:10.790 | 99.99th=[ 461] 00:38:10.790 bw ( KiB/s): min= 4096, max= 4096, per=21.13%, avg=4096.00, stdev= 0.00, samples=1 00:38:10.790 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:38:10.790 lat (usec) : 250=82.24%, 500=13.64% 00:38:10.790 lat (msec) : 20=0.19%, 50=3.93% 00:38:10.790 cpu : usr=0.20%, sys=0.80%, ctx=535, majf=0, minf=1 00:38:10.790 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:10.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:10.790 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:10.790 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:10.790 latency : target=0, window=0, percentile=100.00%, depth=1 00:38:10.790 job1: (groupid=0, jobs=1): err= 0: pid=1392742: Tue Oct 8 18:48:39 2024 00:38:10.790 read: IOPS=21, BW=86.6KiB/s (88.7kB/s)(88.0KiB/1016msec) 00:38:10.790 slat (nsec): min=9512, max=36942, avg=17898.77, stdev=6693.90 00:38:10.790 clat (usec): min=40386, max=41979, avg=41088.19, stdev=378.53 00:38:10.790 lat (usec): min=40396, max=41994, avg=41106.08, stdev=378.06 00:38:10.790 clat percentiles (usec): 00:38:10.790 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:38:10.790 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:38:10.790 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:38:10.790 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:38:10.790 | 99.99th=[42206] 00:38:10.790 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:38:10.790 slat (nsec): min=9900, max=44079, avg=12420.75, stdev=4039.18 00:38:10.790 clat (usec): min=157, max=383, avg=202.10, stdev=38.74 00:38:10.790 lat (usec): min=167, max=400, avg=214.52, stdev=39.79 00:38:10.790 clat percentiles (usec): 00:38:10.790 | 1.00th=[ 159], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 174], 00:38:10.790 | 30.00th=[ 178], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 192], 00:38:10.790 | 70.00th=[ 225], 80.00th=[ 239], 90.00th=[ 249], 95.00th=[ 262], 00:38:10.790 | 99.00th=[ 355], 99.50th=[ 359], 99.90th=[ 383], 99.95th=[ 383], 00:38:10.790 | 99.99th=[ 383] 00:38:10.790 bw ( KiB/s): min= 4096, max= 4096, per=21.13%, avg=4096.00, stdev= 0.00, samples=1 00:38:10.790 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:38:10.790 lat (usec) : 250=87.27%, 500=8.61% 00:38:10.790 lat (msec) : 50=4.12% 00:38:10.790 cpu : usr=0.30%, sys=0.59%, ctx=535, majf=0, minf=1 00:38:10.790 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:10.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:10.790 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:10.790 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:10.790 latency : target=0, window=0, percentile=100.00%, depth=1 00:38:10.790 job2: (groupid=0, jobs=1): err= 0: pid=1392743: Tue Oct 8 18:48:39 2024 00:38:10.790 read: IOPS=2045, 
BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:38:10.790 slat (nsec): min=6792, max=37145, avg=8681.32, stdev=2076.54 00:38:10.790 clat (usec): min=204, max=1105, avg=246.69, stdev=41.73 00:38:10.790 lat (usec): min=213, max=1117, avg=255.37, stdev=42.25 00:38:10.790 clat percentiles (usec): 00:38:10.790 | 1.00th=[ 212], 5.00th=[ 217], 10.00th=[ 219], 20.00th=[ 227], 00:38:10.790 | 30.00th=[ 233], 40.00th=[ 239], 50.00th=[ 243], 60.00th=[ 247], 00:38:10.790 | 70.00th=[ 251], 80.00th=[ 255], 90.00th=[ 265], 95.00th=[ 281], 00:38:10.790 | 99.00th=[ 469], 99.50th=[ 494], 99.90th=[ 510], 99.95th=[ 1012], 00:38:10.790 | 99.99th=[ 1106] 00:38:10.790 write: IOPS=2360, BW=9443KiB/s (9669kB/s)(9452KiB/1001msec); 0 zone resets 00:38:10.790 slat (nsec): min=8347, max=33996, avg=10892.14, stdev=2774.60 00:38:10.790 clat (usec): min=144, max=904, avg=185.87, stdev=32.79 00:38:10.790 lat (usec): min=155, max=915, avg=196.77, stdev=33.07 00:38:10.790 clat percentiles (usec): 00:38:10.790 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 165], 00:38:10.790 | 30.00th=[ 169], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 186], 00:38:10.790 | 70.00th=[ 194], 80.00th=[ 202], 90.00th=[ 221], 95.00th=[ 235], 00:38:10.790 | 99.00th=[ 273], 99.50th=[ 322], 99.90th=[ 594], 99.95th=[ 652], 00:38:10.790 | 99.99th=[ 906] 00:38:10.790 bw ( KiB/s): min= 8848, max= 8848, per=45.65%, avg=8848.00, stdev= 0.00, samples=1 00:38:10.790 iops : min= 2212, max= 2212, avg=2212.00, stdev= 0.00, samples=1 00:38:10.790 lat (usec) : 250=85.13%, 500=14.62%, 750=0.18%, 1000=0.02% 00:38:10.790 lat (msec) : 2=0.05% 00:38:10.790 cpu : usr=2.00%, sys=7.10%, ctx=4411, majf=0, minf=1 00:38:10.790 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:10.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:10.790 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:10.790 issued rwts: total=2048,2363,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:10.790 latency : target=0, window=0, percentile=100.00%, depth=1 00:38:10.790 job3: (groupid=0, jobs=1): err= 0: pid=1392744: Tue Oct 8 18:48:39 2024 00:38:10.790 read: IOPS=1187, BW=4749KiB/s (4863kB/s)(4768KiB/1004msec) 00:38:10.790 slat (nsec): min=7892, max=45160, avg=10551.22, stdev=4438.11 00:38:10.790 clat (usec): min=207, max=41292, avg=547.94, stdev=3329.39 00:38:10.790 lat (usec): min=216, max=41305, avg=558.49, stdev=3329.94 00:38:10.790 clat percentiles (usec): 00:38:10.790 | 1.00th=[ 235], 5.00th=[ 243], 10.00th=[ 247], 20.00th=[ 253], 00:38:10.790 | 30.00th=[ 260], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 277], 00:38:10.790 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 314], 95.00th=[ 330], 00:38:10.790 | 99.00th=[ 383], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:38:10.790 | 99.99th=[41157] 00:38:10.790 write: IOPS=1529, BW=6120KiB/s (6266kB/s)(6144KiB/1004msec); 0 zone resets 00:38:10.790 slat (nsec): min=9986, max=56288, avg=12795.59, stdev=4316.03 00:38:10.790 clat (usec): min=162, max=528, avg=201.35, stdev=23.46 00:38:10.790 lat (usec): min=177, max=541, avg=214.14, stdev=24.36 00:38:10.790 clat percentiles (usec): 00:38:10.790 | 1.00th=[ 169], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 182], 00:38:10.790 | 30.00th=[ 186], 40.00th=[ 192], 50.00th=[ 198], 60.00th=[ 204], 00:38:10.790 | 70.00th=[ 212], 80.00th=[ 221], 90.00th=[ 233], 95.00th=[ 241], 00:38:10.790 | 99.00th=[ 260], 99.50th=[ 273], 99.90th=[ 351], 99.95th=[ 529], 00:38:10.790 | 99.99th=[ 529] 00:38:10.790 bw ( KiB/s): min= 4096, max= 
8192, per=31.70%, avg=6144.00, stdev=2896.31, samples=2 00:38:10.790 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:38:10.790 lat (usec) : 250=61.55%, 500=38.12%, 750=0.04% 00:38:10.790 lat (msec) : 50=0.29% 00:38:10.790 cpu : usr=1.40%, sys=3.99%, ctx=2730, majf=0, minf=1 00:38:10.790 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:10.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:10.790 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:10.790 issued rwts: total=1192,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:10.790 latency : target=0, window=0, percentile=100.00%, depth=1 00:38:10.790 00:38:10.790 Run status group 0 (all jobs): 00:38:10.790 READ: bw=12.6MiB/s (13.2MB/s), 86.6KiB/s-8184KiB/s (88.7kB/s-8380kB/s), io=12.8MiB (13.5MB), run=1001-1016msec 00:38:10.790 WRITE: bw=18.9MiB/s (19.8MB/s), 2016KiB/s-9443KiB/s (2064kB/s-9669kB/s), io=19.2MiB (20.2MB), run=1001-1016msec 00:38:10.790 00:38:10.790 Disk stats (read/write): 00:38:10.790 nvme0n1: ios=68/512, merge=0/0, ticks=853/111, in_queue=964, util=90.48% 00:38:10.790 nvme0n2: ios=67/512, merge=0/0, ticks=880/100, in_queue=980, util=97.45% 00:38:10.790 nvme0n3: ios=1641/2048, merge=0/0, ticks=394/379, in_queue=773, util=88.67% 00:38:10.790 nvme0n4: ios=1228/1536, merge=0/0, ticks=1202/298, in_queue=1500, util=97.88% 00:38:10.790 18:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:38:10.790 [global] 00:38:10.790 thread=1 00:38:10.790 invalidate=1 00:38:10.790 rw=write 00:38:10.790 time_based=1 00:38:10.790 runtime=1 00:38:10.790 ioengine=libaio 00:38:10.790 direct=1 00:38:10.790 bs=4096 00:38:10.790 iodepth=128 00:38:10.790 norandommap=0 00:38:10.790 numjobs=1 00:38:10.790 00:38:10.790 verify_dump=1 00:38:10.790 verify_backlog=512 00:38:10.790 verify_state_save=0 00:38:10.790 do_verify=1 00:38:10.790 verify=crc32c-intel 00:38:10.790 [job0] 00:38:10.790 filename=/dev/nvme0n1 00:38:10.790 [job1] 00:38:10.790 filename=/dev/nvme0n2 00:38:10.790 [job2] 00:38:10.790 filename=/dev/nvme0n3 00:38:10.790 [job3] 00:38:10.790 filename=/dev/nvme0n4 00:38:10.790 Could not set queue depth (nvme0n1) 00:38:10.790 Could not set queue depth (nvme0n2) 00:38:10.790 Could not set queue depth (nvme0n3) 00:38:10.790 Could not set queue depth (nvme0n4) 00:38:11.049 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:38:11.049 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:38:11.049 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:38:11.049 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:38:11.049 fio-3.35 00:38:11.049 Starting 4 threads 00:38:12.423 00:38:12.423 job0: (groupid=0, jobs=1): err= 0: pid=1392966: Tue Oct 8 18:48:40 2024 00:38:12.423 read: IOPS=3541, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1012msec) 00:38:12.423 slat (usec): min=3, max=11411, avg=117.00, stdev=701.13 00:38:12.423 clat (usec): min=6142, max=73680, avg=15460.82, stdev=6953.23 00:38:12.423 lat (usec): min=6568, max=76087, avg=15577.82, stdev=6997.65 00:38:12.423 clat percentiles (usec): 00:38:12.423 | 1.00th=[ 7701], 5.00th=[ 8848], 10.00th=[ 9634], 20.00th=[10683], 00:38:12.423 | 
30.00th=[11600], 40.00th=[13042], 50.00th=[14091], 60.00th=[15795], 00:38:12.423 | 70.00th=[17171], 80.00th=[18482], 90.00th=[20841], 95.00th=[25035], 00:38:12.423 | 99.00th=[43254], 99.50th=[54264], 99.90th=[73925], 99.95th=[73925], 00:38:12.423 | 99.99th=[73925] 00:38:12.423 write: IOPS=3602, BW=14.1MiB/s (14.8MB/s)(14.2MiB/1012msec); 0 zone resets 00:38:12.424 slat (usec): min=5, max=11607, avg=152.89, stdev=838.24 00:38:12.424 clat (msec): min=5, max=118, avg=19.93, stdev=17.55 00:38:12.424 lat (msec): min=5, max=118, avg=20.09, stdev=17.66 00:38:12.424 clat percentiles (msec): 00:38:12.424 | 1.00th=[ 8], 5.00th=[ 11], 10.00th=[ 11], 20.00th=[ 11], 00:38:12.424 | 30.00th=[ 12], 40.00th=[ 14], 50.00th=[ 16], 60.00th=[ 17], 00:38:12.424 | 70.00th=[ 18], 80.00th=[ 23], 90.00th=[ 26], 95.00th=[ 57], 00:38:12.424 | 99.00th=[ 109], 99.50th=[ 111], 99.90th=[ 120], 99.95th=[ 120], 00:38:12.424 | 99.99th=[ 120] 00:38:12.424 bw ( KiB/s): min= 8272, max=20400, per=22.08%, avg=14336.00, stdev=8575.79, samples=2 00:38:12.424 iops : min= 2068, max= 5100, avg=3584.00, stdev=2143.95, samples=2 00:38:12.424 lat (msec) : 10=9.61%, 20=69.67%, 50=17.21%, 100=2.75%, 250=0.76% 00:38:12.424 cpu : usr=4.25%, sys=4.35%, ctx=402, majf=0, minf=1 00:38:12.424 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:38:12.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:12.424 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:38:12.424 issued rwts: total=3584,3646,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:12.424 latency : target=0, window=0, percentile=100.00%, depth=128 00:38:12.424 job1: (groupid=0, jobs=1): err= 0: pid=1392968: Tue Oct 8 18:48:40 2024 00:38:12.424 read: IOPS=4571, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1008msec) 00:38:12.424 slat (usec): min=2, max=11940, avg=111.85, stdev=765.30 00:38:12.424 clat (usec): min=5856, max=47363, avg=15028.11, stdev=8463.33 00:38:12.424 lat (usec): min=5865, max=47370, avg=15139.96, stdev=8510.10 00:38:12.424 clat percentiles (usec): 00:38:12.424 | 1.00th=[ 8356], 5.00th=[ 8848], 10.00th=[ 9765], 20.00th=[10290], 00:38:12.424 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11338], 60.00th=[11863], 00:38:12.424 | 70.00th=[14484], 80.00th=[16450], 90.00th=[32113], 95.00th=[38011], 00:38:12.424 | 99.00th=[40633], 99.50th=[41157], 99.90th=[45351], 99.95th=[47449], 00:38:12.424 | 99.99th=[47449] 00:38:12.424 write: IOPS=4640, BW=18.1MiB/s (19.0MB/s)(18.3MiB/1008msec); 0 zone resets 00:38:12.424 slat (usec): min=3, max=13811, avg=98.54, stdev=612.25 00:38:12.424 clat (usec): min=1192, max=31622, avg=12404.16, stdev=4232.83 00:38:12.424 lat (usec): min=2807, max=31629, avg=12502.70, stdev=4255.65 00:38:12.424 clat percentiles (usec): 00:38:12.424 | 1.00th=[ 4621], 5.00th=[ 6915], 10.00th=[ 8717], 20.00th=[ 9634], 00:38:12.424 | 30.00th=[10421], 40.00th=[10683], 50.00th=[11207], 60.00th=[11731], 00:38:12.424 | 70.00th=[12256], 80.00th=[15533], 90.00th=[18220], 95.00th=[21627], 00:38:12.424 | 99.00th=[24249], 99.50th=[26346], 99.90th=[31589], 99.95th=[31589], 00:38:12.424 | 99.99th=[31589] 00:38:12.424 bw ( KiB/s): min=17464, max=19400, per=28.38%, avg=18432.00, stdev=1368.96, samples=2 00:38:12.424 iops : min= 4366, max= 4850, avg=4608.00, stdev=342.24, samples=2 00:38:12.424 lat (msec) : 2=0.01%, 4=0.14%, 10=20.16%, 20=68.66%, 50=11.03% 00:38:12.424 cpu : usr=3.48%, sys=5.56%, ctx=393, majf=0, minf=1 00:38:12.424 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:38:12.424 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:12.424 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:38:12.424 issued rwts: total=4608,4678,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:12.424 latency : target=0, window=0, percentile=100.00%, depth=128 00:38:12.424 job2: (groupid=0, jobs=1): err= 0: pid=1392972: Tue Oct 8 18:48:40 2024 00:38:12.424 read: IOPS=2656, BW=10.4MiB/s (10.9MB/s)(10.5MiB/1011msec) 00:38:12.424 slat (usec): min=2, max=13190, avg=175.06, stdev=1057.64 00:38:12.424 clat (usec): min=2262, max=62958, avg=21749.80, stdev=13169.50 00:38:12.424 lat (usec): min=3744, max=62966, avg=21924.86, stdev=13228.45 00:38:12.424 clat percentiles (usec): 00:38:12.424 | 1.00th=[ 5473], 5.00th=[10290], 10.00th=[11338], 20.00th=[12649], 00:38:12.424 | 30.00th=[12911], 40.00th=[13566], 50.00th=[16581], 60.00th=[17695], 00:38:12.424 | 70.00th=[23987], 80.00th=[33817], 90.00th=[38011], 95.00th=[56361], 00:38:12.424 | 99.00th=[60031], 99.50th=[62653], 99.90th=[63177], 99.95th=[63177], 00:38:12.424 | 99.99th=[63177] 00:38:12.424 write: IOPS=3038, BW=11.9MiB/s (12.4MB/s)(12.0MiB/1011msec); 0 zone resets 00:38:12.424 slat (usec): min=3, max=37652, avg=168.39, stdev=1107.93 00:38:12.424 clat (usec): min=4338, max=58695, avg=22176.70, stdev=9114.06 00:38:12.424 lat (usec): min=4348, max=58705, avg=22345.10, stdev=9167.15 00:38:12.424 clat percentiles (usec): 00:38:12.424 | 1.00th=[ 8029], 5.00th=[10683], 10.00th=[10945], 20.00th=[13173], 00:38:12.424 | 30.00th=[16581], 40.00th=[18482], 50.00th=[22414], 60.00th=[22938], 00:38:12.424 | 70.00th=[25035], 80.00th=[29492], 90.00th=[37487], 95.00th=[39060], 00:38:12.424 | 99.00th=[44303], 99.50th=[44827], 99.90th=[54789], 99.95th=[58459], 00:38:12.424 | 99.99th=[58459] 00:38:12.424 bw ( KiB/s): min= 9144, max=15416, per=18.91%, avg=12280.00, stdev=4434.97, samples=2 00:38:12.424 iops : min= 2286, max= 3854, avg=3070.00, stdev=1108.74, samples=2 00:38:12.424 lat (msec) : 4=0.38%, 10=2.17%, 20=49.31%, 50=45.00%, 100=3.14% 00:38:12.424 cpu : usr=2.08%, sys=4.55%, ctx=257, majf=0, minf=1 00:38:12.424 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:38:12.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:12.424 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:38:12.424 issued rwts: total=2686,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:12.424 latency : target=0, window=0, percentile=100.00%, depth=128 00:38:12.424 job3: (groupid=0, jobs=1): err= 0: pid=1392973: Tue Oct 8 18:48:40 2024 00:38:12.424 read: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec) 00:38:12.424 slat (usec): min=2, max=19288, avg=103.76, stdev=812.54 00:38:12.424 clat (usec): min=4555, max=39434, avg=13596.37, stdev=4523.69 00:38:12.424 lat (usec): min=4564, max=39450, avg=13700.12, stdev=4571.77 00:38:12.424 clat percentiles (usec): 00:38:12.424 | 1.00th=[ 6456], 5.00th=[10028], 10.00th=[10814], 20.00th=[11076], 00:38:12.424 | 30.00th=[11469], 40.00th=[11600], 50.00th=[11863], 60.00th=[13042], 00:38:12.424 | 70.00th=[13304], 80.00th=[14615], 90.00th=[19792], 95.00th=[21890], 00:38:12.424 | 99.00th=[33162], 99.50th=[34866], 99.90th=[34866], 99.95th=[34866], 00:38:12.424 | 99.99th=[39584] 00:38:12.424 write: IOPS=4999, BW=19.5MiB/s (20.5MB/s)(19.7MiB/1007msec); 0 zone resets 00:38:12.424 slat (usec): min=3, max=11855, avg=96.26, stdev=572.94 00:38:12.424 clat (usec): min=1492, max=30028, avg=12836.81, stdev=4311.00 00:38:12.424 lat (usec): min=1502, 
max=30038, avg=12933.07, stdev=4337.25 00:38:12.424 clat percentiles (usec): 00:38:12.424 | 1.00th=[ 4359], 5.00th=[ 7308], 10.00th=[ 8029], 20.00th=[10552], 00:38:12.424 | 30.00th=[11731], 40.00th=[12256], 50.00th=[12518], 60.00th=[12649], 00:38:12.424 | 70.00th=[12780], 80.00th=[13304], 90.00th=[16909], 95.00th=[22938], 00:38:12.424 | 99.00th=[29754], 99.50th=[29754], 99.90th=[30016], 99.95th=[30016], 00:38:12.424 | 99.99th=[30016] 00:38:12.424 bw ( KiB/s): min=18768, max=20480, per=30.22%, avg=19624.00, stdev=1210.57, samples=2 00:38:12.424 iops : min= 4692, max= 5120, avg=4906.00, stdev=302.64, samples=2 00:38:12.424 lat (msec) : 2=0.04%, 4=0.10%, 10=9.60%, 20=82.21%, 50=8.04% 00:38:12.424 cpu : usr=3.98%, sys=6.66%, ctx=448, majf=0, minf=1 00:38:12.424 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:38:12.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:12.424 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:38:12.424 issued rwts: total=4608,5034,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:12.424 latency : target=0, window=0, percentile=100.00%, depth=128 00:38:12.424 00:38:12.424 Run status group 0 (all jobs): 00:38:12.424 READ: bw=59.8MiB/s (62.7MB/s), 10.4MiB/s-17.9MiB/s (10.9MB/s-18.7MB/s), io=60.5MiB (63.4MB), run=1007-1012msec 00:38:12.424 WRITE: bw=63.4MiB/s (66.5MB/s), 11.9MiB/s-19.5MiB/s (12.4MB/s-20.5MB/s), io=64.2MiB (67.3MB), run=1007-1012msec 00:38:12.424 00:38:12.424 Disk stats (read/write): 00:38:12.424 nvme0n1: ios=3122/3143, merge=0/0, ticks=23444/25066, in_queue=48510, util=85.47% 00:38:12.424 nvme0n2: ios=3676/4096, merge=0/0, ticks=27595/26702, in_queue=54297, util=100.00% 00:38:12.424 nvme0n3: ios=2090/2560, merge=0/0, ticks=25789/29921, in_queue=55710, util=96.50% 00:38:12.424 nvme0n4: ios=3753/4096, merge=0/0, ticks=35465/34769, in_queue=70234, util=89.41% 00:38:12.424 18:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:38:12.424 [global] 00:38:12.424 thread=1 00:38:12.424 invalidate=1 00:38:12.424 rw=randwrite 00:38:12.424 time_based=1 00:38:12.424 runtime=1 00:38:12.424 ioengine=libaio 00:38:12.424 direct=1 00:38:12.424 bs=4096 00:38:12.424 iodepth=128 00:38:12.424 norandommap=0 00:38:12.424 numjobs=1 00:38:12.424 00:38:12.424 verify_dump=1 00:38:12.424 verify_backlog=512 00:38:12.424 verify_state_save=0 00:38:12.424 do_verify=1 00:38:12.424 verify=crc32c-intel 00:38:12.424 [job0] 00:38:12.424 filename=/dev/nvme0n1 00:38:12.424 [job1] 00:38:12.424 filename=/dev/nvme0n2 00:38:12.424 [job2] 00:38:12.424 filename=/dev/nvme0n3 00:38:12.424 [job3] 00:38:12.424 filename=/dev/nvme0n4 00:38:12.424 Could not set queue depth (nvme0n1) 00:38:12.424 Could not set queue depth (nvme0n2) 00:38:12.424 Could not set queue depth (nvme0n3) 00:38:12.424 Could not set queue depth (nvme0n4) 00:38:12.424 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:38:12.424 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:38:12.424 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:38:12.424 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:38:12.424 fio-3.35 00:38:12.424 Starting 4 threads 00:38:13.800 
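Note for anyone reproducing this step outside the harness: the [global]/[jobN] parameters that fio-wrapper just dumped for the randwrite/iodepth=128 pass correspond, roughly, to the direct fio invocation sketched below. The device path and job name are taken from this log; the exact command line fio-wrapper assembles internally may differ.

    # illustrative sketch only -- mirrors the randwrite, bs=4096, iodepth=128 job dump above
    fio --name=job0 --filename=/dev/nvme0n1 \
        --ioengine=libaio --direct=1 --rw=randwrite --bs=4096 --iodepth=128 \
        --numjobs=1 --time_based --runtime=1 \
        --do_verify=1 --verify=crc32c-intel --verify_backlog=512 --verify_dump=1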
00:38:13.800 job0: (groupid=0, jobs=1): err= 0: pid=1393203: Tue Oct 8 18:48:42 2024 00:38:13.800 read: IOPS=5704, BW=22.3MiB/s (23.4MB/s)(23.3MiB/1045msec) 00:38:13.800 slat (usec): min=2, max=9771, avg=85.42, stdev=693.81 00:38:13.800 clat (usec): min=4838, max=57859, avg=11865.79, stdev=6550.85 00:38:13.800 lat (usec): min=4843, max=57864, avg=11951.21, stdev=6582.73 00:38:13.800 clat percentiles (usec): 00:38:13.800 | 1.00th=[ 6521], 5.00th=[ 7832], 10.00th=[ 8586], 20.00th=[ 9241], 00:38:13.800 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[10552], 00:38:13.800 | 70.00th=[10814], 80.00th=[13042], 90.00th=[16450], 95.00th=[18482], 00:38:13.800 | 99.00th=[50594], 99.50th=[53216], 99.90th=[57934], 99.95th=[57934], 00:38:13.800 | 99.99th=[57934] 00:38:13.800 write: IOPS=5879, BW=23.0MiB/s (24.1MB/s)(24.0MiB/1045msec); 0 zone resets 00:38:13.800 slat (usec): min=4, max=8747, avg=73.60, stdev=512.29 00:38:13.800 clat (usec): min=1021, max=20267, avg=10057.95, stdev=2268.72 00:38:13.800 lat (usec): min=1030, max=20273, avg=10131.56, stdev=2299.70 00:38:13.800 clat percentiles (usec): 00:38:13.800 | 1.00th=[ 5604], 5.00th=[ 6325], 10.00th=[ 7111], 20.00th=[ 8225], 00:38:13.800 | 30.00th=[ 9110], 40.00th=[ 9765], 50.00th=[10159], 60.00th=[10683], 00:38:13.800 | 70.00th=[10945], 80.00th=[11207], 90.00th=[13042], 95.00th=[14222], 00:38:13.800 | 99.00th=[15533], 99.50th=[17695], 99.90th=[20055], 99.95th=[20055], 00:38:13.800 | 99.99th=[20317] 00:38:13.800 bw ( KiB/s): min=24576, max=24576, per=36.14%, avg=24576.00, stdev= 0.00, samples=2 00:38:13.800 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=2 00:38:13.800 lat (msec) : 2=0.02%, 4=0.15%, 10=46.19%, 20=52.29%, 50=0.31% 00:38:13.800 lat (msec) : 100=1.04% 00:38:13.801 cpu : usr=4.21%, sys=7.66%, ctx=483, majf=0, minf=1 00:38:13.801 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:38:13.801 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:13.801 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:38:13.801 issued rwts: total=5961,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:13.801 latency : target=0, window=0, percentile=100.00%, depth=128 00:38:13.801 job1: (groupid=0, jobs=1): err= 0: pid=1393204: Tue Oct 8 18:48:42 2024 00:38:13.801 read: IOPS=3297, BW=12.9MiB/s (13.5MB/s)(13.0MiB/1008msec) 00:38:13.801 slat (usec): min=3, max=15088, avg=105.21, stdev=834.06 00:38:13.801 clat (usec): min=2914, max=47996, avg=15050.30, stdev=6359.22 00:38:13.801 lat (usec): min=4122, max=48003, avg=15155.52, stdev=6420.28 00:38:13.801 clat percentiles (usec): 00:38:13.801 | 1.00th=[ 5669], 5.00th=[ 8291], 10.00th=[11600], 20.00th=[11863], 00:38:13.801 | 30.00th=[12125], 40.00th=[12387], 50.00th=[12518], 60.00th=[13304], 00:38:13.801 | 70.00th=[14746], 80.00th=[18220], 90.00th=[22414], 95.00th=[31065], 00:38:13.801 | 99.00th=[42206], 99.50th=[45351], 99.90th=[47973], 99.95th=[47973], 00:38:13.801 | 99.99th=[47973] 00:38:13.801 write: IOPS=3555, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1008msec); 0 zone resets 00:38:13.801 slat (usec): min=3, max=29987, avg=158.49, stdev=1119.72 00:38:13.801 clat (usec): min=2933, max=66626, avg=21650.08, stdev=15344.43 00:38:13.801 lat (usec): min=2941, max=66633, avg=21808.57, stdev=15447.06 00:38:13.801 clat percentiles (usec): 00:38:13.801 | 1.00th=[ 5407], 5.00th=[ 8717], 10.00th=[10290], 20.00th=[11076], 00:38:13.801 | 30.00th=[11338], 40.00th=[11863], 50.00th=[16188], 60.00th=[22676], 00:38:13.801 | 70.00th=[23462], 
80.00th=[25035], 90.00th=[45351], 95.00th=[64226], 00:38:13.801 | 99.00th=[66323], 99.50th=[66323], 99.90th=[66847], 99.95th=[66847], 00:38:13.801 | 99.99th=[66847] 00:38:13.801 bw ( KiB/s): min=13136, max=15536, per=21.08%, avg=14336.00, stdev=1697.06, samples=2 00:38:13.801 iops : min= 3284, max= 3884, avg=3584.00, stdev=424.26, samples=2 00:38:13.801 lat (msec) : 4=0.10%, 10=7.61%, 20=62.88%, 50=24.57%, 100=4.83% 00:38:13.801 cpu : usr=2.78%, sys=4.87%, ctx=295, majf=0, minf=1 00:38:13.801 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:38:13.801 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:13.801 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:38:13.801 issued rwts: total=3324,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:13.801 latency : target=0, window=0, percentile=100.00%, depth=128 00:38:13.801 job2: (groupid=0, jobs=1): err= 0: pid=1393205: Tue Oct 8 18:48:42 2024 00:38:13.801 read: IOPS=2019, BW=8079KiB/s (8273kB/s)(8192KiB/1014msec) 00:38:13.801 slat (usec): min=3, max=14864, avg=149.62, stdev=1015.83 00:38:13.801 clat (usec): min=4603, max=45818, avg=17986.31, stdev=7182.09 00:38:13.801 lat (usec): min=4611, max=45825, avg=18135.92, stdev=7252.04 00:38:13.801 clat percentiles (usec): 00:38:13.801 | 1.00th=[ 4817], 5.00th=[12256], 10.00th=[12649], 20.00th=[12911], 00:38:13.801 | 30.00th=[13173], 40.00th=[13435], 50.00th=[13566], 60.00th=[18220], 00:38:13.801 | 70.00th=[19792], 80.00th=[23987], 90.00th=[31589], 95.00th=[32375], 00:38:13.801 | 99.00th=[33817], 99.50th=[34866], 99.90th=[45876], 99.95th=[45876], 00:38:13.801 | 99.99th=[45876] 00:38:13.801 write: IOPS=2372, BW=9491KiB/s (9719kB/s)(9624KiB/1014msec); 0 zone resets 00:38:13.801 slat (usec): min=4, max=18412, avg=282.89, stdev=1407.13 00:38:13.801 clat (msec): min=3, max=138, avg=38.17, stdev=30.89 00:38:13.801 lat (msec): min=3, max=138, avg=38.45, stdev=31.09 00:38:13.801 clat percentiles (msec): 00:38:13.801 | 1.00th=[ 5], 5.00th=[ 11], 10.00th=[ 12], 20.00th=[ 20], 00:38:13.801 | 30.00th=[ 23], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:38:13.801 | 70.00th=[ 41], 80.00th=[ 58], 90.00th=[ 95], 95.00th=[ 115], 00:38:13.801 | 99.00th=[ 128], 99.50th=[ 138], 99.90th=[ 138], 99.95th=[ 138], 00:38:13.801 | 99.99th=[ 138] 00:38:13.801 bw ( KiB/s): min= 8496, max= 9728, per=13.40%, avg=9112.00, stdev=871.16, samples=2 00:38:13.801 iops : min= 2124, max= 2432, avg=2278.00, stdev=217.79, samples=2 00:38:13.801 lat (msec) : 4=0.40%, 10=3.32%, 20=39.72%, 50=43.74%, 100=8.02% 00:38:13.801 lat (msec) : 250=4.80% 00:38:13.801 cpu : usr=1.68%, sys=3.26%, ctx=269, majf=0, minf=1 00:38:13.801 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:38:13.801 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:13.801 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:38:13.801 issued rwts: total=2048,2406,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:13.801 latency : target=0, window=0, percentile=100.00%, depth=128 00:38:13.801 job3: (groupid=0, jobs=1): err= 0: pid=1393206: Tue Oct 8 18:48:42 2024 00:38:13.801 read: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.1MiB/1011msec) 00:38:13.801 slat (usec): min=3, max=11473, avg=97.92, stdev=824.41 00:38:13.801 clat (usec): min=3979, max=23511, avg=12265.14, stdev=2850.55 00:38:13.801 lat (usec): min=3985, max=23527, avg=12363.06, stdev=2928.92 00:38:13.801 clat percentiles (usec): 00:38:13.801 | 1.00th=[ 8029], 5.00th=[ 9372], 10.00th=[ 
9896], 20.00th=[10421], 00:38:13.801 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11207], 60.00th=[11863], 00:38:13.801 | 70.00th=[12649], 80.00th=[13173], 90.00th=[16909], 95.00th=[18744], 00:38:13.801 | 99.00th=[21890], 99.50th=[22152], 99.90th=[23200], 99.95th=[23200], 00:38:13.801 | 99.99th=[23462] 00:38:13.801 write: IOPS=5570, BW=21.8MiB/s (22.8MB/s)(22.0MiB/1011msec); 0 zone resets 00:38:13.801 slat (usec): min=4, max=9981, avg=80.85, stdev=650.67 00:38:13.801 clat (usec): min=2950, max=22710, avg=11496.71, stdev=2793.37 00:38:13.801 lat (usec): min=2959, max=23190, avg=11577.55, stdev=2833.50 00:38:13.801 clat percentiles (usec): 00:38:13.801 | 1.00th=[ 4817], 5.00th=[ 7373], 10.00th=[ 7504], 20.00th=[ 8848], 00:38:13.801 | 30.00th=[11076], 40.00th=[11469], 50.00th=[11600], 60.00th=[11994], 00:38:13.801 | 70.00th=[12256], 80.00th=[12649], 90.00th=[15664], 95.00th=[16188], 00:38:13.801 | 99.00th=[18482], 99.50th=[20841], 99.90th=[22414], 99.95th=[22414], 00:38:13.801 | 99.99th=[22676] 00:38:13.801 bw ( KiB/s): min=21648, max=22680, per=32.59%, avg=22164.00, stdev=729.73, samples=2 00:38:13.801 iops : min= 5412, max= 5670, avg=5541.00, stdev=182.43, samples=2 00:38:13.801 lat (msec) : 4=0.29%, 10=17.27%, 20=80.95%, 50=1.49% 00:38:13.801 cpu : usr=4.36%, sys=7.72%, ctx=290, majf=0, minf=1 00:38:13.801 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:38:13.801 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:13.801 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:38:13.801 issued rwts: total=5156,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:13.801 latency : target=0, window=0, percentile=100.00%, depth=128 00:38:13.801 00:38:13.801 Run status group 0 (all jobs): 00:38:13.801 READ: bw=61.6MiB/s (64.6MB/s), 8079KiB/s-22.3MiB/s (8273kB/s-23.4MB/s), io=64.4MiB (67.5MB), run=1008-1045msec 00:38:13.801 WRITE: bw=66.4MiB/s (69.6MB/s), 9491KiB/s-23.0MiB/s (9719kB/s-24.1MB/s), io=69.4MiB (72.8MB), run=1008-1045msec 00:38:13.801 00:38:13.801 Disk stats (read/write): 00:38:13.801 nvme0n1: ios=5033/5120, merge=0/0, ticks=53024/50131, in_queue=103155, util=90.78% 00:38:13.801 nvme0n2: ios=3082/3072, merge=0/0, ticks=45122/60373, in_queue=105495, util=97.97% 00:38:13.801 nvme0n3: ios=1536/1815, merge=0/0, ticks=28277/72852, in_queue=101129, util=88.88% 00:38:13.801 nvme0n4: ios=4382/4608, merge=0/0, ticks=51574/50908, in_queue=102482, util=97.99% 00:38:13.801 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:38:13.801 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1393370 00:38:13.801 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:38:13.801 18:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:38:13.801 [global] 00:38:13.801 thread=1 00:38:13.801 invalidate=1 00:38:13.801 rw=read 00:38:13.801 time_based=1 00:38:13.801 runtime=10 00:38:13.801 ioengine=libaio 00:38:13.801 direct=1 00:38:13.801 bs=4096 00:38:13.801 iodepth=1 00:38:13.801 norandommap=1 00:38:13.801 numjobs=1 00:38:13.801 00:38:13.801 [job0] 00:38:13.801 filename=/dev/nvme0n1 00:38:13.801 [job1] 00:38:13.801 filename=/dev/nvme0n2 00:38:13.801 [job2] 00:38:13.801 filename=/dev/nvme0n3 00:38:13.801 [job3] 00:38:13.801 filename=/dev/nvme0n4 00:38:13.801 Could not 
set queue depth (nvme0n1) 00:38:13.801 Could not set queue depth (nvme0n2) 00:38:13.801 Could not set queue depth (nvme0n3) 00:38:13.801 Could not set queue depth (nvme0n4) 00:38:14.060 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:14.060 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:14.060 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:14.060 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:14.060 fio-3.35 00:38:14.060 Starting 4 threads 00:38:17.342 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:38:17.342 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:38:17.342 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=39493632, buflen=4096 00:38:17.342 fio: pid=1393549, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:38:17.342 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:38:17.342 18:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:38:17.601 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=6840320, buflen=4096 00:38:17.601 fio: pid=1393548, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:38:17.858 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=376832, buflen=4096 00:38:17.858 fio: pid=1393546, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:38:17.858 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:38:17.859 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:38:18.116 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=60641280, buflen=4096 00:38:18.116 fio: pid=1393547, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:38:18.375 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:38:18.375 18:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:38:18.375 00:38:18.375 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1393546: Tue Oct 8 18:48:46 2024 00:38:18.375 read: IOPS=25, BW=100KiB/s (103kB/s)(368KiB/3664msec) 00:38:18.375 slat (usec): min=10, max=12874, avg=154.35, stdev=1333.32 00:38:18.375 clat (usec): min=404, max=45971, avg=39402.94, stdev=8361.91 00:38:18.375 lat (usec): min=441, max=53938, avg=39558.76, stdev=8495.04 00:38:18.375 clat percentiles (usec): 00:38:18.375 | 
1.00th=[ 404], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:38:18.375 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:38:18.375 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:38:18.375 | 99.00th=[45876], 99.50th=[45876], 99.90th=[45876], 99.95th=[45876], 00:38:18.375 | 99.99th=[45876] 00:38:18.375 bw ( KiB/s): min= 96, max= 112, per=0.39%, avg=101.00, stdev= 6.66, samples=7 00:38:18.375 iops : min= 24, max= 28, avg=25.14, stdev= 1.57, samples=7 00:38:18.375 lat (usec) : 500=2.15%, 750=2.15% 00:38:18.375 lat (msec) : 50=94.62% 00:38:18.375 cpu : usr=0.00%, sys=0.08%, ctx=95, majf=0, minf=1 00:38:18.375 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:18.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:18.375 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:18.375 issued rwts: total=93,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:18.375 latency : target=0, window=0, percentile=100.00%, depth=1 00:38:18.375 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1393547: Tue Oct 8 18:48:46 2024 00:38:18.375 read: IOPS=3648, BW=14.3MiB/s (14.9MB/s)(57.8MiB/4058msec) 00:38:18.375 slat (usec): min=4, max=24494, avg=13.63, stdev=254.01 00:38:18.375 clat (usec): min=195, max=13350, avg=256.60, stdev=147.09 00:38:18.375 lat (usec): min=204, max=25132, avg=270.23, stdev=297.97 00:38:18.375 clat percentiles (usec): 00:38:18.375 | 1.00th=[ 206], 5.00th=[ 210], 10.00th=[ 215], 20.00th=[ 223], 00:38:18.375 | 30.00th=[ 233], 40.00th=[ 239], 50.00th=[ 245], 60.00th=[ 249], 00:38:18.375 | 70.00th=[ 255], 80.00th=[ 269], 90.00th=[ 281], 95.00th=[ 322], 00:38:18.375 | 99.00th=[ 537], 99.50th=[ 586], 99.90th=[ 635], 99.95th=[ 1020], 00:38:18.375 | 99.99th=[ 9896] 00:38:18.375 bw ( KiB/s): min=12464, max=15800, per=57.10%, avg=14751.57, stdev=1146.44, samples=7 00:38:18.375 iops : min= 3116, max= 3950, avg=3687.86, stdev=286.59, samples=7 00:38:18.375 lat (usec) : 250=61.82%, 500=36.38%, 750=1.74%, 1000=0.01% 00:38:18.375 lat (msec) : 2=0.04%, 10=0.01%, 20=0.01% 00:38:18.375 cpu : usr=1.40%, sys=3.87%, ctx=14812, majf=0, minf=2 00:38:18.375 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:18.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:18.375 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:18.375 issued rwts: total=14806,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:18.375 latency : target=0, window=0, percentile=100.00%, depth=1 00:38:18.375 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1393548: Tue Oct 8 18:48:46 2024 00:38:18.375 read: IOPS=508, BW=2034KiB/s (2083kB/s)(6680KiB/3284msec) 00:38:18.375 slat (nsec): min=7341, max=36203, avg=8870.56, stdev=2279.50 00:38:18.375 clat (usec): min=223, max=42036, avg=1937.35, stdev=7686.85 00:38:18.375 lat (usec): min=231, max=42051, avg=1946.21, stdev=7688.15 00:38:18.375 clat percentiles (usec): 00:38:18.375 | 1.00th=[ 302], 5.00th=[ 412], 10.00th=[ 416], 20.00th=[ 420], 00:38:18.375 | 30.00th=[ 424], 40.00th=[ 424], 50.00th=[ 429], 60.00th=[ 433], 00:38:18.375 | 70.00th=[ 437], 80.00th=[ 445], 90.00th=[ 457], 95.00th=[ 474], 00:38:18.375 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:38:18.375 | 99.99th=[42206] 00:38:18.375 bw ( KiB/s): min= 96, max= 9080, per=8.58%, avg=2217.33, 
stdev=3680.47, samples=6 00:38:18.375 iops : min= 24, max= 2270, avg=554.33, stdev=920.12, samples=6 00:38:18.375 lat (usec) : 250=0.12%, 500=95.81%, 750=0.30% 00:38:18.375 lat (msec) : 50=3.71% 00:38:18.375 cpu : usr=0.24%, sys=0.70%, ctx=1671, majf=0, minf=2 00:38:18.375 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:18.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:18.375 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:18.375 issued rwts: total=1671,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:18.375 latency : target=0, window=0, percentile=100.00%, depth=1 00:38:18.375 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1393549: Tue Oct 8 18:48:46 2024 00:38:18.375 read: IOPS=3310, BW=12.9MiB/s (13.6MB/s)(37.7MiB/2913msec) 00:38:18.375 slat (nsec): min=6885, max=43003, avg=8788.34, stdev=2217.82 00:38:18.375 clat (usec): min=226, max=10245, avg=288.62, stdev=109.07 00:38:18.375 lat (usec): min=234, max=10254, avg=297.41, stdev=109.09 00:38:18.375 clat percentiles (usec): 00:38:18.376 | 1.00th=[ 243], 5.00th=[ 249], 10.00th=[ 253], 20.00th=[ 262], 00:38:18.376 | 30.00th=[ 269], 40.00th=[ 277], 50.00th=[ 285], 60.00th=[ 289], 00:38:18.376 | 70.00th=[ 293], 80.00th=[ 302], 90.00th=[ 314], 95.00th=[ 351], 00:38:18.376 | 99.00th=[ 441], 99.50th=[ 457], 99.90th=[ 510], 99.95th=[ 545], 00:38:18.376 | 99.99th=[10290] 00:38:18.376 bw ( KiB/s): min=12088, max=14296, per=51.17%, avg=13220.80, stdev=809.40, samples=5 00:38:18.376 iops : min= 3022, max= 3574, avg=3305.20, stdev=202.35, samples=5 00:38:18.376 lat (usec) : 250=7.49%, 500=92.37%, 750=0.10% 00:38:18.376 lat (msec) : 2=0.02%, 20=0.01% 00:38:18.376 cpu : usr=0.96%, sys=5.25%, ctx=9643, majf=0, minf=2 00:38:18.376 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:18.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:18.376 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:18.376 issued rwts: total=9643,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:18.376 latency : target=0, window=0, percentile=100.00%, depth=1 00:38:18.376 00:38:18.376 Run status group 0 (all jobs): 00:38:18.376 READ: bw=25.2MiB/s (26.5MB/s), 100KiB/s-14.3MiB/s (103kB/s-14.9MB/s), io=102MiB (107MB), run=2913-4058msec 00:38:18.376 00:38:18.376 Disk stats (read/write): 00:38:18.376 nvme0n1: ios=91/0, merge=0/0, ticks=3586/0, in_queue=3586, util=95.78% 00:38:18.376 nvme0n2: ios=14066/0, merge=0/0, ticks=3604/0, in_queue=3604, util=95.13% 00:38:18.376 nvme0n3: ios=1666/0, merge=0/0, ticks=3065/0, in_queue=3065, util=96.72% 00:38:18.376 nvme0n4: ios=9497/0, merge=0/0, ticks=2736/0, in_queue=2736, util=96.71% 00:38:18.941 18:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:38:18.942 18:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:38:19.200 18:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:38:19.200 18:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 
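Condensing the hotplug sequence traced above and continued below: while the 10-second read job is still running, the test deletes the backing bdevs over the RPC socket, which is what produces the "Operation not supported" io_u errors, and it then checks that fio exited non-zero. A simplified sketch of that flow (paths shortened relative to the full ones in the trace; the script's own variable names may differ):

    # run from the spdk checkout; fio was started earlier in the background
    ./scripts/rpc.py bdev_raid_delete concat0
    ./scripts/rpc.py bdev_raid_delete raid0
    for malloc_bdev in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
        ./scripts/rpc.py bdev_malloc_delete "$malloc_bdev"
    done
    fio_status=0
    wait "$fio_pid" || fio_status=$?        # fio_pid is 1393370 in this run
    if [ "$fio_status" -ne 0 ]; then
        echo 'nvmf hotplug test: fio failed as expected'
    fi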
00:38:19.767 18:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:38:19.767 18:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:38:20.337 18:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:38:20.337 18:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:38:21.272 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:38:21.272 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 1393370 00:38:21.272 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:38:21.272 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:38:21.272 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:38:21.273 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:38:21.273 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:38:21.273 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:38:21.273 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:38:21.273 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:38:21.273 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:38:21.273 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:38:21.273 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:38:21.273 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:38:21.273 nvmf hotplug test: fio failed as expected 00:38:21.273 18:48:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:21.841 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:38:21.841 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:38:21.841 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:38:21.841 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:38:21.841 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:38:21.841 18:48:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:38:21.841 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:38:21.841 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:21.841 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:38:21.841 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:21.841 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:21.841 rmmod nvme_tcp 00:38:21.841 rmmod nvme_fabrics 00:38:22.100 rmmod nvme_keyring 00:38:22.100 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:22.100 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:38:22.100 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:38:22.100 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 1391047 ']' 00:38:22.100 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 1391047 00:38:22.100 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 1391047 ']' 00:38:22.100 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 1391047 00:38:22.100 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:38:22.100 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:22.100 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1391047 00:38:22.100 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:22.100 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:22.100 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1391047' 00:38:22.100 killing process with pid 1391047 00:38:22.100 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 1391047 00:38:22.100 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 1391047 00:38:22.358 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:38:22.358 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:38:22.358 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:38:22.358 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:38:22.358 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-save 00:38:22.358 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 
00:38:22.358 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore 00:38:22.358 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:22.358 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:22.358 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:22.358 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:22.358 18:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:24.890 18:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:24.890 00:38:24.890 real 0m31.617s 00:38:24.890 user 1m26.219s 00:38:24.890 sys 0m13.158s 00:38:24.890 18:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:24.890 18:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:38:24.890 ************************************ 00:38:24.890 END TEST nvmf_fio_target 00:38:24.890 ************************************ 00:38:24.890 18:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:38:24.890 18:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:38:24.890 18:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:24.890 18:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:24.890 ************************************ 00:38:24.890 START TEST nvmf_bdevio 00:38:24.890 ************************************ 00:38:24.890 18:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:38:24.890 * Looking for test storage... 
00:38:24.890 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:24.890 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:38:24.890 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:38:24.890 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:38:24.890 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:38:24.890 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:24.890 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:24.890 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:24.890 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:38:24.890 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:38:24.890 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:38:24.890 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:38:24.890 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:38:24.890 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:38:24.890 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:38:24.890 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:24.890 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:38:24.890 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:38:24.890 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:24.890 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:24.890 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:38:24.890 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:38:24.890 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:24.890 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:38:24.890 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:38:24.890 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:38:24.890 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:38:24.890 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:24.890 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:38:24.890 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:38:24.890 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:24.890 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:24.890 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:38:24.890 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:24.890 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:38:24.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:24.890 --rc genhtml_branch_coverage=1 00:38:24.890 --rc genhtml_function_coverage=1 00:38:24.890 --rc genhtml_legend=1 00:38:24.890 --rc geninfo_all_blocks=1 00:38:24.890 --rc geninfo_unexecuted_blocks=1 00:38:24.890 00:38:24.890 ' 00:38:24.890 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:38:24.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:24.890 --rc genhtml_branch_coverage=1 00:38:24.890 --rc genhtml_function_coverage=1 00:38:24.890 --rc genhtml_legend=1 00:38:24.890 --rc geninfo_all_blocks=1 00:38:24.890 --rc geninfo_unexecuted_blocks=1 00:38:24.890 00:38:24.890 ' 00:38:24.890 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:38:24.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:24.891 --rc genhtml_branch_coverage=1 00:38:24.891 --rc genhtml_function_coverage=1 00:38:24.891 --rc genhtml_legend=1 00:38:24.891 --rc geninfo_all_blocks=1 00:38:24.891 --rc geninfo_unexecuted_blocks=1 00:38:24.891 00:38:24.891 ' 00:38:24.891 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:38:24.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:24.891 --rc genhtml_branch_coverage=1 00:38:24.891 --rc genhtml_function_coverage=1 00:38:24.891 --rc genhtml_legend=1 00:38:24.891 --rc geninfo_all_blocks=1 00:38:24.891 --rc geninfo_unexecuted_blocks=1 00:38:24.891 00:38:24.891 ' 00:38:24.891 18:48:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:24.891 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:38:24.891 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:24.891 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:24.891 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:24.891 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:24.891 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:24.891 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:24.891 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:24.891 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:24.891 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:24.891 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:24.891 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:38:24.891 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:38:24.891 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:24.891 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:24.891 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:24.891 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:24.891 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:24.891 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:38:24.891 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:24.891 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:24.891 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:24.891 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:24.891 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:24.891 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:24.891 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:38:24.891 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:24.891 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:38:24.891 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:24.891 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:24.891 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:24.891 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:24.891 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:24.891 18:48:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:24.891 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:24.891 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:24.891 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:24.891 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:24.891 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:24.891 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:24.891 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:38:24.891 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:38:24.891 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:24.891 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:38:24.891 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:38:24.891 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:38:24.891 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:24.891 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:24.891 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:24.891 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:38:24.891 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:38:24.891 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:38:24.891 18:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:38:27.422 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:27.422 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:38:27.422 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:27.422 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:27.422 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:27.422 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:27.422 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:27.422 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:38:27.422 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:38:27.422 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:38:27.422 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:38:27.422 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:38:27.422 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:38:27.422 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:38:27.422 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:38:27.422 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:27.422 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:27.422 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:27.422 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:27.422 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:27.422 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:27.422 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:27.422 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:27.422 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:27.422 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:27.422 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:27.422 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:27.422 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:27.422 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:27.422 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:27.423 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:27.423 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:27.423 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:27.423 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:27.423 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:38:27.423 Found 0000:84:00.0 (0x8086 - 0x159b) 00:38:27.423 18:48:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:27.423 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:27.423 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:27.423 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:27.423 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:27.423 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:27.423 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:38:27.423 Found 0000:84:00.1 (0x8086 - 0x159b) 00:38:27.423 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:27.423 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:27.423 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:27.423 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:27.423 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:27.423 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:27.423 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:27.423 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:27.423 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:27.423 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:27.423 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:27.423 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:27.423 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:27.423 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:27.423 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:27.423 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:38:27.423 Found net devices under 0000:84:00.0: cvl_0_0 00:38:27.423 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:27.423 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:27.423 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:27.423 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 
-- # [[ tcp == tcp ]] 00:38:27.423 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:27.423 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:27.423 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:27.423 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:27.423 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:38:27.423 Found net devices under 0000:84:00.1: cvl_0_1 00:38:27.423 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:27.423 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:38:27.423 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes 00:38:27.423 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:38:27.423 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:38:27.423 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:38:27.423 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:27.423 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:27.423 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:27.423 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:27.423 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:27.423 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:27.423 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:27.423 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:27.423 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:27.423 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:27.423 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:27.423 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:27.423 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:27.423 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:27.423 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:27.423 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:27.423 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:27.423 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:27.423 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:27.682 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:27.682 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:27.682 18:48:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:27.682 18:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:27.682 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:27.682 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.286 ms 00:38:27.682 00:38:27.682 --- 10.0.0.2 ping statistics --- 00:38:27.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:27.682 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:38:27.682 18:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:27.682 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:27.682 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:38:27.682 00:38:27.682 --- 10.0.0.1 ping statistics --- 00:38:27.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:27.682 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:38:27.682 18:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:27.682 18:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0 00:38:27.682 18:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:38:27.682 18:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:27.682 18:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:38:27.682 18:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:38:27.682 18:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:27.682 18:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:38:27.682 18:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:38:27.682 18:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:38:27.682 18:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:38:27.682 18:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:27.682 18:48:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:38:27.682 18:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=1396461 00:38:27.682 18:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 1396461 00:38:27.682 18:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 1396461 ']' 00:38:27.682 18:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:27.682 18:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:27.682 18:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:38:27.682 18:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:27.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:27.682 18:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:27.682 18:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:38:27.682 [2024-10-08 18:48:56.117975] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:27.682 [2024-10-08 18:48:56.119246] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:38:27.682 [2024-10-08 18:48:56.119316] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:27.682 [2024-10-08 18:48:56.196272] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:27.941 [2024-10-08 18:48:56.323757] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:27.941 [2024-10-08 18:48:56.323828] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:27.941 [2024-10-08 18:48:56.323845] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:27.941 [2024-10-08 18:48:56.323858] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:27.941 [2024-10-08 18:48:56.323870] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:27.941 [2024-10-08 18:48:56.325872] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:38:27.941 [2024-10-08 18:48:56.325927] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:38:27.941 [2024-10-08 18:48:56.325981] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:38:27.941 [2024-10-08 18:48:56.325985] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:38:27.941 [2024-10-08 18:48:56.444612] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
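At this point the data-path ports have been moved into the cvl_0_0_ns_spdk namespace, given the 10.0.0.1/10.0.0.2 addresses and verified with ping, and nvmf_tgt has been started inside that namespace in interrupt mode on core mask 0x78 (reactors on cores 3-6). The RPC calls traced next (nvmf_create_transport, bdev_malloc_create, nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener) provision it over /var/tmp/spdk.sock. A rough standalone equivalent of that launch-and-provision sequence, assuming the SPDK repository root as the working directory and rpc.py in place of the test's rpc_cmd wrapper:

# start the target in the test namespace, interrupt mode, cores 3-6 (mask 0x78)
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &
# the harness waits for /var/tmp/spdk.sock with waitforlisten before issuing RPCs
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420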
00:38:27.941 [2024-10-08 18:48:56.444866] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:27.941 [2024-10-08 18:48:56.445159] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:38:27.941 [2024-10-08 18:48:56.445823] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:27.941 [2024-10-08 18:48:56.446091] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:38:27.941 18:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:27.941 18:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:38:27.941 18:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:38:28.200 18:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:28.200 18:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:38:28.200 18:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:28.200 18:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:28.200 18:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:28.200 18:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:38:28.200 [2024-10-08 18:48:56.518770] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:28.200 18:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:28.200 18:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:28.200 18:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:28.200 18:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:38:28.200 Malloc0 00:38:28.200 18:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:28.200 18:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:28.200 18:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:28.200 18:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:38:28.200 18:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:28.200 18:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:28.200 18:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:28.200 18:48:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:38:28.200 18:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:28.200 18:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:28.200 18:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:28.200 18:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:38:28.200 [2024-10-08 18:48:56.582936] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:28.200 18:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:28.200 18:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:38:28.200 18:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:38:28.200 18:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:38:28.200 18:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:38:28.200 18:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:38:28.200 18:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:38:28.200 { 00:38:28.200 "params": { 00:38:28.200 "name": "Nvme$subsystem", 00:38:28.200 "trtype": "$TEST_TRANSPORT", 00:38:28.200 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:28.200 "adrfam": "ipv4", 00:38:28.200 "trsvcid": "$NVMF_PORT", 00:38:28.200 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:28.200 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:28.200 "hdgst": ${hdgst:-false}, 00:38:28.200 "ddgst": ${ddgst:-false} 00:38:28.200 }, 00:38:28.200 "method": "bdev_nvme_attach_controller" 00:38:28.200 } 00:38:28.200 EOF 00:38:28.200 )") 00:38:28.200 18:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:38:28.200 18:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 00:38:28.200 18:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:38:28.200 18:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:38:28.200 "params": { 00:38:28.200 "name": "Nvme1", 00:38:28.200 "trtype": "tcp", 00:38:28.200 "traddr": "10.0.0.2", 00:38:28.200 "adrfam": "ipv4", 00:38:28.200 "trsvcid": "4420", 00:38:28.200 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:28.200 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:28.200 "hdgst": false, 00:38:28.200 "ddgst": false 00:38:28.200 }, 00:38:28.200 "method": "bdev_nvme_attach_controller" 00:38:28.200 }' 00:38:28.200 [2024-10-08 18:48:56.653934] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
00:38:28.200 [2024-10-08 18:48:56.654105] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1396559 ] 00:38:28.200 [2024-10-08 18:48:56.734061] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:28.458 [2024-10-08 18:48:56.860767] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:38:28.458 [2024-10-08 18:48:56.860822] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:38:28.458 [2024-10-08 18:48:56.860826] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:38:28.716 I/O targets: 00:38:28.716 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:38:28.716 00:38:28.716 00:38:28.716 CUnit - A unit testing framework for C - Version 2.1-3 00:38:28.716 http://cunit.sourceforge.net/ 00:38:28.716 00:38:28.716 00:38:28.716 Suite: bdevio tests on: Nvme1n1 00:38:28.716 Test: blockdev write read block ...passed 00:38:28.716 Test: blockdev write zeroes read block ...passed 00:38:28.716 Test: blockdev write zeroes read no split ...passed 00:38:28.716 Test: blockdev write zeroes read split ...passed 00:38:28.716 Test: blockdev write zeroes read split partial ...passed 00:38:28.716 Test: blockdev reset ...[2024-10-08 18:48:57.156485] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:28.716 [2024-10-08 18:48:57.156594] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedcf40 (9): Bad file descriptor 00:38:28.974 [2024-10-08 18:48:57.289989] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:38:28.974 passed 00:38:28.974 Test: blockdev write read 8 blocks ...passed 00:38:28.974 Test: blockdev write read size > 128k ...passed 00:38:28.974 Test: blockdev write read invalid size ...passed 00:38:28.974 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:38:28.974 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:38:28.974 Test: blockdev write read max offset ...passed 00:38:28.974 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:38:28.974 Test: blockdev writev readv 8 blocks ...passed 00:38:28.974 Test: blockdev writev readv 30 x 1block ...passed 00:38:28.974 Test: blockdev writev readv block ...passed 00:38:28.974 Test: blockdev writev readv size > 128k ...passed 00:38:28.974 Test: blockdev writev readv size > 128k in two iovs ...passed 00:38:28.974 Test: blockdev comparev and writev ...[2024-10-08 18:48:57.462862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:38:28.974 [2024-10-08 18:48:57.462899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:28.974 [2024-10-08 18:48:57.462924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:38:28.974 [2024-10-08 18:48:57.462943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:28.974 [2024-10-08 18:48:57.463458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:38:28.974 [2024-10-08 18:48:57.463494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:38:28.974 [2024-10-08 18:48:57.463537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:38:28.974 [2024-10-08 18:48:57.463558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:38:28.974 [2024-10-08 18:48:57.464020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:38:28.974 [2024-10-08 18:48:57.464046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:38:28.974 [2024-10-08 18:48:57.464070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:38:28.974 [2024-10-08 18:48:57.464086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:38:28.974 [2024-10-08 18:48:57.464495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:38:28.974 [2024-10-08 18:48:57.464519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:38:28.974 [2024-10-08 18:48:57.464541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:38:28.974 [2024-10-08 18:48:57.464557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:38:28.974 passed 00:38:29.233 Test: blockdev nvme passthru rw ...passed 00:38:29.233 Test: blockdev nvme passthru vendor specific ...[2024-10-08 18:48:57.547082] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:38:29.233 [2024-10-08 18:48:57.547109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:38:29.233 [2024-10-08 18:48:57.547261] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:38:29.233 [2024-10-08 18:48:57.547285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:38:29.233 [2024-10-08 18:48:57.547428] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:38:29.233 [2024-10-08 18:48:57.547451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:38:29.233 [2024-10-08 18:48:57.547603] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:38:29.233 [2024-10-08 18:48:57.547626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:38:29.233 passed 00:38:29.233 Test: blockdev nvme admin passthru ...passed 00:38:29.233 Test: blockdev copy ...passed 00:38:29.233 00:38:29.233 Run Summary: Type Total Ran Passed Failed Inactive 00:38:29.233 suites 1 1 n/a 0 0 00:38:29.233 tests 23 23 23 0 0 00:38:29.233 asserts 152 152 152 0 n/a 00:38:29.233 00:38:29.233 Elapsed time = 1.120 seconds 00:38:29.491 18:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:29.491 18:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:29.491 18:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:38:29.491 18:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:29.491 18:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:38:29.491 18:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:38:29.491 18:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:38:29.491 18:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:38:29.491 18:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:29.491 18:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:38:29.491 18:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:29.491 18:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:29.491 rmmod nvme_tcp 00:38:29.491 rmmod nvme_fabrics 00:38:29.491 rmmod nvme_keyring 00:38:29.491 18:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
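The teardown continuing here mirrors the one run after nvmf_fio_target. For reference, the run summary above closes the bdevio stage: all 23 cases passed in about 1.1 seconds, the COMPARE FAILURE and ABORTED - FAILED FUSED completions being logged while the fused compare-and-write cases ran (those cases themselves passed). The block device under test came from a JSON config rather than from RPCs: gen_nvmf_target_json printed a single bdev_nvme_attach_controller entry earlier and the script handed it to bdevio through process substitution (--json /dev/fd/62), which produced the Nvme1n1 bdev the suite ran against. A rough standalone equivalent, assuming the standard SPDK JSON-config wrapper around that entry and a purely illustrative /tmp path in place of the /dev/fd stream:

# hypothetical temp file; the test itself streams the config via /dev/fd/62
cat > /tmp/nvme1.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
./test/bdev/bdevio/bdevio --json /tmp/nvme1.json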
00:38:29.491 18:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:38:29.491 18:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:38:29.491 18:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@515 -- # '[' -n 1396461 ']' 00:38:29.491 18:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 1396461 00:38:29.491 18:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 1396461 ']' 00:38:29.491 18:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 1396461 00:38:29.491 18:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:38:29.491 18:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:29.491 18:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1396461 00:38:29.491 18:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:38:29.492 18:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:38:29.492 18:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1396461' 00:38:29.492 killing process with pid 1396461 00:38:29.492 18:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 1396461 00:38:29.492 18:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 1396461 00:38:29.750 18:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:38:29.750 18:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:38:29.750 18:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:38:29.750 18:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:38:29.750 18:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:38:29.750 18:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:38:29.750 18:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:38:29.750 18:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:29.750 18:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:29.750 18:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:29.750 18:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:29.750 18:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:32.397 18:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:32.397 00:38:32.397 real 0m7.395s 00:38:32.397 user 
0m9.074s 00:38:32.397 sys 0m3.259s 00:38:32.397 18:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:32.397 18:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:38:32.397 ************************************ 00:38:32.397 END TEST nvmf_bdevio 00:38:32.397 ************************************ 00:38:32.397 18:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:38:32.397 00:38:32.397 real 4m52.755s 00:38:32.397 user 10m15.925s 00:38:32.397 sys 1m45.411s 00:38:32.397 18:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:32.397 18:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:32.397 ************************************ 00:38:32.397 END TEST nvmf_target_core_interrupt_mode 00:38:32.397 ************************************ 00:38:32.397 18:49:00 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:38:32.397 18:49:00 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:38:32.397 18:49:00 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:32.397 18:49:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:32.397 ************************************ 00:38:32.397 START TEST nvmf_interrupt 00:38:32.397 ************************************ 00:38:32.397 18:49:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:38:32.397 * Looking for test storage... 
00:38:32.397 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:32.397 18:49:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:38:32.397 18:49:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # lcov --version 00:38:32.397 18:49:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:38:32.397 18:49:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:38:32.397 18:49:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:32.397 18:49:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:32.397 18:49:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:32.397 18:49:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:38:32.397 18:49:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:38:32.397 18:49:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:38:32.397 18:49:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:38:32.397 18:49:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:38:32.397 18:49:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:38:32.397 18:49:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:38:32.397 18:49:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:32.397 18:49:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:38:32.397 18:49:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:38:32.397 18:49:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:32.397 18:49:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:32.397 18:49:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:38:32.397 18:49:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:38:32.397 18:49:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:32.397 18:49:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:38:32.397 18:49:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:38:32.397 18:49:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:38:32.397 18:49:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:38:32.397 18:49:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:32.397 18:49:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:38:32.397 18:49:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:38:32.397 18:49:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:32.397 18:49:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:32.397 18:49:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:38:32.397 18:49:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:32.397 18:49:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:38:32.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:32.397 --rc genhtml_branch_coverage=1 00:38:32.397 --rc genhtml_function_coverage=1 00:38:32.397 --rc genhtml_legend=1 00:38:32.397 --rc geninfo_all_blocks=1 00:38:32.397 --rc geninfo_unexecuted_blocks=1 00:38:32.397 00:38:32.397 ' 00:38:32.397 18:49:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:38:32.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:32.397 --rc genhtml_branch_coverage=1 00:38:32.397 --rc genhtml_function_coverage=1 00:38:32.397 --rc genhtml_legend=1 00:38:32.397 --rc geninfo_all_blocks=1 00:38:32.397 --rc geninfo_unexecuted_blocks=1 00:38:32.397 00:38:32.397 ' 00:38:32.397 18:49:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:38:32.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:32.397 --rc genhtml_branch_coverage=1 00:38:32.397 --rc genhtml_function_coverage=1 00:38:32.397 --rc genhtml_legend=1 00:38:32.397 --rc geninfo_all_blocks=1 00:38:32.397 --rc geninfo_unexecuted_blocks=1 00:38:32.397 00:38:32.397 ' 00:38:32.397 18:49:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:38:32.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:32.397 --rc genhtml_branch_coverage=1 00:38:32.397 --rc genhtml_function_coverage=1 00:38:32.397 --rc genhtml_legend=1 00:38:32.397 --rc geninfo_all_blocks=1 00:38:32.397 --rc geninfo_unexecuted_blocks=1 00:38:32.397 00:38:32.397 ' 00:38:32.397 18:49:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:32.397 18:49:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:38:32.397 18:49:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:32.397 18:49:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:32.397 18:49:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:32.397 18:49:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:38:32.397 18:49:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:32.397 18:49:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:32.397 18:49:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:32.397 18:49:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:32.397 18:49:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:32.397 18:49:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:32.397 18:49:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:38:32.397 18:49:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:38:32.397 18:49:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:32.397 18:49:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:32.397 18:49:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:32.397 18:49:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:32.397 18:49:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:32.397 18:49:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:38:32.397 18:49:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:32.397 18:49:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:32.398 18:49:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:32.398 18:49:00 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:32.398 18:49:00 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:32.398 18:49:00 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:32.398 18:49:00 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:38:32.398 18:49:00 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:32.398 18:49:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:38:32.398 18:49:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:32.398 18:49:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:32.398 18:49:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:32.398 18:49:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:32.398 18:49:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:32.398 18:49:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:32.398 18:49:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:32.398 18:49:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:32.398 18:49:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:32.398 18:49:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:32.398 18:49:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:38:32.398 18:49:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:38:32.398 18:49:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:38:32.398 18:49:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:38:32.398 18:49:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:32.398 18:49:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # prepare_net_devs 00:38:32.398 18:49:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@436 -- # local -g is_hw=no 00:38:32.398 18:49:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # remove_spdk_ns 00:38:32.398 18:49:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:32.398 18:49:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:32.398 18:49:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:32.398 18:49:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:38:32.398 18:49:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:38:32.398 18:49:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:38:32.398 18:49:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:38:35.689 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:35.689 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:38:35.689 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:35.689 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:35.689 18:49:03 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:35.689 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:35.689 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:35.689 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:38:35.689 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:35.689 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:38:35.689 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:38:35.689 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:38:35.689 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:38:35.689 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:38:35.689 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:38:35.689 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:35.689 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:35.689 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:35.689 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:35.689 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:35.689 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:35.689 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:35.689 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:35.689 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:35.689 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:35.689 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:35.689 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:35.689 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:35.689 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:35.689 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:35.689 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:35.689 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:35.689 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:35.689 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:35.689 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:38:35.689 Found 0000:84:00.0 (0x8086 - 0x159b) 00:38:35.689 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:35.689 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:35.689 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:35.689 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:35.689 18:49:03 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:35.689 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:35.689 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:38:35.690 Found 0000:84:00.1 (0x8086 - 0x159b) 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:38:35.690 Found net devices under 0000:84:00.0: cvl_0_0 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:38:35.690 Found net devices under 0000:84:00.1: cvl_0_1 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # is_hw=yes 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:38:35.690 18:49:03 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:35.690 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:35.690 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms 00:38:35.690 00:38:35.690 --- 10.0.0.2 ping statistics --- 00:38:35.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:35.690 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:35.690 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:35.690 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:38:35.690 00:38:35.690 --- 10.0.0.1 ping statistics --- 00:38:35.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:35.690 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@448 -- # return 0 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # nvmfpid=1398732 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # waitforlisten 1398732 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@831 -- # '[' -z 1398732 ']' 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:35.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:35.690 18:49:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:38:35.690 [2024-10-08 18:49:03.761049] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:35.690 [2024-10-08 18:49:03.762542] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:38:35.690 [2024-10-08 18:49:03.762621] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:35.690 [2024-10-08 18:49:03.888090] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:38:35.690 [2024-10-08 18:49:04.108084] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:38:35.690 [2024-10-08 18:49:04.108196] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:35.690 [2024-10-08 18:49:04.108247] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:35.690 [2024-10-08 18:49:04.108280] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:35.690 [2024-10-08 18:49:04.108306] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:35.690 [2024-10-08 18:49:04.112703] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:38:35.690 [2024-10-08 18:49:04.112737] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:38:35.949 [2024-10-08 18:49:04.292022] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:35.949 [2024-10-08 18:49:04.292151] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:35.949 [2024-10-08 18:49:04.292674] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:35.949 18:49:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:35.949 18:49:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # return 0 00:38:35.949 18:49:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:38:35.949 18:49:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:35.949 18:49:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:38:35.949 18:49:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:35.949 18:49:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:38:35.949 18:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:38:35.949 18:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:38:35.949 18:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:38:35.949 5000+0 records in 00:38:35.949 5000+0 records out 00:38:35.949 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0232642 s, 440 MB/s 00:38:35.949 18:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:38:35.949 18:49:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:35.949 18:49:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:38:36.207 AIO0 00:38:36.207 18:49:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:36.207 18:49:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:38:36.207 18:49:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:36.207 18:49:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:38:36.207 [2024-10-08 18:49:04.513985] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:36.207 18:49:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:36.207 18:49:04 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:38:36.207 18:49:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:36.207 18:49:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:38:36.207 18:49:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:36.207 18:49:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:38:36.207 18:49:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:36.207 18:49:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:38:36.207 18:49:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:36.207 18:49:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:36.207 18:49:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:36.207 18:49:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:38:36.207 [2024-10-08 18:49:04.550381] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:36.207 18:49:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:36.207 18:49:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:38:36.207 18:49:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1398732 0 00:38:36.207 18:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1398732 0 idle 00:38:36.207 18:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1398732 00:38:36.207 18:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:38:36.207 18:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:38:36.207 18:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:38:36.207 18:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:38:36.207 18:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:38:36.207 18:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:38:36.207 18:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:38:36.207 18:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:38:36.207 18:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:38:36.207 18:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1398732 -w 256 00:38:36.207 18:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:38:36.207 18:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1398732 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.59 reactor_0' 00:38:36.207 18:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1398732 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.59 reactor_0 00:38:36.207 18:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:38:36.207 18:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:38:36.207 18:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:38:36.207 18:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:38:36.207 18:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:38:36.207 18:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:38:36.207 18:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:38:36.207 18:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:38:36.207 18:49:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:38:36.207 18:49:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1398732 1 00:38:36.207 18:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1398732 1 idle 00:38:36.207 18:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1398732 00:38:36.208 18:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:38:36.208 18:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:38:36.208 18:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:38:36.208 18:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:38:36.208 18:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:38:36.208 18:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:38:36.208 18:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:38:36.208 18:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:38:36.208 18:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:38:36.208 18:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1398732 -w 256 00:38:36.208 18:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:38:36.466 18:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1398826 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.00 reactor_1' 00:38:36.466 18:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1398826 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.00 reactor_1 00:38:36.466 18:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:38:36.466 18:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:38:36.466 18:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:38:36.466 18:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:38:36.466 18:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:38:36.466 18:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:38:36.466 18:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:38:36.466 18:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:38:36.466 18:49:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:38:36.466 18:49:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=1398929 00:38:36.466 18:49:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:38:36.466 18:49:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 
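# The entries just above launch the workload for the busy-phase checks. A condensed
# sketch of that step, reconstructed only from the command line logged in this trace;
# the explicit '&' backgrounding and the wait are assumptions inferred from the
# perf_pid assignment and the later 'wait 1398929' entry:
perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
"$perf" -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' &
perf_pid=$!    # logged as 1398929 in this run
# While the 10-second run (-t 10) is active, reactor_0 and reactor_1 must report busy;
# the results table further below is printed once the script waits on "$perf_pid".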
00:38:36.466 18:49:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:38:36.466 18:49:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1398732 0 00:38:36.466 18:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1398732 0 busy 00:38:36.466 18:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1398732 00:38:36.466 18:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:38:36.466 18:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:38:36.466 18:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:38:36.466 18:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:38:36.466 18:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:38:36.466 18:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:38:36.466 18:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:38:36.466 18:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:38:36.466 18:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1398732 -w 256 00:38:36.466 18:49:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:38:36.724 18:49:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1398732 root 20 0 128.2g 48384 34944 R 62.5 0.1 0:00.69 reactor_0' 00:38:36.724 18:49:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1398732 root 20 0 128.2g 48384 34944 R 62.5 0.1 0:00.69 reactor_0 00:38:36.724 18:49:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:38:36.724 18:49:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:38:36.724 18:49:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=62.5 00:38:36.724 18:49:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=62 00:38:36.724 18:49:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:38:36.724 18:49:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:38:36.724 18:49:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:38:36.724 18:49:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:38:36.724 18:49:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:38:36.724 18:49:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:38:36.724 18:49:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1398732 1 00:38:36.724 18:49:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1398732 1 busy 00:38:36.724 18:49:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1398732 00:38:36.724 18:49:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:38:36.724 18:49:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:38:36.724 18:49:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:38:36.724 18:49:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:38:36.724 18:49:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:38:36.724 18:49:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:38:36.724 18:49:05 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:38:36.724 18:49:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:38:36.724 18:49:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1398732 -w 256 00:38:36.724 18:49:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:38:36.724 18:49:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1398826 root 20 0 128.2g 48384 34944 R 99.9 0.1 0:00.21 reactor_1' 00:38:36.724 18:49:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1398826 root 20 0 128.2g 48384 34944 R 99.9 0.1 0:00.21 reactor_1 00:38:36.724 18:49:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:38:36.724 18:49:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:38:36.724 18:49:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:38:36.724 18:49:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:38:36.724 18:49:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:38:36.724 18:49:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:38:36.724 18:49:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:38:36.724 18:49:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:38:36.724 18:49:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 1398929 00:38:46.690 Initializing NVMe Controllers 00:38:46.690 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:46.690 Controller IO queue size 256, less than required. 00:38:46.690 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:46.690 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:38:46.690 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:38:46.690 Initialization complete. Launching workers. 
00:38:46.690 ======================================================== 00:38:46.690 Latency(us) 00:38:46.690 Device Information : IOPS MiB/s Average min max 00:38:46.690 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 14073.14 54.97 18202.70 5029.12 60057.03 00:38:46.690 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 13961.84 54.54 18349.80 4995.46 61025.10 00:38:46.690 ======================================================== 00:38:46.690 Total : 28034.97 109.51 18275.96 4995.46 61025.10 00:38:46.690 00:38:46.690 18:49:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:38:46.690 18:49:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1398732 0 00:38:46.690 18:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1398732 0 idle 00:38:46.690 18:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1398732 00:38:46.690 18:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:38:46.690 18:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:38:46.690 18:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:38:46.690 18:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:38:46.690 18:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:38:46.690 18:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:38:46.690 18:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:38:46.690 18:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:38:46.690 18:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:38:46.690 18:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1398732 -w 256 00:38:46.690 18:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:38:46.948 18:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1398732 root 20 0 128.2g 48384 34944 S 6.7 0.1 0:20.56 reactor_0' 00:38:46.948 18:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1398732 root 20 0 128.2g 48384 34944 S 6.7 0.1 0:20.56 reactor_0 00:38:46.948 18:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:38:46.948 18:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:38:46.948 18:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:38:46.948 18:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:38:46.948 18:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:38:46.948 18:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:38:46.948 18:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:38:46.948 18:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:38:46.948 18:49:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:38:46.948 18:49:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1398732 1 00:38:46.948 18:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1398732 1 idle 00:38:46.948 18:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1398732 00:38:46.948 18:49:15 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:38:46.948 18:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:38:46.948 18:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:38:46.948 18:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:38:46.948 18:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:38:46.948 18:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:38:46.948 18:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:38:46.948 18:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:38:46.948 18:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:38:46.948 18:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1398732 -w 256 00:38:46.948 18:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:38:46.948 18:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1398826 root 20 0 128.2g 48384 34944 S 0.0 0.1 0:09.98 reactor_1' 00:38:46.948 18:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1398826 root 20 0 128.2g 48384 34944 S 0.0 0.1 0:09.98 reactor_1 00:38:47.208 18:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:38:47.208 18:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:38:47.208 18:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:38:47.208 18:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:38:47.208 18:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:38:47.208 18:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:38:47.208 18:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:38:47.208 18:49:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:38:47.208 18:49:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:38:47.467 18:49:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:38:47.467 18:49:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1198 -- # local i=0 00:38:47.467 18:49:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:38:47.467 18:49:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:38:47.467 18:49:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1205 -- # sleep 2 00:38:49.376 18:49:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:38:49.376 18:49:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:38:49.376 18:49:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:38:49.376 18:49:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:38:49.376 18:49:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:38:49.376 18:49:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # return 0 00:38:49.376 18:49:17 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:38:49.376 18:49:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1398732 0 00:38:49.376 18:49:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1398732 0 idle 00:38:49.376 18:49:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1398732 00:38:49.376 18:49:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:38:49.376 18:49:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:38:49.376 18:49:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:38:49.376 18:49:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:38:49.376 18:49:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:38:49.376 18:49:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:38:49.376 18:49:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:38:49.376 18:49:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:38:49.376 18:49:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:38:49.376 18:49:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1398732 -w 256 00:38:49.376 18:49:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:38:49.637 18:49:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1398732 root 20 0 128.2g 60672 34944 S 0.0 0.1 0:20.75 reactor_0' 00:38:49.637 18:49:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1398732 root 20 0 128.2g 60672 34944 S 0.0 0.1 0:20.75 reactor_0 00:38:49.637 18:49:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:38:49.637 18:49:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:38:49.637 18:49:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:38:49.637 18:49:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:38:49.637 18:49:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:38:49.637 18:49:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:38:49.637 18:49:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:38:49.637 18:49:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:38:49.637 18:49:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:38:49.637 18:49:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1398732 1 00:38:49.637 18:49:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1398732 1 idle 00:38:49.637 18:49:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1398732 00:38:49.637 18:49:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:38:49.637 18:49:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:38:49.637 18:49:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:38:49.637 18:49:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:38:49.637 18:49:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:38:49.637 18:49:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:38:49.637 18:49:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
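# The loop traced around this point is the common.sh busy/idle probe. A condensed
# sketch of what those steps do, using only the commands visible in this trace
# (the helper name below is illustrative; top field 9 is the %CPU of the reactor
# thread, and the thresholds and retry count are the values logged above):
reactor_cpu() {    # args: <nvmf_tgt pid> <reactor index>
    top -bHn 1 -p "$1" -w 256 | grep "reactor_$2" | sed -e 's/^\s*//g' | awk '{print $9}'
}
# Idle check: %CPU must stay at or below idle_threshold (30); busy check: it must
# reach busy_threshold (65, or 30 while spdk_nvme_perf is running); up to 10 samples
# are taken, matching the (( j = 10 )) loop in the trace.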
00:38:49.637 18:49:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:38:49.637 18:49:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:38:49.637 18:49:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1398732 -w 256 00:38:49.637 18:49:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:38:49.898 18:49:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1398826 root 20 0 128.2g 60672 34944 S 0.0 0.1 0:10.04 reactor_1' 00:38:49.898 18:49:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1398826 root 20 0 128.2g 60672 34944 S 0.0 0.1 0:10.04 reactor_1 00:38:49.898 18:49:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:38:49.898 18:49:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:38:49.898 18:49:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:38:49.898 18:49:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:38:49.898 18:49:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:38:49.898 18:49:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:38:49.898 18:49:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:38:49.898 18:49:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:38:49.898 18:49:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:38:49.898 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:38:49.898 18:49:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:38:49.898 18:49:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1219 -- # local i=0 00:38:49.898 18:49:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:38:49.898 18:49:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:38:49.898 18:49:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:38:49.898 18:49:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:38:49.898 18:49:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # return 0 00:38:49.898 18:49:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:38:49.898 18:49:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:38:49.898 18:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@514 -- # nvmfcleanup 00:38:49.898 18:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:38:49.898 18:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:49.898 18:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:38:49.898 18:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:49.898 18:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:49.898 rmmod nvme_tcp 00:38:49.898 rmmod nvme_fabrics 00:38:49.898 rmmod nvme_keyring 00:38:49.898 18:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:49.898 18:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:38:49.898 18:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:38:49.898 18:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@515 -- # '[' -n 
1398732 ']' 00:38:49.898 18:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # killprocess 1398732 00:38:49.898 18:49:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@950 -- # '[' -z 1398732 ']' 00:38:49.898 18:49:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # kill -0 1398732 00:38:49.898 18:49:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # uname 00:38:49.898 18:49:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:49.898 18:49:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1398732 00:38:49.898 18:49:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:49.898 18:49:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:49.898 18:49:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1398732' 00:38:49.898 killing process with pid 1398732 00:38:49.898 18:49:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@969 -- # kill 1398732 00:38:49.898 18:49:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@974 -- # wait 1398732 00:38:50.466 18:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:38:50.466 18:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:38:50.466 18:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:38:50.466 18:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:38:50.466 18:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # iptables-save 00:38:50.466 18:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:38:50.466 18:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # iptables-restore 00:38:50.466 18:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:50.466 18:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:50.466 18:49:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:50.466 18:49:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:50.466 18:49:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:53.010 18:49:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:53.010 00:38:53.010 real 0m20.483s 00:38:53.010 user 0m38.513s 00:38:53.010 sys 0m7.562s 00:38:53.010 18:49:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:53.010 18:49:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:38:53.010 ************************************ 00:38:53.010 END TEST nvmf_interrupt 00:38:53.010 ************************************ 00:38:53.010 00:38:53.010 real 32m23.994s 00:38:53.010 user 74m3.306s 00:38:53.010 sys 8m26.841s 00:38:53.010 18:49:20 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:53.010 18:49:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:53.010 ************************************ 00:38:53.010 END TEST nvmf_tcp 00:38:53.010 ************************************ 00:38:53.010 18:49:20 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:38:53.010 18:49:20 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:38:53.010 18:49:20 -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:38:53.010 18:49:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:53.010 18:49:20 -- common/autotest_common.sh@10 -- # set +x 00:38:53.010 ************************************ 00:38:53.010 START TEST spdkcli_nvmf_tcp 00:38:53.010 ************************************ 00:38:53.010 18:49:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:38:53.010 * Looking for test storage... 00:38:53.010 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:38:53.010 18:49:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:38:53.010 18:49:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:38:53.010 18:49:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:38:53.010 18:49:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:38:53.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:53.011 --rc genhtml_branch_coverage=1 00:38:53.011 --rc genhtml_function_coverage=1 00:38:53.011 --rc genhtml_legend=1 00:38:53.011 --rc geninfo_all_blocks=1 00:38:53.011 --rc geninfo_unexecuted_blocks=1 00:38:53.011 00:38:53.011 ' 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:38:53.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:53.011 --rc genhtml_branch_coverage=1 00:38:53.011 --rc genhtml_function_coverage=1 00:38:53.011 --rc genhtml_legend=1 00:38:53.011 --rc geninfo_all_blocks=1 00:38:53.011 --rc geninfo_unexecuted_blocks=1 00:38:53.011 00:38:53.011 ' 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:38:53.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:53.011 --rc genhtml_branch_coverage=1 00:38:53.011 --rc genhtml_function_coverage=1 00:38:53.011 --rc genhtml_legend=1 00:38:53.011 --rc geninfo_all_blocks=1 00:38:53.011 --rc geninfo_unexecuted_blocks=1 00:38:53.011 00:38:53.011 ' 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:38:53.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:53.011 --rc genhtml_branch_coverage=1 00:38:53.011 --rc genhtml_function_coverage=1 00:38:53.011 --rc genhtml_legend=1 00:38:53.011 --rc geninfo_all_blocks=1 00:38:53.011 --rc geninfo_unexecuted_blocks=1 00:38:53.011 00:38:53.011 ' 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:38:53.011 
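Note: the scripts/common.sh trace above ("lcov --version", "lt 1.15 2", cmp_versions) is the bash version gate used to pick lcov options. A minimal stand-alone sketch of the same field-by-field comparison, assuming purely numeric dotted versions and splitting on ".", "-" and ":" as the trace shows; this is written for illustration, it is not the SPDK helper itself:

#!/usr/bin/env bash
# Sketch of a "less than" check over dotted version strings, in the spirit
# of the lt/cmp_versions helpers traced above (not the SPDK code itself).
version_lt() {
    local -a ver1 ver2
    local v f1 f2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        f1=${ver1[v]:-0}   # missing fields compare as 0, so 1.15 == 1.15.0
        f2=${ver2[v]:-0}
        ((f1 < f2)) && return 0
        ((f1 > f2)) && return 1
    done
    return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"   # mirrors the lcov check traced above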
18:49:21 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:38:53.011 18:49:21 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:53.011 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1400901 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1400901 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 1400901 ']' 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:53.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:53.011 18:49:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:53.012 [2024-10-08 18:49:21.424761] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
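The run_nvmf_tgt step above launches build/bin/nvmf_tgt with a two-core mask (-m 0x3), and waitforlisten then blocks until the target answers on its UNIX-domain RPC socket. A hedged sketch of that start-and-wait pattern, assuming the default /var/tmp/spdk.sock socket and using scripts/rpc.py only as a readiness probe; SPDK_DIR is a placeholder, not the workspace path from the trace:

#!/usr/bin/env bash
# Start the NVMe-oF target and poll until its JSON-RPC socket responds,
# roughly what waitforlisten does above. Paths are illustrative.
SPDK_DIR=${SPDK_DIR:-/path/to/spdk}
RPC_SOCK=/var/tmp/spdk.sock

"$SPDK_DIR/build/bin/nvmf_tgt" -m 0x3 -p 0 &
tgt_pid=$!

ready=0
for ((i = 0; i < 100; i++)); do
    kill -0 "$tgt_pid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    if [[ -S $RPC_SOCK ]] &&
       "$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods &>/dev/null; then
        ready=1
        break
    fi
    sleep 0.5
done
((ready)) || { echo "timed out waiting for $RPC_SOCK" >&2; kill "$tgt_pid"; exit 1; }
echo "nvmf_tgt (pid $tgt_pid) is listening on $RPC_SOCK"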
00:38:53.012 [2024-10-08 18:49:21.424861] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1400901 ] 00:38:53.012 [2024-10-08 18:49:21.533400] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:38:53.272 [2024-10-08 18:49:21.731324] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:38:53.272 [2024-10-08 18:49:21.731344] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:38:53.533 18:49:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:53.533 18:49:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:38:53.533 18:49:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:38:53.533 18:49:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:53.533 18:49:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:53.533 18:49:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:38:53.533 18:49:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:38:53.533 18:49:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:38:53.794 18:49:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:53.794 18:49:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:53.794 18:49:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:38:53.794 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:38:53.794 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:38:53.794 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:38:53.794 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:38:53.794 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:38:53.794 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:38:53.794 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:38:53.794 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:38:53.794 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:38:53.794 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:38:53.794 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:38:53.794 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:38:53.794 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:38:53.794 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:38:53.794 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:38:53.794 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:38:53.794 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:38:53.794 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:38:53.794 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:38:53.794 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:38:53.794 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:38:53.794 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:38:53.794 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:38:53.794 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:38:53.794 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:38:53.794 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:38:53.794 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:38:53.794 ' 00:38:57.090 [2024-10-08 18:49:25.375613] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:58.470 [2024-10-08 18:49:26.850264] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:39:01.004 [2024-10-08 18:49:29.351194] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:39:03.543 [2024-10-08 18:49:31.499517] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:39:04.924 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:39:04.924 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:39:04.924 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:39:04.924 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:39:04.924 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:39:04.924 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:39:04.924 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:39:04.924 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:39:04.924 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:39:04.924 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:39:04.924 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:39:04.924 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:39:04.924 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:39:04.924 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:39:04.924 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:39:04.924 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:39:04.924 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:39:04.924 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:39:04.924 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:39:04.924 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:39:04.924 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:39:04.924 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:39:04.924 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:39:04.924 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:39:04.924 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:39:04.924 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:39:04.924 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:39:04.924 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:39:04.924 18:49:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:39:04.924 18:49:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:04.924 18:49:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:04.924 18:49:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:39:04.924 18:49:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:04.924 18:49:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:04.924 18:49:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:39:04.924 18:49:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:39:05.493 18:49:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:39:05.493 18:49:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:39:05.493 18:49:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:39:05.493 18:49:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:05.493 18:49:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:05.493 
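For reference, the configuration that spdkcli_job.py drives above can be reproduced with one-shot spdkcli commands. This is only a sketch, under the assumption that scripts/spdkcli.py accepts a single command per invocation (as the "ll /nvmf" call above does); the command paths and arguments are taken verbatim from the trace, and only the slice for one subsystem is shown:

#!/usr/bin/env bash
# Minimal slice of the nvmf config built above: one malloc bdev, the TCP
# transport, and a subsystem with a namespace and a listener.
SPDKCLI=${SPDKCLI:-/path/to/spdk/scripts/spdkcli.py}   # placeholder path

"$SPDKCLI" /bdevs/malloc create 32 512 Malloc3
"$SPDKCLI" nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192
"$SPDKCLI" /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW \
    max_namespaces=4 allow_any_host=True
"$SPDKCLI" /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1
"$SPDKCLI" /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses \
    create tcp 127.0.0.1 4260 IPv4

The ordering matters: the bdev must exist before a namespace can reference it, and the transport must exist before a listener can be created on it, which is why the job above creates bdevs and the transport first.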
18:49:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:39:05.493 18:49:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:05.493 18:49:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:05.493 18:49:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:39:05.493 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:39:05.493 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:39:05.493 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:39:05.493 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:39:05.493 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:39:05.493 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:39:05.493 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:39:05.493 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:39:05.493 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:39:05.493 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:39:05.493 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:39:05.493 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:39:05.493 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:39:05.493 ' 00:39:12.067 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:39:12.067 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:39:12.067 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:39:12.068 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:39:12.068 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:39:12.068 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:39:12.068 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:39:12.068 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:39:12.068 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:39:12.068 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:39:12.068 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:39:12.068 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:39:12.068 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:39:12.068 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:39:12.068 18:49:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:39:12.068 18:49:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:12.068 18:49:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:12.068 
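The check_match step above snapshots "spdkcli.py ll /nvmf" and validates it against test/spdkcli/match_files/spdkcli_nvmf.test.match with SPDK's match tool. A rough stand-in using plain diff (the real match tool supports wildcard lines that diff does not; the snapshot path here is illustrative) could look like this:

#!/usr/bin/env bash
# Dump the spdkcli view of /nvmf and compare it against a saved snapshot.
SPDK_DIR=${SPDK_DIR:-/path/to/spdk}
SNAPSHOT=${SNAPSHOT:-/tmp/spdkcli_nvmf.expected}

"$SPDK_DIR/scripts/spdkcli.py" ll /nvmf > /tmp/spdkcli_nvmf.actual

if diff -u "$SNAPSHOT" /tmp/spdkcli_nvmf.actual; then
    echo "nvmf configuration matches the expected snapshot"
else
    echo "nvmf configuration drifted from the snapshot" >&2
    exit 1
fi

The clear-config pass traced above then deletes in roughly the reverse order of creation: namespaces and hosts first, then listeners, then the subsystems, and finally the malloc bdevs, so nothing is removed while something else still references it.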
18:49:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1400901 00:39:12.068 18:49:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 1400901 ']' 00:39:12.068 18:49:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 1400901 00:39:12.068 18:49:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:39:12.068 18:49:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:12.068 18:49:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1400901 00:39:12.068 18:49:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:12.068 18:49:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:12.068 18:49:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1400901' 00:39:12.068 killing process with pid 1400901 00:39:12.068 18:49:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 1400901 00:39:12.068 18:49:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 1400901 00:39:12.068 18:49:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:39:12.068 18:49:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:39:12.068 18:49:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1400901 ']' 00:39:12.068 18:49:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1400901 00:39:12.068 18:49:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 1400901 ']' 00:39:12.068 18:49:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 1400901 00:39:12.068 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1400901) - No such process 00:39:12.068 18:49:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 1400901 is not found' 00:39:12.068 Process with pid 1400901 is not found 00:39:12.068 18:49:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:39:12.068 18:49:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:39:12.068 18:49:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:39:12.068 00:39:12.068 real 0m19.036s 00:39:12.068 user 0m41.438s 00:39:12.068 sys 0m1.295s 00:39:12.068 18:49:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:12.068 18:49:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:12.068 ************************************ 00:39:12.068 END TEST spdkcli_nvmf_tcp 00:39:12.068 ************************************ 00:39:12.068 18:49:40 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:39:12.068 18:49:40 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:39:12.068 18:49:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:12.068 18:49:40 -- common/autotest_common.sh@10 -- # set +x 00:39:12.068 ************************************ 00:39:12.068 START TEST nvmf_identify_passthru 00:39:12.068 ************************************ 00:39:12.068 18:49:40 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:39:12.068 * Looking for test 
storage... 00:39:12.068 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:12.068 18:49:40 nvmf_identify_passthru -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:39:12.068 18:49:40 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # lcov --version 00:39:12.068 18:49:40 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:39:12.068 18:49:40 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:39:12.068 18:49:40 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:12.068 18:49:40 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:12.068 18:49:40 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:12.068 18:49:40 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:39:12.068 18:49:40 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:39:12.068 18:49:40 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:39:12.068 18:49:40 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:39:12.068 18:49:40 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:39:12.068 18:49:40 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:39:12.068 18:49:40 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:39:12.068 18:49:40 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:12.068 18:49:40 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:39:12.068 18:49:40 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:39:12.068 18:49:40 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:12.068 18:49:40 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:12.068 18:49:40 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:39:12.068 18:49:40 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:39:12.068 18:49:40 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:12.068 18:49:40 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:39:12.068 18:49:40 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:39:12.068 18:49:40 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:39:12.068 18:49:40 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:39:12.068 18:49:40 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:12.068 18:49:40 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:39:12.068 18:49:40 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:39:12.068 18:49:40 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:12.068 18:49:40 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:12.068 18:49:40 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:39:12.068 18:49:40 nvmf_identify_passthru -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:12.068 18:49:40 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:39:12.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:12.068 --rc genhtml_branch_coverage=1 00:39:12.068 --rc genhtml_function_coverage=1 00:39:12.068 --rc genhtml_legend=1 00:39:12.068 --rc geninfo_all_blocks=1 00:39:12.068 --rc geninfo_unexecuted_blocks=1 00:39:12.068 00:39:12.068 ' 00:39:12.068 18:49:40 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:39:12.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:12.068 --rc genhtml_branch_coverage=1 00:39:12.068 --rc genhtml_function_coverage=1 00:39:12.068 --rc genhtml_legend=1 00:39:12.068 --rc geninfo_all_blocks=1 00:39:12.068 --rc geninfo_unexecuted_blocks=1 00:39:12.068 00:39:12.068 ' 00:39:12.068 18:49:40 nvmf_identify_passthru -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:39:12.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:12.068 --rc genhtml_branch_coverage=1 00:39:12.068 --rc genhtml_function_coverage=1 00:39:12.068 --rc genhtml_legend=1 00:39:12.068 --rc geninfo_all_blocks=1 00:39:12.068 --rc geninfo_unexecuted_blocks=1 00:39:12.068 00:39:12.068 ' 00:39:12.068 18:49:40 nvmf_identify_passthru -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:39:12.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:12.068 --rc genhtml_branch_coverage=1 00:39:12.068 --rc genhtml_function_coverage=1 00:39:12.068 --rc genhtml_legend=1 00:39:12.068 --rc geninfo_all_blocks=1 00:39:12.068 --rc geninfo_unexecuted_blocks=1 00:39:12.068 00:39:12.068 ' 00:39:12.068 18:49:40 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:12.068 18:49:40 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:39:12.068 18:49:40 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:12.068 18:49:40 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:12.068 18:49:40 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:12.068 18:49:40 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:39:12.068 18:49:40 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:12.068 18:49:40 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:12.068 18:49:40 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:12.068 18:49:40 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:12.068 18:49:40 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:12.068 18:49:40 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:12.068 18:49:40 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:39:12.068 18:49:40 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:39:12.068 18:49:40 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:12.068 18:49:40 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:12.068 18:49:40 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:12.068 18:49:40 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:12.068 18:49:40 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:12.068 18:49:40 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:39:12.068 18:49:40 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:12.068 18:49:40 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:12.068 18:49:40 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:12.069 18:49:40 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:12.069 18:49:40 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:12.069 18:49:40 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:12.069 18:49:40 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:39:12.069 18:49:40 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:12.069 18:49:40 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:39:12.069 18:49:40 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:12.069 18:49:40 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:12.069 18:49:40 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:12.069 18:49:40 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:12.069 18:49:40 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:12.069 18:49:40 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:12.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:12.069 18:49:40 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:12.069 18:49:40 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:12.069 18:49:40 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:12.069 18:49:40 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:12.069 18:49:40 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:39:12.069 18:49:40 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:12.069 18:49:40 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:12.069 18:49:40 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:12.069 18:49:40 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:12.069 18:49:40 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:12.069 18:49:40 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:12.069 18:49:40 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:39:12.069 18:49:40 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:12.069 18:49:40 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:39:12.069 18:49:40 nvmf_identify_passthru -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:39:12.069 18:49:40 nvmf_identify_passthru -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:12.069 18:49:40 nvmf_identify_passthru -- nvmf/common.sh@474 -- # prepare_net_devs 00:39:12.069 18:49:40 nvmf_identify_passthru -- nvmf/common.sh@436 -- # local -g is_hw=no 00:39:12.069 18:49:40 nvmf_identify_passthru -- nvmf/common.sh@438 -- # remove_spdk_ns 00:39:12.069 18:49:40 nvmf_identify_passthru -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:12.069 18:49:40 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:12.069 18:49:40 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:12.069 18:49:40 nvmf_identify_passthru -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:39:12.069 18:49:40 nvmf_identify_passthru -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:39:12.069 18:49:40 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:39:12.069 18:49:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:15.363 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:15.363 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:39:15.363 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:15.363 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:15.363 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:15.363 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:15.363 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:15.363 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:39:15.363 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:15.363 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:39:15.363 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:39:15.363 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:39:15.363 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:39:15.363 18:49:43 
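The gather_supported_nvmf_pci_devs walk above builds its candidate NIC list from PCI vendor:device IDs (Intel E810 0x1592/0x159b, X722 0x37d2, plus several Mellanox parts) and then maps each PCI address to a Linux net device through sysfs. A sketch of that discovery using lspci, with only a few of the IDs from the list above; this is not the pci_bus_cache mechanism common.sh actually uses:

#!/usr/bin/env bash
# Find NVMe-oF capable NICs by PCI ID and map each one to its Linux netdev.
intel=0x8086 mellanox=0x15b3
ids=("$intel:0x1592" "$intel:0x159b" "$intel:0x37d2"
     "$mellanox:0x1017" "$mellanox:0x1019")

for id in "${ids[@]}"; do
    # lspci -D -d vendor:device prints the full domain:bus:dev.fn address.
    while read -r bdf _; do
        for netdir in /sys/bus/pci/devices/"$bdf"/net/*; do
            [[ -e $netdir ]] || continue        # skip devices with no netdev
            echo "$bdf ($id) -> ${netdir##*/}"
        done
    done < <(lspci -D -d "${id//0x/}" 2>/dev/null)
done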
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:39:15.363 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:39:15.363 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:15.363 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:15.363 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:15.363 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:15.363 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:15.363 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:15.363 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:15.363 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:15.363 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:15.363 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:15.363 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:15.363 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:15.363 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:15.363 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:15.363 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:15.363 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:15.363 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:15.363 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:15.363 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:15.363 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:39:15.363 Found 0000:84:00.0 (0x8086 - 0x159b) 00:39:15.363 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:15.363 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:15.363 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:15.363 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:15.363 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:15.363 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:15.363 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:39:15.363 Found 0000:84:00.1 (0x8086 - 0x159b) 00:39:15.363 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:15.363 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:15.363 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:15.363 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:15.363 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:39:15.363 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:15.363 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:15.363 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:15.363 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:39:15.363 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:15.363 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:39:15.363 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:15.363 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ up == up ]] 00:39:15.363 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:39:15.363 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:15.363 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:39:15.363 Found net devices under 0000:84:00.0: cvl_0_0 00:39:15.363 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:39:15.364 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:39:15.364 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:15.364 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:39:15.364 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:15.364 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ up == up ]] 00:39:15.364 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:39:15.364 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:15.364 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:39:15.364 Found net devices under 0000:84:00.1: cvl_0_1 00:39:15.364 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:39:15.364 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:39:15.364 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@440 -- # is_hw=yes 00:39:15.364 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:39:15.364 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:39:15.364 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:39:15.364 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:15.364 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:15.364 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:15.364 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:15.364 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:15.364 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:15.364 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:15.364 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:15.364 18:49:43 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:15.364 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:15.364 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:15.364 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:15.364 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:15.364 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:15.364 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:15.364 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:15.364 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:15.364 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:15.364 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:15.364 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:15.364 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:15.364 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:15.364 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:15.364 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:15.364 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:39:15.364 00:39:15.364 --- 10.0.0.2 ping statistics --- 00:39:15.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:15.364 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:39:15.364 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:15.364 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
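The nvmf_tcp_init sequence above is what gives the test its point-to-point topology: the target-side port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace as 10.0.0.2, the initiator side (cvl_0_1) stays in the root namespace as 10.0.0.1, TCP port 4420 is opened, and a ping in each direction proves the link. A condensed sketch of those commands, with the interface names and addresses taken from the trace:

#!/usr/bin/env bash
# Put the target port in its own netns, address both ends, open the
# NVMe/TCP port, and verify connectivity in both directions.
set -e
TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
TGT_IP=10.0.0.2 INI_IP=10.0.0.1

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add "$INI_IP/24" dev "$INI_IF"
ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Accept NVMe/TCP (port 4420) traffic arriving on the host-side interface.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

ping -c 1 "$TGT_IP"                       # root namespace -> target namespace
ip netns exec "$NS" ping -c 1 "$INI_IP"   # target namespace -> root namespace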
00:39:15.364 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:39:15.364 00:39:15.364 --- 10.0.0.1 ping statistics --- 00:39:15.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:15.364 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:39:15.364 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:15.364 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@448 -- # return 0 00:39:15.364 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:39:15.364 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:15.364 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:39:15.364 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:39:15.364 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:15.364 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:39:15.364 18:49:43 nvmf_identify_passthru -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:39:15.364 18:49:43 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:39:15.364 18:49:43 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:15.364 18:49:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:15.364 18:49:43 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:39:15.364 18:49:43 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:39:15.364 18:49:43 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:39:15.364 18:49:43 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:39:15.364 18:49:43 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:39:15.364 18:49:43 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:39:15.364 18:49:43 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:39:15.364 18:49:43 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:39:15.364 18:49:43 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:39:15.364 18:49:43 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:39:15.364 18:49:43 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:39:15.364 18:49:43 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:82:00.0 00:39:15.364 18:49:43 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:82:00.0 00:39:15.364 18:49:43 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:82:00.0 00:39:15.364 18:49:43 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:82:00.0 ']' 00:39:15.364 18:49:43 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:82:00.0' -i 0 00:39:15.364 18:49:43 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:39:15.364 18:49:43 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:39:19.561 18:49:47 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=BTLJ9142051K1P0FGN 00:39:19.561 18:49:47 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:82:00.0' -i 0 00:39:19.561 18:49:47 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:39:19.561 18:49:47 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:39:23.774 18:49:52 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:39:23.774 18:49:52 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:39:23.774 18:49:52 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:23.774 18:49:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:23.774 18:49:52 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:39:23.774 18:49:52 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:23.774 18:49:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:23.774 18:49:52 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1405791 00:39:23.774 18:49:52 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:39:23.774 18:49:52 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:23.774 18:49:52 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1405791 00:39:23.774 18:49:52 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 1405791 ']' 00:39:23.774 18:49:52 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:23.774 18:49:52 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:23.774 18:49:52 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:23.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:23.774 18:49:52 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:23.775 18:49:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:23.775 [2024-10-08 18:49:52.280176] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:39:23.775 [2024-10-08 18:49:52.280288] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:24.036 [2024-10-08 18:49:52.398024] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:24.295 [2024-10-08 18:49:52.618765] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:24.295 [2024-10-08 18:49:52.618883] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:39:24.295 [2024-10-08 18:49:52.618921] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:24.295 [2024-10-08 18:49:52.618950] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:24.295 [2024-10-08 18:49:52.618976] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:24.295 [2024-10-08 18:49:52.622723] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:39:24.295 [2024-10-08 18:49:52.622796] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:39:24.295 [2024-10-08 18:49:52.622800] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:39:24.295 [2024-10-08 18:49:52.622754] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:39:25.233 18:49:53 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:25.233 18:49:53 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:39:25.233 18:49:53 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:39:25.233 18:49:53 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:25.233 18:49:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:25.233 INFO: Log level set to 20 00:39:25.233 INFO: Requests: 00:39:25.233 { 00:39:25.233 "jsonrpc": "2.0", 00:39:25.233 "method": "nvmf_set_config", 00:39:25.233 "id": 1, 00:39:25.233 "params": { 00:39:25.233 "admin_cmd_passthru": { 00:39:25.233 "identify_ctrlr": true 00:39:25.233 } 00:39:25.233 } 00:39:25.233 } 00:39:25.233 00:39:25.233 INFO: response: 00:39:25.233 { 00:39:25.233 "jsonrpc": "2.0", 00:39:25.233 "id": 1, 00:39:25.233 "result": true 00:39:25.233 } 00:39:25.233 00:39:25.233 18:49:53 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:25.233 18:49:53 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:39:25.233 18:49:53 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:25.233 18:49:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:25.233 INFO: Setting log level to 20 00:39:25.233 INFO: Setting log level to 20 00:39:25.233 INFO: Log level set to 20 00:39:25.233 INFO: Log level set to 20 00:39:25.233 INFO: Requests: 00:39:25.233 { 00:39:25.233 "jsonrpc": "2.0", 00:39:25.233 "method": "framework_start_init", 00:39:25.233 "id": 1 00:39:25.233 } 00:39:25.233 00:39:25.233 INFO: Requests: 00:39:25.233 { 00:39:25.233 "jsonrpc": "2.0", 00:39:25.233 "method": "framework_start_init", 00:39:25.233 "id": 1 00:39:25.233 } 00:39:25.233 00:39:25.233 [2024-10-08 18:49:53.590871] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:39:25.233 INFO: response: 00:39:25.233 { 00:39:25.233 "jsonrpc": "2.0", 00:39:25.233 "id": 1, 00:39:25.233 "result": true 00:39:25.233 } 00:39:25.233 00:39:25.233 INFO: response: 00:39:25.233 { 00:39:25.233 "jsonrpc": "2.0", 00:39:25.233 "id": 1, 00:39:25.233 "result": true 00:39:25.233 } 00:39:25.233 00:39:25.233 18:49:53 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:25.233 18:49:53 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:25.233 18:49:53 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:25.233 18:49:53 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:39:25.233 INFO: Setting log level to 40 00:39:25.233 INFO: Setting log level to 40 00:39:25.233 INFO: Setting log level to 40 00:39:25.233 [2024-10-08 18:49:53.600893] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:25.233 18:49:53 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:25.233 18:49:53 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:39:25.233 18:49:53 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:25.233 18:49:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:25.233 18:49:53 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:82:00.0 00:39:25.233 18:49:53 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:25.233 18:49:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:28.529 Nvme0n1 00:39:28.529 18:49:56 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:28.529 18:49:56 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:39:28.529 18:49:56 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:28.529 18:49:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:28.529 18:49:56 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:28.529 18:49:56 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:39:28.529 18:49:56 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:28.529 18:49:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:28.529 18:49:56 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:28.529 18:49:56 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:28.529 18:49:56 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:28.529 18:49:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:28.529 [2024-10-08 18:49:56.498960] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:28.529 18:49:56 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:28.529 18:49:56 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:39:28.529 18:49:56 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:28.529 18:49:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:28.529 [ 00:39:28.529 { 00:39:28.529 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:39:28.529 "subtype": "Discovery", 00:39:28.529 "listen_addresses": [], 00:39:28.529 "allow_any_host": true, 00:39:28.529 "hosts": [] 00:39:28.529 }, 00:39:28.529 { 00:39:28.529 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:39:28.529 "subtype": "NVMe", 00:39:28.529 "listen_addresses": [ 00:39:28.529 { 00:39:28.529 "trtype": "TCP", 00:39:28.529 "adrfam": "IPv4", 00:39:28.529 "traddr": "10.0.0.2", 00:39:28.529 "trsvcid": "4420" 00:39:28.529 } 00:39:28.529 ], 00:39:28.529 "allow_any_host": true, 00:39:28.529 "hosts": [], 00:39:28.529 "serial_number": 
"SPDK00000000000001", 00:39:28.529 "model_number": "SPDK bdev Controller", 00:39:28.529 "max_namespaces": 1, 00:39:28.529 "min_cntlid": 1, 00:39:28.529 "max_cntlid": 65519, 00:39:28.529 "namespaces": [ 00:39:28.529 { 00:39:28.529 "nsid": 1, 00:39:28.529 "bdev_name": "Nvme0n1", 00:39:28.529 "name": "Nvme0n1", 00:39:28.529 "nguid": "28C0D3FA6EDE4A779F37EFD801584F0F", 00:39:28.529 "uuid": "28c0d3fa-6ede-4a77-9f37-efd801584f0f" 00:39:28.529 } 00:39:28.529 ] 00:39:28.529 } 00:39:28.529 ] 00:39:28.529 18:49:56 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:28.529 18:49:56 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:39:28.529 18:49:56 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:39:28.529 18:49:56 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:39:28.529 18:49:56 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ9142051K1P0FGN 00:39:28.529 18:49:56 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:39:28.529 18:49:56 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:39:28.529 18:49:56 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:39:28.529 18:49:57 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:39:28.529 18:49:57 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ9142051K1P0FGN '!=' BTLJ9142051K1P0FGN ']' 00:39:28.529 18:49:57 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:39:28.529 18:49:57 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:28.529 18:49:57 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:28.529 18:49:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:28.529 18:49:57 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:28.529 18:49:57 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:39:28.529 18:49:57 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:39:28.529 18:49:57 nvmf_identify_passthru -- nvmf/common.sh@514 -- # nvmfcleanup 00:39:28.529 18:49:57 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:39:28.529 18:49:57 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:28.529 18:49:57 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:39:28.529 18:49:57 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:28.529 18:49:57 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:28.529 rmmod nvme_tcp 00:39:28.529 rmmod nvme_fabrics 00:39:28.529 rmmod nvme_keyring 00:39:28.790 18:49:57 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:28.790 18:49:57 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:39:28.790 18:49:57 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:39:28.790 18:49:57 nvmf_identify_passthru -- nvmf/common.sh@515 -- # 
'[' -n 1405791 ']' 00:39:28.790 18:49:57 nvmf_identify_passthru -- nvmf/common.sh@516 -- # killprocess 1405791 00:39:28.790 18:49:57 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 1405791 ']' 00:39:28.790 18:49:57 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 1405791 00:39:28.790 18:49:57 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:39:28.790 18:49:57 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:28.790 18:49:57 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1405791 00:39:28.790 18:49:57 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:28.790 18:49:57 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:28.790 18:49:57 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1405791' 00:39:28.790 killing process with pid 1405791 00:39:28.790 18:49:57 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 1405791 00:39:28.790 18:49:57 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 1405791 00:39:30.698 18:49:58 nvmf_identify_passthru -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:39:30.698 18:49:58 nvmf_identify_passthru -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:39:30.698 18:49:58 nvmf_identify_passthru -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:39:30.698 18:49:58 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:39:30.698 18:49:58 nvmf_identify_passthru -- nvmf/common.sh@789 -- # iptables-save 00:39:30.698 18:49:58 nvmf_identify_passthru -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:39:30.698 18:49:58 nvmf_identify_passthru -- nvmf/common.sh@789 -- # iptables-restore 00:39:30.698 18:49:58 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:30.698 18:49:58 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:30.698 18:49:58 nvmf_identify_passthru -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:30.698 18:49:58 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:30.698 18:49:58 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:32.608 18:50:00 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:32.608 00:39:32.608 real 0m20.824s 00:39:32.608 user 0m31.125s 00:39:32.608 sys 0m4.282s 00:39:32.608 18:50:00 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:32.608 18:50:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:32.608 ************************************ 00:39:32.608 END TEST nvmf_identify_passthru 00:39:32.608 ************************************ 00:39:32.608 18:50:00 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:39:32.608 18:50:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:39:32.608 18:50:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:32.608 18:50:00 -- common/autotest_common.sh@10 -- # set +x 00:39:32.608 ************************************ 00:39:32.608 START TEST nvmf_dif 00:39:32.608 ************************************ 00:39:32.608 18:50:01 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:39:32.608 * Looking for test 
storage... 00:39:32.608 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:32.608 18:50:01 nvmf_dif -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:39:32.608 18:50:01 nvmf_dif -- common/autotest_common.sh@1681 -- # lcov --version 00:39:32.608 18:50:01 nvmf_dif -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:39:32.868 18:50:01 nvmf_dif -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:39:32.868 18:50:01 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:32.868 18:50:01 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:32.868 18:50:01 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:32.868 18:50:01 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:39:32.868 18:50:01 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:39:32.868 18:50:01 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:39:32.868 18:50:01 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:39:32.868 18:50:01 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:39:32.868 18:50:01 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:39:32.868 18:50:01 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:39:32.868 18:50:01 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:32.868 18:50:01 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:39:32.868 18:50:01 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:39:32.868 18:50:01 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:32.868 18:50:01 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:32.868 18:50:01 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:39:32.868 18:50:01 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:39:32.868 18:50:01 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:32.868 18:50:01 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:39:32.868 18:50:01 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:39:32.868 18:50:01 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:39:32.868 18:50:01 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:39:32.868 18:50:01 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:32.868 18:50:01 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:39:32.868 18:50:01 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:39:32.868 18:50:01 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:32.868 18:50:01 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:32.868 18:50:01 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:39:32.868 18:50:01 nvmf_dif -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:32.868 18:50:01 nvmf_dif -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:39:32.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:32.868 --rc genhtml_branch_coverage=1 00:39:32.868 --rc genhtml_function_coverage=1 00:39:32.868 --rc genhtml_legend=1 00:39:32.868 --rc geninfo_all_blocks=1 00:39:32.868 --rc geninfo_unexecuted_blocks=1 00:39:32.868 00:39:32.868 ' 00:39:32.868 18:50:01 nvmf_dif -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:39:32.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:32.868 --rc genhtml_branch_coverage=1 00:39:32.868 --rc genhtml_function_coverage=1 00:39:32.868 --rc genhtml_legend=1 00:39:32.868 --rc geninfo_all_blocks=1 00:39:32.868 --rc geninfo_unexecuted_blocks=1 00:39:32.868 00:39:32.868 ' 00:39:32.868 18:50:01 nvmf_dif -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:39:32.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:32.868 --rc genhtml_branch_coverage=1 00:39:32.868 --rc genhtml_function_coverage=1 00:39:32.868 --rc genhtml_legend=1 00:39:32.868 --rc geninfo_all_blocks=1 00:39:32.868 --rc geninfo_unexecuted_blocks=1 00:39:32.868 00:39:32.868 ' 00:39:32.868 18:50:01 nvmf_dif -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:39:32.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:32.868 --rc genhtml_branch_coverage=1 00:39:32.868 --rc genhtml_function_coverage=1 00:39:32.868 --rc genhtml_legend=1 00:39:32.868 --rc geninfo_all_blocks=1 00:39:32.868 --rc geninfo_unexecuted_blocks=1 00:39:32.868 00:39:32.868 ' 00:39:32.868 18:50:01 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:32.868 18:50:01 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:39:32.868 18:50:01 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:32.868 18:50:01 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:32.868 18:50:01 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:32.868 18:50:01 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:32.868 18:50:01 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:32.868 18:50:01 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:32.868 18:50:01 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:32.868 18:50:01 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:32.868 18:50:01 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:32.868 18:50:01 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:32.868 18:50:01 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:39:32.868 18:50:01 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:39:32.868 18:50:01 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:32.868 18:50:01 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:32.868 18:50:01 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:32.868 18:50:01 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:32.868 18:50:01 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:32.868 18:50:01 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:39:32.868 18:50:01 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:32.868 18:50:01 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:32.868 18:50:01 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:32.868 18:50:01 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:32.868 18:50:01 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:32.868 18:50:01 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:32.868 18:50:01 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:39:32.868 18:50:01 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:32.868 18:50:01 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:39:32.868 18:50:01 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:32.868 18:50:01 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:32.868 18:50:01 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:32.868 18:50:01 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:32.868 18:50:01 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:32.868 18:50:01 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:32.868 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:32.868 18:50:01 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:32.868 18:50:01 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:32.868 18:50:01 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:32.868 18:50:01 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:39:32.868 18:50:01 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:39:32.868 18:50:01 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:39:32.868 18:50:01 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:39:32.868 18:50:01 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:39:32.868 18:50:01 nvmf_dif -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:39:32.868 18:50:01 nvmf_dif -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:32.868 18:50:01 nvmf_dif -- nvmf/common.sh@474 -- # prepare_net_devs 00:39:32.868 18:50:01 nvmf_dif -- nvmf/common.sh@436 -- # local -g is_hw=no 00:39:32.868 18:50:01 nvmf_dif -- nvmf/common.sh@438 -- # remove_spdk_ns 00:39:32.868 18:50:01 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:32.868 18:50:01 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:32.868 18:50:01 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:32.868 18:50:01 nvmf_dif -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:39:32.868 18:50:01 nvmf_dif -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:39:32.868 18:50:01 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:39:32.868 18:50:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:39:36.156 Found 0000:84:00.0 (0x8086 - 0x159b) 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:36.156 
18:50:04 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:39:36.156 Found 0000:84:00.1 (0x8086 - 0x159b) 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@416 -- # [[ up == up ]] 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:39:36.156 Found net devices under 0000:84:00.0: cvl_0_0 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@416 -- # [[ up == up ]] 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:39:36.156 Found net devices under 0000:84:00.1: cvl_0_1 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@440 -- # is_hw=yes 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:36.156 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:36.156 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:39:36.156 00:39:36.156 --- 10.0.0.2 ping statistics --- 00:39:36.156 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:36.156 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:36.156 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:36.156 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:39:36.156 00:39:36.156 --- 10.0.0.1 ping statistics --- 00:39:36.156 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:36.156 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@448 -- # return 0 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:39:36.156 18:50:04 nvmf_dif -- nvmf/common.sh@477 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:39:37.533 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:39:37.533 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:39:37.533 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:39:37.533 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:39:37.533 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:39:37.533 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:39:37.533 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:39:37.533 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:39:37.533 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:39:37.533 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:39:37.533 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:39:37.533 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:39:37.533 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:39:37.533 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:39:37.533 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:39:37.533 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:39:37.533 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:39:37.793 18:50:06 nvmf_dif -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:37.793 18:50:06 nvmf_dif -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:39:37.793 18:50:06 nvmf_dif -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:39:37.793 18:50:06 nvmf_dif -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:37.793 18:50:06 nvmf_dif -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:39:37.793 18:50:06 nvmf_dif -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:39:37.793 18:50:06 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:39:37.793 18:50:06 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:39:37.793 18:50:06 nvmf_dif -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:39:37.793 18:50:06 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:37.793 18:50:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:37.793 18:50:06 nvmf_dif -- nvmf/common.sh@507 -- # nvmfpid=1409339 00:39:37.793 18:50:06 nvmf_dif -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:39:37.793 18:50:06 nvmf_dif -- nvmf/common.sh@508 -- # waitforlisten 1409339 00:39:37.793 18:50:06 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 1409339 ']' 00:39:37.793 18:50:06 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:37.793 18:50:06 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:37.793 18:50:06 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:39:37.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:37.793 18:50:06 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:37.793 18:50:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:37.793 [2024-10-08 18:50:06.229597] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:39:37.793 [2024-10-08 18:50:06.229722] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:38.052 [2024-10-08 18:50:06.334184] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:38.052 [2024-10-08 18:50:06.554605] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:38.052 [2024-10-08 18:50:06.554743] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:38.052 [2024-10-08 18:50:06.554781] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:38.052 [2024-10-08 18:50:06.554810] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:38.052 [2024-10-08 18:50:06.554835] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:38.052 [2024-10-08 18:50:06.556209] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:39:38.311 18:50:06 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:38.311 18:50:06 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:39:38.311 18:50:06 nvmf_dif -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:39:38.311 18:50:06 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:38.311 18:50:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:38.311 18:50:06 nvmf_dif -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:38.311 18:50:06 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:39:38.311 18:50:06 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:39:38.311 18:50:06 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:38.311 18:50:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:38.311 [2024-10-08 18:50:06.810832] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:38.311 18:50:06 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:38.311 18:50:06 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:39:38.311 18:50:06 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:39:38.311 18:50:06 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:38.311 18:50:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:38.572 ************************************ 00:39:38.572 START TEST fio_dif_1_default 00:39:38.572 ************************************ 00:39:38.572 18:50:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:39:38.572 18:50:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:39:38.572 18:50:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:39:38.572 18:50:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:39:38.572 18:50:06 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:39:38.572 18:50:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:39:38.572 18:50:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:39:38.572 18:50:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:38.572 18:50:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:38.572 bdev_null0 00:39:38.572 18:50:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:38.572 18:50:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:39:38.572 18:50:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:38.572 18:50:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:38.572 18:50:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:38.572 18:50:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:39:38.572 18:50:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:38.572 18:50:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:38.572 18:50:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:38.572 18:50:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:38.572 18:50:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:38.572 18:50:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:38.572 [2024-10-08 18:50:06.891418] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:38.572 18:50:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:38.572 18:50:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:39:38.572 18:50:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:39:38.572 18:50:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:39:38.572 18:50:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # config=() 00:39:38.572 18:50:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # local subsystem config 00:39:38.572 18:50:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:39:38.572 18:50:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:38.572 18:50:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:39:38.572 { 00:39:38.572 "params": { 00:39:38.572 "name": "Nvme$subsystem", 00:39:38.572 "trtype": "$TEST_TRANSPORT", 00:39:38.572 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:38.572 "adrfam": "ipv4", 00:39:38.572 "trsvcid": "$NVMF_PORT", 00:39:38.572 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:38.572 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:38.572 "hdgst": ${hdgst:-false}, 00:39:38.572 "ddgst": ${ddgst:-false} 00:39:38.572 }, 00:39:38.572 "method": "bdev_nvme_attach_controller" 00:39:38.572 } 00:39:38.572 EOF 00:39:38.572 )") 00:39:38.572 18:50:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 
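For reference, the rpc_cmd calls traced above are forwarded to SPDK's scripts/rpc.py. A rough standalone equivalent of this test case's setup so far, assuming a target that is already running and listening on the default RPC socket /var/tmp/spdk.sock, might look like the sketch below (illustrative only; the argument values are copied from the trace, they are not a separate run):

rpc=./scripts/rpc.py
# TCP transport with DIF insert/strip enabled
$rpc nvmf_create_transport -t tcp -o --dif-insert-or-strip
# 64 MB null bdev, 512-byte blocks with 16 bytes of metadata, DIF type 1
$rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
# subsystem cnode0, expose the null bdev, listen on 10.0.0.2:4420
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420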
00:39:38.572 18:50:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:38.572 18:50:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:39:38.572 18:50:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:39:38.572 18:50:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:39:38.572 18:50:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:38.572 18:50:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:39:38.572 18:50:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:38.572 18:50:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:39:38.572 18:50:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:39:38.572 18:50:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # cat 00:39:38.572 18:50:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:39:38.572 18:50:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:39:38.572 18:50:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:39:38.572 18:50:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:38.572 18:50:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:39:38.572 18:50:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:39:38.572 18:50:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # jq . 
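The fio_bdev wrapper seen in the trace preloads the SPDK bdev fio plugin and hands fio two anonymous descriptors: the bdev_nvme_attach_controller JSON on /dev/fd/62 and the generated fio job on /dev/fd/61. A rough equivalent using ordinary files, assuming the JSON printed just below is saved as bdev.json and that the attached namespace surfaces as bdev Nvme0n1 (an assumption, not taken from this run), might be:

# minimal job file mirroring the parameters visible in the fio output
# further down (job name filename0, rw=randread, bs=4096, iodepth=4);
# thread=1 is the usual requirement for the SPDK fio plugin, and
# filename=Nvme0n1 is an assumed bdev name
cat > dif.fio <<'EOF'
[filename0]
ioengine=spdk_bdev
thread=1
filename=Nvme0n1
rw=randread
bs=4096
iodepth=4
EOF
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json dif.fio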
00:39:38.572 18:50:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@583 -- # IFS=, 00:39:38.572 18:50:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:39:38.572 "params": { 00:39:38.572 "name": "Nvme0", 00:39:38.572 "trtype": "tcp", 00:39:38.572 "traddr": "10.0.0.2", 00:39:38.572 "adrfam": "ipv4", 00:39:38.572 "trsvcid": "4420", 00:39:38.572 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:38.572 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:38.572 "hdgst": false, 00:39:38.572 "ddgst": false 00:39:38.572 }, 00:39:38.572 "method": "bdev_nvme_attach_controller" 00:39:38.572 }' 00:39:38.572 18:50:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:39:38.572 18:50:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:39:38.572 18:50:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:39:38.572 18:50:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:38.572 18:50:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:39:38.572 18:50:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:39:38.572 18:50:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:39:38.572 18:50:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:39:38.572 18:50:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:39:38.573 18:50:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:38.832 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:39:38.832 fio-3.35 00:39:38.832 Starting 1 thread 00:39:51.054 00:39:51.054 filename0: (groupid=0, jobs=1): err= 0: pid=1409570: Tue Oct 8 18:50:17 2024 00:39:51.054 read: IOPS=95, BW=383KiB/s (392kB/s)(3840KiB/10024msec) 00:39:51.054 slat (nsec): min=7184, max=76814, avg=9993.68, stdev=4401.01 00:39:51.054 clat (usec): min=674, max=43076, avg=41734.50, stdev=3759.33 00:39:51.054 lat (usec): min=687, max=43090, avg=41744.50, stdev=3758.46 00:39:51.054 clat percentiles (usec): 00:39:51.054 | 1.00th=[41157], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:39:51.054 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:39:51.054 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:39:51.054 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:39:51.054 | 99.99th=[43254] 00:39:51.054 bw ( KiB/s): min= 352, max= 416, per=99.72%, avg=382.40, stdev=16.33, samples=20 00:39:51.054 iops : min= 88, max= 104, avg=95.60, stdev= 4.08, samples=20 00:39:51.054 lat (usec) : 750=0.42% 00:39:51.055 lat (msec) : 2=0.42%, 50=99.17% 00:39:51.055 cpu : usr=90.75%, sys=8.86%, ctx=27, majf=0, minf=9 00:39:51.055 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:51.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:51.055 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:51.055 issued rwts: total=960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:51.055 latency : target=0, window=0, percentile=100.00%, depth=4 00:39:51.055 
00:39:51.055 Run status group 0 (all jobs): 00:39:51.055 READ: bw=383KiB/s (392kB/s), 383KiB/s-383KiB/s (392kB/s-392kB/s), io=3840KiB (3932kB), run=10024-10024msec 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:51.055 00:39:51.055 real 0m11.461s 00:39:51.055 user 0m10.406s 00:39:51.055 sys 0m1.334s 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:51.055 ************************************ 00:39:51.055 END TEST fio_dif_1_default 00:39:51.055 ************************************ 00:39:51.055 18:50:18 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:39:51.055 18:50:18 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:39:51.055 18:50:18 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:51.055 18:50:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:51.055 ************************************ 00:39:51.055 START TEST fio_dif_1_multi_subsystems 00:39:51.055 ************************************ 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:51.055 bdev_null0 00:39:51.055 18:50:18 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:51.055 [2024-10-08 18:50:18.414986] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:51.055 bdev_null1 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # config=() 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # local subsystem config 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:39:51.055 { 00:39:51.055 "params": { 00:39:51.055 "name": "Nvme$subsystem", 00:39:51.055 "trtype": "$TEST_TRANSPORT", 00:39:51.055 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:51.055 "adrfam": "ipv4", 00:39:51.055 "trsvcid": "$NVMF_PORT", 00:39:51.055 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:51.055 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:51.055 "hdgst": ${hdgst:-false}, 00:39:51.055 "ddgst": ${ddgst:-false} 00:39:51.055 }, 00:39:51.055 "method": "bdev_nvme_attach_controller" 00:39:51.055 } 00:39:51.055 EOF 00:39:51.055 )") 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:51.055 
18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:39:51.055 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:39:51.055 { 00:39:51.055 "params": { 00:39:51.055 "name": "Nvme$subsystem", 00:39:51.055 "trtype": "$TEST_TRANSPORT", 00:39:51.055 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:51.055 "adrfam": "ipv4", 00:39:51.055 "trsvcid": "$NVMF_PORT", 00:39:51.055 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:51.055 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:51.055 "hdgst": ${hdgst:-false}, 00:39:51.055 "ddgst": ${ddgst:-false} 00:39:51.055 }, 00:39:51.055 "method": "bdev_nvme_attach_controller" 00:39:51.055 } 00:39:51.055 EOF 00:39:51.056 )") 00:39:51.056 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:39:51.056 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:39:51.056 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:39:51.056 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # jq . 00:39:51.056 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@583 -- # IFS=, 00:39:51.056 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:39:51.056 "params": { 00:39:51.056 "name": "Nvme0", 00:39:51.056 "trtype": "tcp", 00:39:51.056 "traddr": "10.0.0.2", 00:39:51.056 "adrfam": "ipv4", 00:39:51.056 "trsvcid": "4420", 00:39:51.056 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:51.056 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:51.056 "hdgst": false, 00:39:51.056 "ddgst": false 00:39:51.056 }, 00:39:51.056 "method": "bdev_nvme_attach_controller" 00:39:51.056 },{ 00:39:51.056 "params": { 00:39:51.056 "name": "Nvme1", 00:39:51.056 "trtype": "tcp", 00:39:51.056 "traddr": "10.0.0.2", 00:39:51.056 "adrfam": "ipv4", 00:39:51.056 "trsvcid": "4420", 00:39:51.056 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:51.056 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:51.056 "hdgst": false, 00:39:51.056 "ddgst": false 00:39:51.056 }, 00:39:51.056 "method": "bdev_nvme_attach_controller" 00:39:51.056 }' 00:39:51.056 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:39:51.056 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:39:51.056 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:39:51.056 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:51.056 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:39:51.056 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:39:51.056 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 
-- # asan_lib= 00:39:51.056 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:39:51.056 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:39:51.056 18:50:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:51.056 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:39:51.056 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:39:51.056 fio-3.35 00:39:51.056 Starting 2 threads 00:40:03.284 00:40:03.284 filename0: (groupid=0, jobs=1): err= 0: pid=1410975: Tue Oct 8 18:50:29 2024 00:40:03.284 read: IOPS=203, BW=816KiB/s (835kB/s)(8176KiB/10021msec) 00:40:03.284 slat (nsec): min=9188, max=41534, avg=13559.66, stdev=3637.23 00:40:03.284 clat (usec): min=614, max=42543, avg=19568.04, stdev=20189.51 00:40:03.284 lat (usec): min=623, max=42562, avg=19581.60, stdev=20189.43 00:40:03.284 clat percentiles (usec): 00:40:03.284 | 1.00th=[ 693], 5.00th=[ 725], 10.00th=[ 742], 20.00th=[ 766], 00:40:03.284 | 30.00th=[ 783], 40.00th=[ 824], 50.00th=[ 873], 60.00th=[41157], 00:40:03.284 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:40:03.284 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:40:03.284 | 99.99th=[42730] 00:40:03.284 bw ( KiB/s): min= 704, max= 896, per=50.60%, avg=816.00, stdev=55.43, samples=20 00:40:03.284 iops : min= 176, max= 224, avg=204.00, stdev=13.86, samples=20 00:40:03.284 lat (usec) : 750=13.45%, 1000=38.85% 00:40:03.284 lat (msec) : 2=1.13%, 4=0.20%, 50=46.38% 00:40:03.284 cpu : usr=94.83%, sys=4.81%, ctx=13, majf=0, minf=159 00:40:03.284 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:03.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:03.284 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:03.284 issued rwts: total=2044,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:03.284 latency : target=0, window=0, percentile=100.00%, depth=4 00:40:03.284 filename1: (groupid=0, jobs=1): err= 0: pid=1410976: Tue Oct 8 18:50:29 2024 00:40:03.284 read: IOPS=199, BW=798KiB/s (817kB/s)(7984KiB/10004msec) 00:40:03.284 slat (nsec): min=9143, max=41100, avg=13391.13, stdev=3676.37 00:40:03.284 clat (usec): min=649, max=42468, avg=20005.90, stdev=20219.19 00:40:03.284 lat (usec): min=661, max=42505, avg=20019.29, stdev=20219.11 00:40:03.284 clat percentiles (usec): 00:40:03.284 | 1.00th=[ 693], 5.00th=[ 717], 10.00th=[ 725], 20.00th=[ 750], 00:40:03.284 | 30.00th=[ 775], 40.00th=[ 840], 50.00th=[ 889], 60.00th=[41157], 00:40:03.284 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:40:03.284 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42730], 00:40:03.284 | 99.99th=[42730] 00:40:03.284 bw ( KiB/s): min= 704, max= 1024, per=49.36%, avg=796.80, stdev=79.00, samples=20 00:40:03.284 iops : min= 176, max= 256, avg=199.20, stdev=19.75, samples=20 00:40:03.284 lat (usec) : 750=21.14%, 1000=31.21% 00:40:03.284 lat (msec) : 2=0.15%, 50=47.49% 00:40:03.284 cpu : usr=94.73%, sys=4.90%, ctx=17, majf=0, minf=143 00:40:03.284 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:03.284 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:03.284 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:03.284 issued rwts: total=1996,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:03.284 latency : target=0, window=0, percentile=100.00%, depth=4 00:40:03.284 00:40:03.284 Run status group 0 (all jobs): 00:40:03.284 READ: bw=1613KiB/s (1651kB/s), 798KiB/s-816KiB/s (817kB/s-835kB/s), io=15.8MiB (16.5MB), run=10004-10021msec 00:40:03.284 18:50:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:40:03.284 18:50:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:40:03.284 18:50:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:40:03.284 18:50:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:40:03.284 18:50:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:40:03.284 18:50:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:03.284 18:50:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:03.284 18:50:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:03.284 18:50:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:03.284 18:50:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:40:03.284 18:50:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:03.285 18:50:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:03.285 18:50:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:03.285 18:50:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:40:03.285 18:50:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:40:03.285 18:50:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:40:03.285 18:50:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:03.285 18:50:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:03.285 18:50:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:03.285 18:50:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:03.285 18:50:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:40:03.285 18:50:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:03.285 18:50:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:03.285 18:50:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:03.285 00:40:03.285 real 0m11.763s 00:40:03.285 user 0m20.556s 00:40:03.285 sys 0m1.420s 00:40:03.285 18:50:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:03.285 18:50:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:03.285 ************************************ 00:40:03.285 END TEST fio_dif_1_multi_subsystems 00:40:03.285 ************************************ 00:40:03.285 18:50:30 
nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:40:03.285 18:50:30 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:40:03.285 18:50:30 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:03.285 18:50:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:03.285 ************************************ 00:40:03.285 START TEST fio_dif_rand_params 00:40:03.285 ************************************ 00:40:03.285 18:50:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:40:03.285 18:50:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:40:03.285 18:50:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:40:03.285 18:50:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:40:03.285 18:50:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:40:03.285 18:50:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:40:03.285 18:50:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:40:03.285 18:50:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:40:03.285 18:50:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:40:03.285 18:50:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:40:03.285 18:50:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:40:03.285 18:50:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:40:03.285 18:50:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:40:03.285 18:50:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:40:03.285 18:50:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:03.285 18:50:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:03.285 bdev_null0 00:40:03.285 18:50:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:03.285 18:50:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:03.285 18:50:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:03.285 18:50:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:03.285 18:50:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:03.285 18:50:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:03.285 18:50:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:03.285 18:50:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:03.285 18:50:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:03.285 18:50:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:03.285 18:50:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:03.285 18:50:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:03.285 [2024-10-08 18:50:30.257863] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 10.0.0.2 port 4420 *** 00:40:03.285 18:50:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:03.285 18:50:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:40:03.285 18:50:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:40:03.285 18:50:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:40:03.285 18:50:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:40:03.285 18:50:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:40:03.285 18:50:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:40:03.285 18:50:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:03.285 18:50:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:40:03.285 { 00:40:03.285 "params": { 00:40:03.285 "name": "Nvme$subsystem", 00:40:03.285 "trtype": "$TEST_TRANSPORT", 00:40:03.285 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:03.285 "adrfam": "ipv4", 00:40:03.285 "trsvcid": "$NVMF_PORT", 00:40:03.285 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:03.285 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:03.285 "hdgst": ${hdgst:-false}, 00:40:03.285 "ddgst": ${ddgst:-false} 00:40:03.285 }, 00:40:03.285 "method": "bdev_nvme_attach_controller" 00:40:03.285 } 00:40:03.285 EOF 00:40:03.285 )") 00:40:03.285 18:50:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:03.285 18:50:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:40:03.285 18:50:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:40:03.285 18:50:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:03.285 18:50:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:40:03.285 18:50:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:40:03.285 18:50:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:03.285 18:50:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:40:03.285 18:50:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:40:03.285 18:50:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:40:03.285 18:50:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:40:03.285 18:50:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:40:03.285 18:50:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:03.285 18:50:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:40:03.285 18:50:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:40:03.285 18:50:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:40:03.285 18:50:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:40:03.285 18:50:30 nvmf_dif.fio_dif_rand_params 
-- nvmf/common.sh@582 -- # jq . 00:40:03.285 18:50:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:40:03.285 18:50:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:40:03.285 "params": { 00:40:03.285 "name": "Nvme0", 00:40:03.285 "trtype": "tcp", 00:40:03.285 "traddr": "10.0.0.2", 00:40:03.285 "adrfam": "ipv4", 00:40:03.285 "trsvcid": "4420", 00:40:03.285 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:03.285 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:03.285 "hdgst": false, 00:40:03.285 "ddgst": false 00:40:03.285 }, 00:40:03.285 "method": "bdev_nvme_attach_controller" 00:40:03.285 }' 00:40:03.285 18:50:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:40:03.285 18:50:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:40:03.285 18:50:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:40:03.285 18:50:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:03.285 18:50:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:40:03.285 18:50:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:40:03.285 18:50:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:40:03.285 18:50:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:40:03.285 18:50:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:03.285 18:50:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:03.285 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:40:03.285 ... 
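For orientation, the xtrace above reduces to a handful of SPDK RPCs followed by a fio run through the SPDK bdev ioengine. Below is a condensed sketch of that sequence for this single-subsystem, DIF type 3 pass, assuming a running SPDK target, that scripts/rpc.py stands in for the test helper rpc_cmd, and that the JSON config and fio job file the test pipes through /dev/fd/62 and /dev/fd/61 have instead been written out as bdev.json and dif.fio:

#!/usr/bin/env bash
# Condensed, hand-run equivalent of the create_subsystem + fio steps traced above.
set -e
RPC=scripts/rpc.py            # assumed path; rpc_cmd in the trace wraps this
NQN=nqn.2016-06.io.spdk:cnode0

# Null bdev with 16-byte metadata and DIF type 3 (arguments copied from the trace).
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3

# Subsystem, namespace and TCP listener, mirroring dif.sh's create_subsystem.
$RPC nvmf_create_subsystem "$NQN" --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns "$NQN" bdev_null0
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

# fio through the SPDK bdev plugin: bdev.json carries the bdev_nvme_attach_controller
# parameters printed by gen_nvmf_target_json above, dif.fio a job file with the
# bs=128k / numjobs=3 / iodepth=3 / runtime=5 settings set earlier in the trace.
LD_PRELOAD=./build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=bdev.json dif.fio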
00:40:03.285 fio-3.35 00:40:03.285 Starting 3 threads 00:40:08.552 00:40:08.552 filename0: (groupid=0, jobs=1): err= 0: pid=1412361: Tue Oct 8 18:50:36 2024 00:40:08.552 read: IOPS=161, BW=20.2MiB/s (21.2MB/s)(102MiB/5053msec) 00:40:08.552 slat (nsec): min=4915, max=88988, avg=23767.54, stdev=9736.72 00:40:08.552 clat (usec): min=6638, max=70126, avg=18494.96, stdev=9427.70 00:40:08.552 lat (usec): min=6654, max=70159, avg=18518.73, stdev=9429.29 00:40:08.552 clat percentiles (usec): 00:40:08.552 | 1.00th=[ 8717], 5.00th=[ 9765], 10.00th=[10421], 20.00th=[11338], 00:40:08.552 | 30.00th=[12125], 40.00th=[13042], 50.00th=[13960], 60.00th=[16319], 00:40:08.552 | 70.00th=[25035], 80.00th=[27395], 90.00th=[29230], 95.00th=[30540], 00:40:08.552 | 99.00th=[56361], 99.50th=[58459], 99.90th=[69731], 99.95th=[69731], 00:40:08.552 | 99.99th=[69731] 00:40:08.552 bw ( KiB/s): min=13056, max=30720, per=34.83%, avg=20812.80, stdev=7443.15, samples=10 00:40:08.552 iops : min= 102, max= 240, avg=162.60, stdev=58.15, samples=10 00:40:08.552 lat (msec) : 10=7.11%, 20=57.60%, 50=33.58%, 100=1.72% 00:40:08.552 cpu : usr=91.39%, sys=6.16%, ctx=279, majf=0, minf=85 00:40:08.552 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:08.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:08.552 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:08.552 issued rwts: total=816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:08.552 latency : target=0, window=0, percentile=100.00%, depth=3 00:40:08.552 filename0: (groupid=0, jobs=1): err= 0: pid=1412362: Tue Oct 8 18:50:36 2024 00:40:08.552 read: IOPS=157, BW=19.7MiB/s (20.7MB/s)(99.6MiB/5050msec) 00:40:08.552 slat (nsec): min=5262, max=67370, avg=24973.87, stdev=11661.03 00:40:08.552 clat (usec): min=5440, max=66761, avg=18923.91, stdev=9244.82 00:40:08.552 lat (usec): min=5449, max=66796, avg=18948.88, stdev=9253.07 00:40:08.552 clat percentiles (usec): 00:40:08.552 | 1.00th=[ 8586], 5.00th=[ 9896], 10.00th=[10421], 20.00th=[11338], 00:40:08.552 | 30.00th=[12518], 40.00th=[13435], 50.00th=[14746], 60.00th=[16188], 00:40:08.552 | 70.00th=[27132], 80.00th=[28705], 90.00th=[30278], 95.00th=[32375], 00:40:08.552 | 99.00th=[51643], 99.50th=[53216], 99.90th=[66847], 99.95th=[66847], 00:40:08.552 | 99.99th=[66847] 00:40:08.552 bw ( KiB/s): min=12288, max=33024, per=34.02%, avg=20330.70, stdev=7361.44, samples=10 00:40:08.552 iops : min= 96, max= 258, avg=158.80, stdev=57.51, samples=10 00:40:08.552 lat (msec) : 10=6.65%, 20=58.72%, 50=32.87%, 100=1.76% 00:40:08.552 cpu : usr=93.62%, sys=5.70%, ctx=9, majf=0, minf=82 00:40:08.552 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:08.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:08.552 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:08.552 issued rwts: total=797,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:08.552 latency : target=0, window=0, percentile=100.00%, depth=3 00:40:08.552 filename0: (groupid=0, jobs=1): err= 0: pid=1412363: Tue Oct 8 18:50:36 2024 00:40:08.552 read: IOPS=148, BW=18.6MiB/s (19.5MB/s)(93.2MiB/5010msec) 00:40:08.552 slat (nsec): min=4849, max=60003, avg=24193.18, stdev=11162.46 00:40:08.552 clat (usec): min=4825, max=45941, avg=20107.54, stdev=11021.07 00:40:08.552 lat (usec): min=4840, max=45957, avg=20131.73, stdev=11030.82 00:40:08.552 clat percentiles (usec): 00:40:08.552 | 1.00th=[ 5014], 5.00th=[ 8586], 10.00th=[10552], 
20.00th=[11863], 00:40:08.552 | 30.00th=[12911], 40.00th=[13829], 50.00th=[14746], 60.00th=[16450], 00:40:08.552 | 70.00th=[21103], 80.00th=[34866], 90.00th=[38011], 95.00th=[40109], 00:40:08.552 | 99.00th=[42730], 99.50th=[43254], 99.90th=[45876], 99.95th=[45876], 00:40:08.552 | 99.99th=[45876] 00:40:08.552 bw ( KiB/s): min= 9984, max=33859, per=31.84%, avg=19027.50, stdev=9014.84, samples=10 00:40:08.552 iops : min= 78, max= 264, avg=148.60, stdev=70.33, samples=10 00:40:08.552 lat (msec) : 10=8.18%, 20=60.46%, 50=31.37% 00:40:08.552 cpu : usr=94.21%, sys=5.13%, ctx=10, majf=0, minf=139 00:40:08.552 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:08.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:08.552 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:08.552 issued rwts: total=746,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:08.552 latency : target=0, window=0, percentile=100.00%, depth=3 00:40:08.552 00:40:08.552 Run status group 0 (all jobs): 00:40:08.552 READ: bw=58.4MiB/s (61.2MB/s), 18.6MiB/s-20.2MiB/s (19.5MB/s-21.2MB/s), io=295MiB (309MB), run=5010-5053msec 00:40:08.552 18:50:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:40:08.552 18:50:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:40:08.552 18:50:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:40:08.552 18:50:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:40:08.552 18:50:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:40:08.552 18:50:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:08.552 18:50:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:08.552 18:50:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:08.552 18:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:08.552 18:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:40:08.552 18:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:08.552 18:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:08.552 18:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:08.552 18:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:40:08.552 18:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:40:08.552 18:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:40:08.552 18:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:40:08.552 18:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:40:08.552 18:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:40:08.552 18:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:40:08.552 18:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:40:08.552 18:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:40:08.552 18:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:40:08.552 18:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:40:08.552 18:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:40:08.552 18:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:08.552 18:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:08.552 bdev_null0 00:40:08.552 18:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:08.552 18:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:08.552 18:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:08.552 18:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:08.552 18:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:08.552 18:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:08.552 18:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:08.552 18:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:08.552 18:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:08.552 18:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:08.552 18:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:08.552 18:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:08.552 [2024-10-08 18:50:37.067595] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:08.552 18:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:08.552 18:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:40:08.552 18:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:40:08.552 18:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:40:08.552 18:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:40:08.552 18:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:08.552 18:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:08.552 bdev_null1 00:40:08.552 18:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:08.552 18:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:40:08.552 18:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:08.552 18:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:08.811 18:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:08.811 18:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:40:08.811 18:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:08.811 18:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:08.811 18:50:37 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:08.811 18:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:08.811 18:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:08.811 18:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:08.811 18:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:08.811 18:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:40:08.811 18:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:40:08.811 18:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:40:08.811 18:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:40:08.811 18:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:08.811 18:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:08.811 bdev_null2 00:40:08.811 18:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:08.811 18:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:40:08.811 18:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:08.811 18:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:08.811 18:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:08.811 18:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:40:08.811 18:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:08.811 18:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:08.811 18:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:08.811 18:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:40:08.811 18:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:08.811 18:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:08.811 18:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:08.811 18:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:40:08.811 18:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:40:08.811 18:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:40:08.811 18:50:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:40:08.811 18:50:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:40:08.811 18:50:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:40:08.811 18:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:40:08.811 18:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:08.811 18:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- 
# local file 00:40:08.811 18:50:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:40:08.811 { 00:40:08.811 "params": { 00:40:08.811 "name": "Nvme$subsystem", 00:40:08.811 "trtype": "$TEST_TRANSPORT", 00:40:08.811 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:08.811 "adrfam": "ipv4", 00:40:08.811 "trsvcid": "$NVMF_PORT", 00:40:08.811 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:08.811 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:08.811 "hdgst": ${hdgst:-false}, 00:40:08.811 "ddgst": ${ddgst:-false} 00:40:08.811 }, 00:40:08.811 "method": "bdev_nvme_attach_controller" 00:40:08.811 } 00:40:08.811 EOF 00:40:08.811 )") 00:40:08.811 18:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:08.811 18:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:40:08.811 18:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:40:08.812 18:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:08.812 18:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:40:08.812 18:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:08.812 18:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:40:08.812 18:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:40:08.812 18:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:40:08.812 18:50:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:40:08.812 18:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:40:08.812 18:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:40:08.812 18:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:40:08.812 18:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:40:08.812 18:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:08.812 18:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:40:08.812 18:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:40:08.812 18:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:40:08.812 18:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:40:08.812 18:50:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:40:08.812 18:50:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:40:08.812 { 00:40:08.812 "params": { 00:40:08.812 "name": "Nvme$subsystem", 00:40:08.812 "trtype": "$TEST_TRANSPORT", 00:40:08.812 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:08.812 "adrfam": "ipv4", 00:40:08.812 "trsvcid": "$NVMF_PORT", 00:40:08.812 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:08.812 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:08.812 "hdgst": ${hdgst:-false}, 00:40:08.812 "ddgst": ${ddgst:-false} 00:40:08.812 }, 00:40:08.812 "method": "bdev_nvme_attach_controller" 00:40:08.812 
} 00:40:08.812 EOF 00:40:08.812 )") 00:40:08.812 18:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:40:08.812 18:50:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:40:08.812 18:50:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:40:08.812 18:50:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:40:08.812 18:50:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:40:08.812 { 00:40:08.812 "params": { 00:40:08.812 "name": "Nvme$subsystem", 00:40:08.812 "trtype": "$TEST_TRANSPORT", 00:40:08.812 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:08.812 "adrfam": "ipv4", 00:40:08.812 "trsvcid": "$NVMF_PORT", 00:40:08.812 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:08.812 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:08.812 "hdgst": ${hdgst:-false}, 00:40:08.812 "ddgst": ${ddgst:-false} 00:40:08.812 }, 00:40:08.812 "method": "bdev_nvme_attach_controller" 00:40:08.812 } 00:40:08.812 EOF 00:40:08.812 )") 00:40:08.812 18:50:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:40:08.812 18:50:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 00:40:08.812 18:50:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:40:08.812 18:50:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:40:08.812 "params": { 00:40:08.812 "name": "Nvme0", 00:40:08.812 "trtype": "tcp", 00:40:08.812 "traddr": "10.0.0.2", 00:40:08.812 "adrfam": "ipv4", 00:40:08.812 "trsvcid": "4420", 00:40:08.812 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:08.812 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:08.812 "hdgst": false, 00:40:08.812 "ddgst": false 00:40:08.812 }, 00:40:08.812 "method": "bdev_nvme_attach_controller" 00:40:08.812 },{ 00:40:08.812 "params": { 00:40:08.812 "name": "Nvme1", 00:40:08.812 "trtype": "tcp", 00:40:08.812 "traddr": "10.0.0.2", 00:40:08.812 "adrfam": "ipv4", 00:40:08.812 "trsvcid": "4420", 00:40:08.812 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:08.812 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:08.812 "hdgst": false, 00:40:08.812 "ddgst": false 00:40:08.812 }, 00:40:08.812 "method": "bdev_nvme_attach_controller" 00:40:08.812 },{ 00:40:08.812 "params": { 00:40:08.812 "name": "Nvme2", 00:40:08.812 "trtype": "tcp", 00:40:08.812 "traddr": "10.0.0.2", 00:40:08.812 "adrfam": "ipv4", 00:40:08.812 "trsvcid": "4420", 00:40:08.812 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:40:08.812 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:40:08.812 "hdgst": false, 00:40:08.812 "ddgst": false 00:40:08.812 }, 00:40:08.812 "method": "bdev_nvme_attach_controller" 00:40:08.812 }' 00:40:08.812 18:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:40:08.812 18:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:40:08.812 18:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:40:08.812 18:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:08.812 18:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:40:08.812 18:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:40:08.812 18:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:40:08.812 18:50:37 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:40:08.812 18:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:08.812 18:50:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:09.071 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:40:09.071 ... 00:40:09.071 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:40:09.071 ... 00:40:09.071 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:40:09.071 ... 00:40:09.071 fio-3.35 00:40:09.071 Starting 24 threads 00:40:21.288 00:40:21.288 filename0: (groupid=0, jobs=1): err= 0: pid=1413352: Tue Oct 8 18:50:49 2024 00:40:21.288 read: IOPS=458, BW=1834KiB/s (1878kB/s)(17.9MiB/10017msec) 00:40:21.288 slat (usec): min=8, max=118, avg=32.68, stdev=11.20 00:40:21.288 clat (usec): min=23916, max=44485, avg=34614.44, stdev=3012.27 00:40:21.288 lat (usec): min=23937, max=44512, avg=34647.13, stdev=3012.47 00:40:21.288 clat percentiles (usec): 00:40:21.288 | 1.00th=[30016], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:40:21.288 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:40:21.288 | 70.00th=[33817], 80.00th=[34866], 90.00th=[38011], 95.00th=[42730], 00:40:21.288 | 99.00th=[43779], 99.50th=[44303], 99.90th=[44303], 99.95th=[44303], 00:40:21.288 | 99.99th=[44303] 00:40:21.288 bw ( KiB/s): min= 1408, max= 1920, per=4.17%, avg=1830.40, stdev=144.46, samples=20 00:40:21.288 iops : min= 352, max= 480, avg=457.60, stdev=36.11, samples=20 00:40:21.288 lat (msec) : 50=100.00% 00:40:21.288 cpu : usr=97.28%, sys=1.79%, ctx=242, majf=0, minf=32 00:40:21.288 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:40:21.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:21.288 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:21.288 issued rwts: total=4592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:21.288 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:21.288 filename0: (groupid=0, jobs=1): err= 0: pid=1413353: Tue Oct 8 18:50:49 2024 00:40:21.288 read: IOPS=459, BW=1836KiB/s (1880kB/s)(17.9MiB/10004msec) 00:40:21.288 slat (usec): min=4, max=111, avg=22.16, stdev=13.03 00:40:21.288 clat (usec): min=8966, max=43992, avg=34666.65, stdev=3242.63 00:40:21.288 lat (usec): min=8984, max=44015, avg=34688.81, stdev=3241.15 00:40:21.288 clat percentiles (usec): 00:40:21.288 | 1.00th=[32375], 5.00th=[33162], 10.00th=[33162], 20.00th=[33424], 00:40:21.288 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:40:21.288 | 70.00th=[33817], 80.00th=[34866], 90.00th=[39060], 95.00th=[43254], 00:40:21.288 | 99.00th=[43254], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:40:21.288 | 99.99th=[43779] 00:40:21.288 bw ( KiB/s): min= 1408, max= 1920, per=4.17%, avg=1832.42, stdev=141.85, samples=19 00:40:21.288 iops : min= 352, max= 480, avg=458.11, stdev=35.46, samples=19 00:40:21.288 lat (msec) : 10=0.04%, 20=0.30%, 50=99.65% 00:40:21.288 cpu : usr=98.24%, sys=1.26%, ctx=45, majf=0, minf=28 00:40:21.288 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 
00:40:21.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:21.288 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:21.288 issued rwts: total=4592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:21.288 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:21.288 filename0: (groupid=0, jobs=1): err= 0: pid=1413354: Tue Oct 8 18:50:49 2024 00:40:21.288 read: IOPS=458, BW=1835KiB/s (1879kB/s)(17.9MiB/10011msec) 00:40:21.288 slat (nsec): min=4657, max=58120, avg=29983.99, stdev=7317.63 00:40:21.288 clat (usec): min=12907, max=52487, avg=34604.77, stdev=3369.73 00:40:21.288 lat (usec): min=12915, max=52505, avg=34634.76, stdev=3369.44 00:40:21.288 clat percentiles (usec): 00:40:21.288 | 1.00th=[30016], 5.00th=[33162], 10.00th=[33162], 20.00th=[33162], 00:40:21.288 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:40:21.288 | 70.00th=[33817], 80.00th=[34866], 90.00th=[38011], 95.00th=[42730], 00:40:21.288 | 99.00th=[43779], 99.50th=[44303], 99.90th=[52691], 99.95th=[52691], 00:40:21.288 | 99.99th=[52691] 00:40:21.288 bw ( KiB/s): min= 1408, max= 1920, per=4.15%, avg=1825.68, stdev=158.74, samples=19 00:40:21.288 iops : min= 352, max= 480, avg=456.42, stdev=39.69, samples=19 00:40:21.288 lat (msec) : 20=0.35%, 50=99.30%, 100=0.35% 00:40:21.288 cpu : usr=98.35%, sys=1.21%, ctx=14, majf=0, minf=14 00:40:21.288 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:40:21.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:21.288 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:21.288 issued rwts: total=4592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:21.288 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:21.288 filename0: (groupid=0, jobs=1): err= 0: pid=1413355: Tue Oct 8 18:50:49 2024 00:40:21.288 read: IOPS=457, BW=1830KiB/s (1874kB/s)(17.9MiB/10002msec) 00:40:21.288 slat (nsec): min=4521, max=66597, avg=33698.65, stdev=9297.60 00:40:21.288 clat (usec): min=25905, max=50455, avg=34682.35, stdev=3044.39 00:40:21.288 lat (usec): min=25942, max=50468, avg=34716.05, stdev=3042.59 00:40:21.288 clat percentiles (usec): 00:40:21.288 | 1.00th=[32900], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:40:21.288 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:40:21.288 | 70.00th=[33817], 80.00th=[34866], 90.00th=[39584], 95.00th=[42730], 00:40:21.288 | 99.00th=[43254], 99.50th=[43254], 99.90th=[50594], 99.95th=[50594], 00:40:21.288 | 99.99th=[50594] 00:40:21.288 bw ( KiB/s): min= 1408, max= 1920, per=4.15%, avg=1825.68, stdev=158.74, samples=19 00:40:21.288 iops : min= 352, max= 480, avg=456.42, stdev=39.69, samples=19 00:40:21.288 lat (msec) : 50=99.65%, 100=0.35% 00:40:21.288 cpu : usr=98.38%, sys=1.20%, ctx=14, majf=0, minf=16 00:40:21.288 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:40:21.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:21.288 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:21.288 issued rwts: total=4576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:21.288 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:21.288 filename0: (groupid=0, jobs=1): err= 0: pid=1413356: Tue Oct 8 18:50:49 2024 00:40:21.288 read: IOPS=457, BW=1830KiB/s (1874kB/s)(17.9MiB/10003msec) 00:40:21.288 slat (nsec): min=8610, max=64131, avg=16985.14, stdev=7595.66 00:40:21.288 clat (usec): min=28643, 
max=49302, avg=34835.34, stdev=2984.66 00:40:21.288 lat (usec): min=28653, max=49339, avg=34852.33, stdev=2983.77 00:40:21.288 clat percentiles (usec): 00:40:21.288 | 1.00th=[33162], 5.00th=[33162], 10.00th=[33424], 20.00th=[33424], 00:40:21.288 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:40:21.288 | 70.00th=[33817], 80.00th=[34866], 90.00th=[39584], 95.00th=[43254], 00:40:21.288 | 99.00th=[43254], 99.50th=[43254], 99.90th=[49021], 99.95th=[49021], 00:40:21.288 | 99.99th=[49546] 00:40:21.288 bw ( KiB/s): min= 1408, max= 1920, per=4.15%, avg=1825.84, stdev=146.64, samples=19 00:40:21.288 iops : min= 352, max= 480, avg=456.42, stdev=36.71, samples=19 00:40:21.288 lat (msec) : 50=100.00% 00:40:21.288 cpu : usr=98.25%, sys=1.34%, ctx=12, majf=0, minf=23 00:40:21.288 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:40:21.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:21.288 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:21.288 issued rwts: total=4576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:21.288 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:21.288 filename0: (groupid=0, jobs=1): err= 0: pid=1413357: Tue Oct 8 18:50:49 2024 00:40:21.288 read: IOPS=457, BW=1830KiB/s (1874kB/s)(17.9MiB/10004msec) 00:40:21.288 slat (usec): min=5, max=115, avg=38.84, stdev=16.77 00:40:21.288 clat (usec): min=13804, max=75064, avg=34628.53, stdev=3872.12 00:40:21.288 lat (usec): min=13860, max=75082, avg=34667.38, stdev=3876.84 00:40:21.288 clat percentiles (usec): 00:40:21.288 | 1.00th=[32375], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:40:21.288 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:40:21.288 | 70.00th=[33817], 80.00th=[34341], 90.00th=[39584], 95.00th=[42730], 00:40:21.288 | 99.00th=[43254], 99.50th=[43254], 99.90th=[74974], 99.95th=[74974], 00:40:21.288 | 99.99th=[74974] 00:40:21.288 bw ( KiB/s): min= 1408, max= 1920, per=4.14%, avg=1818.95, stdev=157.23, samples=19 00:40:21.288 iops : min= 352, max= 480, avg=454.74, stdev=39.31, samples=19 00:40:21.288 lat (msec) : 20=0.35%, 50=99.30%, 100=0.35% 00:40:21.288 cpu : usr=98.16%, sys=1.26%, ctx=70, majf=0, minf=14 00:40:21.288 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:40:21.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:21.288 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:21.288 issued rwts: total=4576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:21.288 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:21.288 filename0: (groupid=0, jobs=1): err= 0: pid=1413358: Tue Oct 8 18:50:49 2024 00:40:21.288 read: IOPS=458, BW=1833KiB/s (1878kB/s)(17.9MiB/10018msec) 00:40:21.288 slat (usec): min=9, max=105, avg=30.51, stdev= 9.40 00:40:21.288 clat (usec): min=22241, max=50046, avg=34654.33, stdev=3042.58 00:40:21.288 lat (usec): min=22255, max=50064, avg=34684.84, stdev=3040.73 00:40:21.288 clat percentiles (usec): 00:40:21.288 | 1.00th=[29754], 5.00th=[33162], 10.00th=[33162], 20.00th=[33162], 00:40:21.288 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:40:21.288 | 70.00th=[33817], 80.00th=[34866], 90.00th=[38011], 95.00th=[43254], 00:40:21.288 | 99.00th=[43254], 99.50th=[43779], 99.90th=[44303], 99.95th=[44303], 00:40:21.288 | 99.99th=[50070] 00:40:21.288 bw ( KiB/s): min= 1408, max= 1920, per=4.17%, avg=1830.40, stdev=144.46, samples=20 
00:40:21.288 iops : min= 352, max= 480, avg=457.60, stdev=36.11, samples=20 00:40:21.288 lat (msec) : 50=99.98%, 100=0.02% 00:40:21.288 cpu : usr=98.33%, sys=1.25%, ctx=17, majf=0, minf=31 00:40:21.288 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:40:21.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:21.288 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:21.288 issued rwts: total=4592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:21.288 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:21.288 filename0: (groupid=0, jobs=1): err= 0: pid=1413359: Tue Oct 8 18:50:49 2024 00:40:21.288 read: IOPS=457, BW=1830KiB/s (1874kB/s)(17.9MiB/10004msec) 00:40:21.288 slat (nsec): min=4670, max=64126, avg=33103.88, stdev=10354.12 00:40:21.288 clat (usec): min=25410, max=52613, avg=34706.77, stdev=3120.94 00:40:21.288 lat (usec): min=25423, max=52633, avg=34739.88, stdev=3118.48 00:40:21.288 clat percentiles (usec): 00:40:21.288 | 1.00th=[32900], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:40:21.289 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:40:21.289 | 70.00th=[33817], 80.00th=[34866], 90.00th=[39584], 95.00th=[42730], 00:40:21.289 | 99.00th=[43254], 99.50th=[43254], 99.90th=[52691], 99.95th=[52691], 00:40:21.289 | 99.99th=[52691] 00:40:21.289 bw ( KiB/s): min= 1408, max= 1920, per=4.14%, avg=1818.95, stdev=157.23, samples=19 00:40:21.289 iops : min= 352, max= 480, avg=454.74, stdev=39.31, samples=19 00:40:21.289 lat (msec) : 50=99.65%, 100=0.35% 00:40:21.289 cpu : usr=98.33%, sys=1.25%, ctx=14, majf=0, minf=20 00:40:21.289 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:40:21.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:21.289 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:21.289 issued rwts: total=4576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:21.289 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:21.289 filename1: (groupid=0, jobs=1): err= 0: pid=1413360: Tue Oct 8 18:50:49 2024 00:40:21.289 read: IOPS=459, BW=1837KiB/s (1881kB/s)(17.9MiB/10001msec) 00:40:21.289 slat (usec): min=9, max=110, avg=26.02, stdev= 9.01 00:40:21.289 clat (usec): min=11213, max=44688, avg=34617.69, stdev=3240.93 00:40:21.289 lat (usec): min=11232, max=44722, avg=34643.70, stdev=3239.17 00:40:21.289 clat percentiles (usec): 00:40:21.289 | 1.00th=[32900], 5.00th=[33162], 10.00th=[33162], 20.00th=[33162], 00:40:21.289 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:40:21.289 | 70.00th=[33817], 80.00th=[34341], 90.00th=[38011], 95.00th=[43254], 00:40:21.289 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43779], 99.95th=[43779], 00:40:21.289 | 99.99th=[44827] 00:40:21.289 bw ( KiB/s): min= 1408, max= 1920, per=4.17%, avg=1832.42, stdev=141.85, samples=19 00:40:21.289 iops : min= 352, max= 480, avg=458.11, stdev=35.46, samples=19 00:40:21.289 lat (msec) : 20=0.39%, 50=99.61% 00:40:21.289 cpu : usr=96.53%, sys=2.27%, ctx=194, majf=0, minf=37 00:40:21.289 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:40:21.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:21.289 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:21.289 issued rwts: total=4592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:21.289 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:21.289 
filename1: (groupid=0, jobs=1): err= 0: pid=1413361: Tue Oct 8 18:50:49 2024 00:40:21.289 read: IOPS=458, BW=1834KiB/s (1878kB/s)(17.9MiB/10017msec) 00:40:21.289 slat (usec): min=9, max=124, avg=34.63, stdev=12.37 00:40:21.289 clat (usec): min=20294, max=60015, avg=34597.55, stdev=3566.44 00:40:21.289 lat (usec): min=20305, max=60048, avg=34632.19, stdev=3567.71 00:40:21.289 clat percentiles (usec): 00:40:21.289 | 1.00th=[23987], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:40:21.289 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:40:21.289 | 70.00th=[33817], 80.00th=[34866], 90.00th=[42730], 95.00th=[42730], 00:40:21.289 | 99.00th=[45876], 99.50th=[46400], 99.90th=[49546], 99.95th=[60031], 00:40:21.289 | 99.99th=[60031] 00:40:21.289 bw ( KiB/s): min= 1408, max= 1920, per=4.17%, avg=1830.40, stdev=144.46, samples=20 00:40:21.289 iops : min= 352, max= 480, avg=457.60, stdev=36.11, samples=20 00:40:21.289 lat (msec) : 50=99.91%, 100=0.09% 00:40:21.289 cpu : usr=97.94%, sys=1.60%, ctx=31, majf=0, minf=28 00:40:21.289 IO depths : 1=5.7%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.8%, 32=0.0%, >=64=0.0% 00:40:21.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:21.289 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:21.289 issued rwts: total=4592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:21.289 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:21.289 filename1: (groupid=0, jobs=1): err= 0: pid=1413362: Tue Oct 8 18:50:49 2024 00:40:21.289 read: IOPS=458, BW=1836KiB/s (1880kB/s)(17.9MiB/10005msec) 00:40:21.289 slat (usec): min=6, max=108, avg=24.14, stdev= 9.46 00:40:21.289 clat (usec): min=18287, max=43466, avg=34651.16, stdev=3035.78 00:40:21.289 lat (usec): min=18298, max=43489, avg=34675.30, stdev=3034.34 00:40:21.289 clat percentiles (usec): 00:40:21.289 | 1.00th=[32900], 5.00th=[33162], 10.00th=[33162], 20.00th=[33424], 00:40:21.289 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:40:21.289 | 70.00th=[33817], 80.00th=[34866], 90.00th=[38536], 95.00th=[42730], 00:40:21.289 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:40:21.289 | 99.99th=[43254] 00:40:21.289 bw ( KiB/s): min= 1408, max= 1920, per=4.17%, avg=1832.42, stdev=141.85, samples=19 00:40:21.289 iops : min= 352, max= 480, avg=458.11, stdev=35.46, samples=19 00:40:21.289 lat (msec) : 20=0.39%, 50=99.61% 00:40:21.289 cpu : usr=97.30%, sys=1.87%, ctx=111, majf=0, minf=30 00:40:21.289 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:40:21.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:21.289 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:21.289 issued rwts: total=4592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:21.289 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:21.289 filename1: (groupid=0, jobs=1): err= 0: pid=1413363: Tue Oct 8 18:50:49 2024 00:40:21.289 read: IOPS=458, BW=1834KiB/s (1878kB/s)(17.9MiB/10017msec) 00:40:21.289 slat (usec): min=11, max=114, avg=37.04, stdev=17.38 00:40:21.289 clat (usec): min=23973, max=44269, avg=34574.90, stdev=2915.23 00:40:21.289 lat (usec): min=24006, max=44289, avg=34611.94, stdev=2924.23 00:40:21.289 clat percentiles (usec): 00:40:21.289 | 1.00th=[30016], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:40:21.289 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:40:21.289 | 70.00th=[33817], 80.00th=[34866], 
90.00th=[38011], 95.00th=[42730], 00:40:21.289 | 99.00th=[43254], 99.50th=[43779], 99.90th=[44303], 99.95th=[44303], 00:40:21.289 | 99.99th=[44303] 00:40:21.289 bw ( KiB/s): min= 1408, max= 1920, per=4.17%, avg=1830.40, stdev=144.46, samples=20 00:40:21.289 iops : min= 352, max= 480, avg=457.60, stdev=36.11, samples=20 00:40:21.289 lat (msec) : 50=100.00% 00:40:21.289 cpu : usr=97.61%, sys=1.61%, ctx=124, majf=0, minf=21 00:40:21.289 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:40:21.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:21.289 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:21.289 issued rwts: total=4592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:21.289 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:21.289 filename1: (groupid=0, jobs=1): err= 0: pid=1413364: Tue Oct 8 18:50:49 2024 00:40:21.289 read: IOPS=458, BW=1833KiB/s (1877kB/s)(17.9MiB/10015msec) 00:40:21.289 slat (usec): min=4, max=119, avg=53.02, stdev=28.16 00:40:21.289 clat (usec): min=20393, max=56996, avg=34448.65, stdev=3409.97 00:40:21.289 lat (usec): min=20415, max=57037, avg=34501.67, stdev=3402.20 00:40:21.289 clat percentiles (usec): 00:40:21.289 | 1.00th=[23462], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:40:21.289 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:40:21.289 | 70.00th=[33817], 80.00th=[34341], 90.00th=[40109], 95.00th=[42730], 00:40:21.289 | 99.00th=[43254], 99.50th=[44303], 99.90th=[56886], 99.95th=[56886], 00:40:21.289 | 99.99th=[56886] 00:40:21.289 bw ( KiB/s): min= 1408, max= 1920, per=4.15%, avg=1824.84, stdev=146.30, samples=19 00:40:21.289 iops : min= 352, max= 480, avg=456.21, stdev=36.58, samples=19 00:40:21.289 lat (msec) : 50=99.87%, 100=0.13% 00:40:21.289 cpu : usr=98.44%, sys=1.10%, ctx=13, majf=0, minf=34 00:40:21.289 IO depths : 1=4.4%, 2=10.7%, 4=25.0%, 8=51.8%, 16=8.0%, 32=0.0%, >=64=0.0% 00:40:21.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:21.289 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:21.289 issued rwts: total=4590,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:21.289 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:21.289 filename1: (groupid=0, jobs=1): err= 0: pid=1413365: Tue Oct 8 18:50:49 2024 00:40:21.289 read: IOPS=458, BW=1835KiB/s (1879kB/s)(17.9MiB/10012msec) 00:40:21.289 slat (usec): min=4, max=112, avg=31.63, stdev=11.69 00:40:21.289 clat (usec): min=12938, max=54199, avg=34587.96, stdev=3412.49 00:40:21.289 lat (usec): min=12962, max=54213, avg=34619.59, stdev=3411.58 00:40:21.289 clat percentiles (usec): 00:40:21.289 | 1.00th=[30016], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:40:21.289 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:40:21.289 | 70.00th=[33817], 80.00th=[34866], 90.00th=[38011], 95.00th=[42730], 00:40:21.289 | 99.00th=[43779], 99.50th=[44303], 99.90th=[54264], 99.95th=[54264], 00:40:21.289 | 99.99th=[54264] 00:40:21.289 bw ( KiB/s): min= 1536, max= 1920, per=4.14%, avg=1818.95, stdev=138.77, samples=19 00:40:21.289 iops : min= 384, max= 480, avg=454.74, stdev=34.69, samples=19 00:40:21.289 lat (msec) : 20=0.35%, 50=99.30%, 100=0.35% 00:40:21.289 cpu : usr=97.30%, sys=1.81%, ctx=163, majf=0, minf=28 00:40:21.289 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:40:21.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:40:21.289 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:21.289 issued rwts: total=4592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:21.289 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:21.289 filename1: (groupid=0, jobs=1): err= 0: pid=1413366: Tue Oct 8 18:50:49 2024 00:40:21.289 read: IOPS=457, BW=1829KiB/s (1873kB/s)(17.9MiB/10005msec) 00:40:21.289 slat (nsec): min=6693, max=73276, avg=33141.75, stdev=8629.83 00:40:21.289 clat (usec): min=14911, max=84563, avg=34683.60, stdev=3972.23 00:40:21.289 lat (usec): min=14928, max=84579, avg=34716.74, stdev=3971.12 00:40:21.289 clat percentiles (usec): 00:40:21.289 | 1.00th=[32900], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:40:21.289 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:40:21.289 | 70.00th=[33817], 80.00th=[34866], 90.00th=[39584], 95.00th=[42730], 00:40:21.289 | 99.00th=[43254], 99.50th=[43254], 99.90th=[74974], 99.95th=[74974], 00:40:21.289 | 99.99th=[84411] 00:40:21.289 bw ( KiB/s): min= 1408, max= 1920, per=4.14%, avg=1818.95, stdev=157.23, samples=19 00:40:21.289 iops : min= 352, max= 480, avg=454.74, stdev=39.31, samples=19 00:40:21.289 lat (msec) : 20=0.35%, 50=99.30%, 100=0.35% 00:40:21.289 cpu : usr=96.75%, sys=2.12%, ctx=161, majf=0, minf=27 00:40:21.289 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:40:21.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:21.289 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:21.289 issued rwts: total=4576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:21.289 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:21.289 filename1: (groupid=0, jobs=1): err= 0: pid=1413367: Tue Oct 8 18:50:49 2024 00:40:21.289 read: IOPS=457, BW=1829KiB/s (1873kB/s)(17.9MiB/10005msec) 00:40:21.289 slat (nsec): min=5698, max=64351, avg=32973.11, stdev=8773.32 00:40:21.289 clat (usec): min=14949, max=75148, avg=34677.01, stdev=3922.40 00:40:21.289 lat (usec): min=14965, max=75164, avg=34709.98, stdev=3921.54 00:40:21.289 clat percentiles (usec): 00:40:21.289 | 1.00th=[32900], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:40:21.289 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:40:21.290 | 70.00th=[33817], 80.00th=[34341], 90.00th=[39584], 95.00th=[42730], 00:40:21.290 | 99.00th=[43254], 99.50th=[43254], 99.90th=[74974], 99.95th=[74974], 00:40:21.290 | 99.99th=[74974] 00:40:21.290 bw ( KiB/s): min= 1408, max= 1920, per=4.14%, avg=1818.95, stdev=157.23, samples=19 00:40:21.290 iops : min= 352, max= 480, avg=454.74, stdev=39.31, samples=19 00:40:21.290 lat (msec) : 20=0.35%, 50=99.30%, 100=0.35% 00:40:21.290 cpu : usr=96.52%, sys=2.38%, ctx=140, majf=0, minf=29 00:40:21.290 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:40:21.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:21.290 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:21.290 issued rwts: total=4576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:21.290 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:21.290 filename2: (groupid=0, jobs=1): err= 0: pid=1413368: Tue Oct 8 18:50:49 2024 00:40:21.290 read: IOPS=456, BW=1828KiB/s (1872kB/s)(17.9MiB/10014msec) 00:40:21.290 slat (nsec): min=8831, max=83668, avg=28655.25, stdev=9312.93 00:40:21.290 clat (usec): min=22364, max=56900, avg=34739.18, stdev=3367.72 00:40:21.290 lat (usec): 
min=22448, max=56922, avg=34767.84, stdev=3367.82 00:40:21.290 clat percentiles (usec): 00:40:21.290 | 1.00th=[27395], 5.00th=[33162], 10.00th=[33162], 20.00th=[33162], 00:40:21.290 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:40:21.290 | 70.00th=[33817], 80.00th=[34866], 90.00th=[42730], 95.00th=[42730], 00:40:21.290 | 99.00th=[43779], 99.50th=[45351], 99.90th=[50594], 99.95th=[56886], 00:40:21.290 | 99.99th=[56886] 00:40:21.290 bw ( KiB/s): min= 1408, max= 1920, per=4.16%, avg=1829.60, stdev=154.89, samples=20 00:40:21.290 iops : min= 352, max= 480, avg=457.40, stdev=38.72, samples=20 00:40:21.290 lat (msec) : 50=99.56%, 100=0.44% 00:40:21.290 cpu : usr=98.38%, sys=1.16%, ctx=22, majf=0, minf=35 00:40:21.290 IO depths : 1=4.4%, 2=10.7%, 4=25.0%, 8=51.8%, 16=8.1%, 32=0.0%, >=64=0.0% 00:40:21.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:21.290 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:21.290 issued rwts: total=4576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:21.290 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:21.290 filename2: (groupid=0, jobs=1): err= 0: pid=1413369: Tue Oct 8 18:50:49 2024 00:40:21.290 read: IOPS=458, BW=1835KiB/s (1879kB/s)(17.9MiB/10008msec) 00:40:21.290 slat (nsec): min=4584, max=57100, avg=23892.90, stdev=11374.86 00:40:21.290 clat (usec): min=21963, max=44279, avg=34684.55, stdev=3011.23 00:40:21.290 lat (usec): min=21976, max=44303, avg=34708.44, stdev=3008.58 00:40:21.290 clat percentiles (usec): 00:40:21.290 | 1.00th=[30016], 5.00th=[33162], 10.00th=[33162], 20.00th=[33424], 00:40:21.290 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:40:21.290 | 70.00th=[33817], 80.00th=[34866], 90.00th=[38011], 95.00th=[43254], 00:40:21.290 | 99.00th=[43254], 99.50th=[43779], 99.90th=[44303], 99.95th=[44303], 00:40:21.290 | 99.99th=[44303] 00:40:21.290 bw ( KiB/s): min= 1408, max= 1920, per=4.15%, avg=1825.84, stdev=146.64, samples=19 00:40:21.290 iops : min= 352, max= 480, avg=456.42, stdev=36.71, samples=19 00:40:21.290 lat (msec) : 50=100.00% 00:40:21.290 cpu : usr=97.00%, sys=1.84%, ctx=181, majf=0, minf=20 00:40:21.290 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:40:21.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:21.290 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:21.290 issued rwts: total=4592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:21.290 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:21.290 filename2: (groupid=0, jobs=1): err= 0: pid=1413370: Tue Oct 8 18:50:49 2024 00:40:21.290 read: IOPS=458, BW=1833KiB/s (1877kB/s)(17.9MiB/10019msec) 00:40:21.290 slat (usec): min=4, max=107, avg=26.85, stdev= 9.20 00:40:21.290 clat (usec): min=20392, max=56033, avg=34648.25, stdev=3073.99 00:40:21.290 lat (usec): min=20402, max=56050, avg=34675.09, stdev=3074.42 00:40:21.290 clat percentiles (usec): 00:40:21.290 | 1.00th=[32900], 5.00th=[33162], 10.00th=[33162], 20.00th=[33162], 00:40:21.290 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:40:21.290 | 70.00th=[33817], 80.00th=[34341], 90.00th=[39060], 95.00th=[42730], 00:40:21.290 | 99.00th=[43254], 99.50th=[43779], 99.90th=[44303], 99.95th=[44303], 00:40:21.290 | 99.99th=[55837] 00:40:21.290 bw ( KiB/s): min= 1408, max= 1920, per=4.15%, avg=1825.68, stdev=146.83, samples=19 00:40:21.290 iops : min= 352, max= 480, avg=456.42, stdev=36.71, 
samples=19 00:40:21.290 lat (msec) : 50=99.96%, 100=0.04% 00:40:21.290 cpu : usr=97.16%, sys=1.67%, ctx=187, majf=0, minf=29 00:40:21.290 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:40:21.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:21.290 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:21.290 issued rwts: total=4592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:21.290 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:21.290 filename2: (groupid=0, jobs=1): err= 0: pid=1413371: Tue Oct 8 18:50:49 2024 00:40:21.290 read: IOPS=459, BW=1837KiB/s (1881kB/s)(17.9MiB/10001msec) 00:40:21.290 slat (nsec): min=8956, max=95939, avg=28811.41, stdev=8548.77 00:40:21.290 clat (usec): min=11281, max=44767, avg=34587.93, stdev=3216.45 00:40:21.290 lat (usec): min=11300, max=44802, avg=34616.74, stdev=3215.76 00:40:21.290 clat percentiles (usec): 00:40:21.290 | 1.00th=[32900], 5.00th=[33162], 10.00th=[33162], 20.00th=[33162], 00:40:21.290 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:40:21.290 | 70.00th=[33817], 80.00th=[34341], 90.00th=[38011], 95.00th=[42730], 00:40:21.290 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43779], 99.95th=[43779], 00:40:21.290 | 99.99th=[44827] 00:40:21.290 bw ( KiB/s): min= 1408, max= 1920, per=4.17%, avg=1832.42, stdev=141.85, samples=19 00:40:21.290 iops : min= 352, max= 480, avg=458.11, stdev=35.46, samples=19 00:40:21.290 lat (msec) : 20=0.35%, 50=99.65% 00:40:21.290 cpu : usr=96.24%, sys=2.31%, ctx=320, majf=0, minf=29 00:40:21.290 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:40:21.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:21.290 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:21.290 issued rwts: total=4592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:21.290 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:21.290 filename2: (groupid=0, jobs=1): err= 0: pid=1413372: Tue Oct 8 18:50:49 2024 00:40:21.290 read: IOPS=457, BW=1830KiB/s (1874kB/s)(17.9MiB/10003msec) 00:40:21.290 slat (nsec): min=8544, max=97358, avg=33722.77, stdev=11304.70 00:40:21.290 clat (usec): min=14792, max=73315, avg=34665.77, stdev=3860.54 00:40:21.290 lat (usec): min=14860, max=73377, avg=34699.49, stdev=3861.52 00:40:21.290 clat percentiles (usec): 00:40:21.290 | 1.00th=[32900], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:40:21.290 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:40:21.290 | 70.00th=[33817], 80.00th=[34866], 90.00th=[39584], 95.00th=[42730], 00:40:21.290 | 99.00th=[43254], 99.50th=[43254], 99.90th=[72877], 99.95th=[72877], 00:40:21.290 | 99.99th=[72877] 00:40:21.290 bw ( KiB/s): min= 1408, max= 1920, per=4.14%, avg=1818.95, stdev=157.23, samples=19 00:40:21.290 iops : min= 352, max= 480, avg=454.74, stdev=39.31, samples=19 00:40:21.290 lat (msec) : 20=0.35%, 50=99.30%, 100=0.35% 00:40:21.290 cpu : usr=97.42%, sys=1.78%, ctx=109, majf=0, minf=19 00:40:21.290 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:40:21.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:21.290 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:21.290 issued rwts: total=4576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:21.290 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:21.290 filename2: (groupid=0, jobs=1): err= 0: 
pid=1413373: Tue Oct 8 18:50:49 2024 00:40:21.290 read: IOPS=457, BW=1830KiB/s (1874kB/s)(17.9MiB/10002msec) 00:40:21.290 slat (usec): min=9, max=107, avg=36.38, stdev=12.52 00:40:21.290 clat (usec): min=15063, max=73315, avg=34654.00, stdev=3841.15 00:40:21.290 lat (usec): min=15097, max=73366, avg=34690.38, stdev=3842.02 00:40:21.290 clat percentiles (usec): 00:40:21.290 | 1.00th=[32900], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:40:21.290 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:40:21.290 | 70.00th=[33817], 80.00th=[34866], 90.00th=[39584], 95.00th=[42730], 00:40:21.290 | 99.00th=[43254], 99.50th=[43254], 99.90th=[72877], 99.95th=[72877], 00:40:21.290 | 99.99th=[72877] 00:40:21.290 bw ( KiB/s): min= 1408, max= 1920, per=4.14%, avg=1818.95, stdev=157.23, samples=19 00:40:21.290 iops : min= 352, max= 480, avg=454.74, stdev=39.31, samples=19 00:40:21.290 lat (msec) : 20=0.35%, 50=99.30%, 100=0.35% 00:40:21.290 cpu : usr=98.19%, sys=1.39%, ctx=18, majf=0, minf=35 00:40:21.290 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:40:21.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:21.290 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:21.290 issued rwts: total=4576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:21.290 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:21.290 filename2: (groupid=0, jobs=1): err= 0: pid=1413374: Tue Oct 8 18:50:49 2024 00:40:21.290 read: IOPS=457, BW=1829KiB/s (1873kB/s)(17.9MiB/10006msec) 00:40:21.290 slat (usec): min=8, max=123, avg=56.02, stdev=24.79 00:40:21.290 clat (usec): min=9438, max=75481, avg=34473.51, stdev=4068.93 00:40:21.290 lat (usec): min=9447, max=75517, avg=34529.53, stdev=4062.44 00:40:21.290 clat percentiles (usec): 00:40:21.290 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:40:21.290 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:40:21.290 | 70.00th=[33817], 80.00th=[34341], 90.00th=[39060], 95.00th=[42730], 00:40:21.290 | 99.00th=[43254], 99.50th=[43254], 99.90th=[74974], 99.95th=[74974], 00:40:21.290 | 99.99th=[74974] 00:40:21.290 bw ( KiB/s): min= 1408, max= 1920, per=4.14%, avg=1818.95, stdev=157.23, samples=19 00:40:21.290 iops : min= 352, max= 480, avg=454.74, stdev=39.31, samples=19 00:40:21.290 lat (msec) : 10=0.22%, 20=0.13%, 50=99.30%, 100=0.35% 00:40:21.290 cpu : usr=98.32%, sys=1.18%, ctx=36, majf=0, minf=29 00:40:21.290 IO depths : 1=6.1%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:40:21.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:21.290 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:21.290 issued rwts: total=4576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:21.290 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:21.290 filename2: (groupid=0, jobs=1): err= 0: pid=1413375: Tue Oct 8 18:50:49 2024 00:40:21.290 read: IOPS=458, BW=1833KiB/s (1877kB/s)(17.9MiB/10017msec) 00:40:21.290 slat (nsec): min=11499, max=73719, avg=31955.22, stdev=7470.96 00:40:21.290 clat (usec): min=22787, max=46073, avg=34626.12, stdev=3023.20 00:40:21.290 lat (usec): min=22860, max=46110, avg=34658.07, stdev=3023.26 00:40:21.290 clat percentiles (usec): 00:40:21.290 | 1.00th=[30016], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:40:21.290 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:40:21.290 | 70.00th=[33817], 80.00th=[34866], 
90.00th=[38536], 95.00th=[42730], 00:40:21.291 | 99.00th=[43254], 99.50th=[43779], 99.90th=[44303], 99.95th=[44303], 00:40:21.291 | 99.99th=[45876] 00:40:21.291 bw ( KiB/s): min= 1408, max= 1920, per=4.17%, avg=1830.40, stdev=144.46, samples=20 00:40:21.291 iops : min= 352, max= 480, avg=457.60, stdev=36.11, samples=20 00:40:21.291 lat (msec) : 50=100.00% 00:40:21.291 cpu : usr=98.14%, sys=1.31%, ctx=90, majf=0, minf=26 00:40:21.291 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:40:21.291 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:21.291 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:21.291 issued rwts: total=4590,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:21.291 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:21.291 00:40:21.291 Run status group 0 (all jobs): 00:40:21.291 READ: bw=42.9MiB/s (45.0MB/s), 1828KiB/s-1837KiB/s (1872kB/s-1881kB/s), io=430MiB (451MB), run=10001-10019msec 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:40:21.291 
18:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:21.291 bdev_null0 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 
10.0.0.2 -s 4420 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:21.291 [2024-10-08 18:50:49.591260] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:21.291 bdev_null1 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:40:21.291 { 00:40:21.291 "params": { 00:40:21.291 "name": "Nvme$subsystem", 00:40:21.291 "trtype": "$TEST_TRANSPORT", 00:40:21.291 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:21.291 "adrfam": "ipv4", 00:40:21.291 "trsvcid": "$NVMF_PORT", 00:40:21.291 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:40:21.291 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:21.291 "hdgst": ${hdgst:-false}, 00:40:21.291 "ddgst": ${ddgst:-false} 00:40:21.291 }, 00:40:21.291 "method": "bdev_nvme_attach_controller" 00:40:21.291 } 00:40:21.291 EOF 00:40:21.291 )") 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:40:21.291 18:50:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:40:21.292 18:50:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:40:21.292 { 00:40:21.292 "params": { 00:40:21.292 "name": "Nvme$subsystem", 00:40:21.292 "trtype": "$TEST_TRANSPORT", 00:40:21.292 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:21.292 "adrfam": "ipv4", 00:40:21.292 "trsvcid": "$NVMF_PORT", 00:40:21.292 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:21.292 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:21.292 "hdgst": ${hdgst:-false}, 00:40:21.292 "ddgst": ${ddgst:-false} 00:40:21.292 }, 00:40:21.292 "method": "bdev_nvme_attach_controller" 00:40:21.292 } 00:40:21.292 EOF 00:40:21.292 )") 00:40:21.292 18:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:40:21.292 18:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:40:21.292 18:50:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:40:21.292 18:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:40:21.292 18:50:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:40:21.292 
18:50:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 00:40:21.292 18:50:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:40:21.292 18:50:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:40:21.292 "params": { 00:40:21.292 "name": "Nvme0", 00:40:21.292 "trtype": "tcp", 00:40:21.292 "traddr": "10.0.0.2", 00:40:21.292 "adrfam": "ipv4", 00:40:21.292 "trsvcid": "4420", 00:40:21.292 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:21.292 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:21.292 "hdgst": false, 00:40:21.292 "ddgst": false 00:40:21.292 }, 00:40:21.292 "method": "bdev_nvme_attach_controller" 00:40:21.292 },{ 00:40:21.292 "params": { 00:40:21.292 "name": "Nvme1", 00:40:21.292 "trtype": "tcp", 00:40:21.292 "traddr": "10.0.0.2", 00:40:21.292 "adrfam": "ipv4", 00:40:21.292 "trsvcid": "4420", 00:40:21.292 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:21.292 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:21.292 "hdgst": false, 00:40:21.292 "ddgst": false 00:40:21.292 }, 00:40:21.292 "method": "bdev_nvme_attach_controller" 00:40:21.292 }' 00:40:21.292 18:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:40:21.292 18:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:40:21.292 18:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:40:21.292 18:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:21.292 18:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:40:21.292 18:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:40:21.292 18:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:40:21.292 18:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:40:21.292 18:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:21.292 18:50:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:21.551 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:40:21.551 ... 00:40:21.551 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:40:21.551 ... 
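Note on the setup traced above: the harness creates two null bdevs, exports them over NVMe/TCP, and hands a generated JSON config to fio's spdk_bdev ioengine. A minimal manual sketch of the same flow follows; the RPC names, bdev parameters, and the 10.0.0.2:4420 listener come from the trace, while the transport-create step, the Nvme0n1 filename, and the exact fio flags are assumptions rather than the harness's literal invocation.
    # sketch only: assumes a running nvmf_tgt; creating the TCP transport first is an
    # assumed prerequisite that is not shown in this part of the trace
    scripts/rpc.py nvmf_create_transport -t tcp
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # repeat for bdev_null1 / cnode1, then drive the target through the fio bdev plugin
    LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio --ioengine=spdk_bdev \
        --spdk_json_conf=./nvme.json --name=filename0 --filename=Nvme0n1 \
        --rw=randread --bs=8k,16k,128k --numjobs=2 --iodepth=8 --runtime=5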
00:40:21.551 fio-3.35 00:40:21.551 Starting 4 threads 00:40:28.125 00:40:28.125 filename0: (groupid=0, jobs=1): err= 0: pid=1414749: Tue Oct 8 18:50:56 2024 00:40:28.125 read: IOPS=800, BW=6402KiB/s (6556kB/s)(31.3MiB/5006msec) 00:40:28.125 slat (nsec): min=9318, max=98755, avg=34802.00, stdev=10252.47 00:40:28.125 clat (usec): min=2337, max=17232, avg=9854.35, stdev=1199.41 00:40:28.125 lat (usec): min=2373, max=17266, avg=9889.15, stdev=1200.29 00:40:28.125 clat percentiles (usec): 00:40:28.125 | 1.00th=[ 5473], 5.00th=[ 8225], 10.00th=[ 8979], 20.00th=[ 9372], 00:40:28.125 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[10028], 00:40:28.125 | 70.00th=[10159], 80.00th=[10290], 90.00th=[10421], 95.00th=[11207], 00:40:28.125 | 99.00th=[14353], 99.50th=[15139], 99.90th=[16188], 99.95th=[16909], 00:40:28.125 | 99.99th=[17171] 00:40:28.125 bw ( KiB/s): min= 6272, max= 6656, per=25.18%, avg=6396.80, stdev=144.26, samples=10 00:40:28.125 iops : min= 784, max= 832, avg=799.60, stdev=18.03, samples=10 00:40:28.125 lat (msec) : 4=0.22%, 10=49.98%, 20=49.80% 00:40:28.125 cpu : usr=95.50%, sys=3.64%, ctx=13, majf=0, minf=10 00:40:28.125 IO depths : 1=1.0%, 2=24.0%, 4=50.8%, 8=24.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:28.125 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:28.125 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:28.125 issued rwts: total=4006,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:28.125 latency : target=0, window=0, percentile=100.00%, depth=8 00:40:28.125 filename0: (groupid=0, jobs=1): err= 0: pid=1414751: Tue Oct 8 18:50:56 2024 00:40:28.125 read: IOPS=773, BW=6186KiB/s (6335kB/s)(30.2MiB/5002msec) 00:40:28.125 slat (nsec): min=9267, max=88533, avg=35004.69, stdev=11251.05 00:40:28.125 clat (usec): min=2178, max=19011, avg=10203.36, stdev=1505.72 00:40:28.125 lat (usec): min=2213, max=19039, avg=10238.37, stdev=1505.41 00:40:28.125 clat percentiles (usec): 00:40:28.125 | 1.00th=[ 5538], 5.00th=[ 8717], 10.00th=[ 9241], 20.00th=[ 9634], 00:40:28.125 | 30.00th=[ 9896], 40.00th=[10028], 50.00th=[10028], 60.00th=[10159], 00:40:28.125 | 70.00th=[10159], 80.00th=[10290], 90.00th=[11338], 95.00th=[12911], 00:40:28.125 | 99.00th=[16319], 99.50th=[16909], 99.90th=[18482], 99.95th=[18482], 00:40:28.125 | 99.99th=[19006] 00:40:28.125 bw ( KiB/s): min= 6000, max= 6653, per=24.32%, avg=6177.30, stdev=198.25, samples=10 00:40:28.125 iops : min= 750, max= 831, avg=772.10, stdev=24.61, samples=10 00:40:28.125 lat (msec) : 4=0.28%, 10=40.95%, 20=58.76% 00:40:28.125 cpu : usr=95.00%, sys=4.18%, ctx=7, majf=0, minf=9 00:40:28.125 IO depths : 1=0.3%, 2=20.6%, 4=53.1%, 8=26.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:28.125 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:28.125 complete : 0=0.0%, 4=91.0%, 8=9.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:28.125 issued rwts: total=3868,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:28.125 latency : target=0, window=0, percentile=100.00%, depth=8 00:40:28.125 filename1: (groupid=0, jobs=1): err= 0: pid=1414752: Tue Oct 8 18:50:56 2024 00:40:28.125 read: IOPS=807, BW=6462KiB/s (6617kB/s)(31.6MiB/5008msec) 00:40:28.125 slat (nsec): min=5156, max=66399, avg=15528.18, stdev=8057.32 00:40:28.125 clat (usec): min=2548, max=18733, avg=9845.84, stdev=1175.80 00:40:28.125 lat (usec): min=2557, max=18742, avg=9861.36, stdev=1175.37 00:40:28.125 clat percentiles (usec): 00:40:28.125 | 1.00th=[ 5473], 5.00th=[ 8225], 10.00th=[ 8848], 20.00th=[ 9372], 00:40:28.125 | 
30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[10159], 00:40:28.125 | 70.00th=[10290], 80.00th=[10290], 90.00th=[10683], 95.00th=[11207], 00:40:28.125 | 99.00th=[12387], 99.50th=[13173], 99.90th=[18744], 99.95th=[18744], 00:40:28.125 | 99.99th=[18744] 00:40:28.125 bw ( KiB/s): min= 6064, max= 6912, per=25.43%, avg=6459.20, stdev=266.57, samples=10 00:40:28.125 iops : min= 758, max= 864, avg=807.40, stdev=33.32, samples=10 00:40:28.125 lat (msec) : 4=0.49%, 10=46.53%, 20=52.98% 00:40:28.125 cpu : usr=97.06%, sys=2.30%, ctx=24, majf=0, minf=0 00:40:28.125 IO depths : 1=0.7%, 2=12.5%, 4=59.5%, 8=27.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:28.125 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:28.125 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:28.125 issued rwts: total=4045,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:28.125 latency : target=0, window=0, percentile=100.00%, depth=8 00:40:28.125 filename1: (groupid=0, jobs=1): err= 0: pid=1414753: Tue Oct 8 18:50:56 2024 00:40:28.125 read: IOPS=796, BW=6369KiB/s (6522kB/s)(31.1MiB/5003msec) 00:40:28.125 slat (nsec): min=9112, max=77553, avg=35630.72, stdev=10015.75 00:40:28.125 clat (usec): min=2315, max=18079, avg=9901.78, stdev=1348.71 00:40:28.125 lat (usec): min=2349, max=18091, avg=9937.41, stdev=1349.48 00:40:28.125 clat percentiles (usec): 00:40:28.125 | 1.00th=[ 5145], 5.00th=[ 8356], 10.00th=[ 8848], 20.00th=[ 9372], 00:40:28.125 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[10028], 00:40:28.125 | 70.00th=[10159], 80.00th=[10290], 90.00th=[10421], 95.00th=[11600], 00:40:28.125 | 99.00th=[15533], 99.50th=[17171], 99.90th=[17695], 99.95th=[17957], 00:40:28.125 | 99.99th=[17957] 00:40:28.125 bw ( KiB/s): min= 6144, max= 6784, per=25.04%, avg=6361.60, stdev=199.87, samples=10 00:40:28.125 iops : min= 768, max= 848, avg=795.20, stdev=24.98, samples=10 00:40:28.125 lat (msec) : 4=0.25%, 10=49.33%, 20=50.41% 00:40:28.125 cpu : usr=95.14%, sys=4.02%, ctx=8, majf=0, minf=9 00:40:28.125 IO depths : 1=0.5%, 2=23.5%, 4=51.2%, 8=24.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:28.125 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:28.125 complete : 0=0.0%, 4=90.2%, 8=9.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:28.125 issued rwts: total=3983,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:28.125 latency : target=0, window=0, percentile=100.00%, depth=8 00:40:28.125 00:40:28.125 Run status group 0 (all jobs): 00:40:28.125 READ: bw=24.8MiB/s (26.0MB/s), 6186KiB/s-6462KiB/s (6335kB/s-6617kB/s), io=124MiB (130MB), run=5002-5008msec 00:40:28.125 18:50:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:40:28.125 18:50:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:40:28.125 18:50:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:40:28.125 18:50:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:40:28.125 18:50:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:40:28.125 18:50:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:28.125 18:50:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:28.125 18:50:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:28.125 18:50:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:28.125 18:50:56 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:40:28.125 18:50:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:28.125 18:50:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:28.125 18:50:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:28.125 18:50:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:40:28.125 18:50:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:40:28.125 18:50:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:40:28.125 18:50:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:28.125 18:50:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:28.125 18:50:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:28.125 18:50:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:28.126 18:50:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:40:28.126 18:50:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:28.126 18:50:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:28.126 18:50:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:28.126 00:40:28.126 real 0m26.285s 00:40:28.126 user 4m33.971s 00:40:28.126 sys 0m6.943s 00:40:28.126 18:50:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:28.126 18:50:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:28.126 ************************************ 00:40:28.126 END TEST fio_dif_rand_params 00:40:28.126 ************************************ 00:40:28.126 18:50:56 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:40:28.126 18:50:56 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:40:28.126 18:50:56 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:28.126 18:50:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:28.126 ************************************ 00:40:28.126 START TEST fio_dif_digest 00:40:28.126 ************************************ 00:40:28.126 18:50:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:40:28.126 18:50:56 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:40:28.126 18:50:56 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:40:28.126 18:50:56 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:40:28.126 18:50:56 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:40:28.126 18:50:56 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:40:28.126 18:50:56 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:40:28.126 18:50:56 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:40:28.126 18:50:56 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:40:28.126 18:50:56 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:40:28.126 18:50:56 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:40:28.126 18:50:56 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:40:28.126 18:50:56 nvmf_dif.fio_dif_digest -- 
target/dif.sh@28 -- # local sub 00:40:28.126 18:50:56 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:40:28.126 18:50:56 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:40:28.126 18:50:56 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:40:28.126 18:50:56 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:40:28.126 18:50:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:28.126 18:50:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:40:28.126 bdev_null0 00:40:28.126 18:50:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:28.126 18:50:56 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:28.126 18:50:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:28.126 18:50:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:40:28.126 18:50:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:28.126 18:50:56 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:28.126 18:50:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:28.126 18:50:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:40:28.126 18:50:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:28.126 18:50:56 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:28.126 18:50:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:28.126 18:50:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:40:28.126 [2024-10-08 18:50:56.618021] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:28.126 18:50:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:28.126 18:50:56 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:40:28.126 18:50:56 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:40:28.126 18:50:56 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:40:28.126 18:50:56 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # config=() 00:40:28.126 18:50:56 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # local subsystem config 00:40:28.126 18:50:56 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:28.126 18:50:56 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:40:28.126 18:50:56 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:40:28.126 18:50:56 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:40:28.126 18:50:56 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:40:28.126 { 00:40:28.126 "params": { 00:40:28.126 "name": "Nvme$subsystem", 00:40:28.126 "trtype": "$TEST_TRANSPORT", 00:40:28.126 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:28.126 "adrfam": "ipv4", 00:40:28.126 "trsvcid": "$NVMF_PORT", 00:40:28.126 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:28.126 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:28.126 "hdgst": 
${hdgst:-false}, 00:40:28.126 "ddgst": ${ddgst:-false} 00:40:28.126 }, 00:40:28.126 "method": "bdev_nvme_attach_controller" 00:40:28.126 } 00:40:28.126 EOF 00:40:28.126 )") 00:40:28.126 18:50:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:28.126 18:50:56 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:40:28.126 18:50:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:40:28.126 18:50:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:28.126 18:50:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:40:28.126 18:50:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:28.126 18:50:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:40:28.126 18:50:56 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # cat 00:40:28.126 18:50:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:40:28.126 18:50:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:40:28.126 18:50:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:28.126 18:50:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:40:28.126 18:50:56 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:40:28.126 18:50:56 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:40:28.126 18:50:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:40:28.126 18:50:56 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # jq . 
00:40:28.126 18:50:56 nvmf_dif.fio_dif_digest -- nvmf/common.sh@583 -- # IFS=, 00:40:28.126 18:50:56 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:40:28.126 "params": { 00:40:28.126 "name": "Nvme0", 00:40:28.126 "trtype": "tcp", 00:40:28.126 "traddr": "10.0.0.2", 00:40:28.126 "adrfam": "ipv4", 00:40:28.126 "trsvcid": "4420", 00:40:28.126 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:28.126 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:28.126 "hdgst": true, 00:40:28.126 "ddgst": true 00:40:28.126 }, 00:40:28.126 "method": "bdev_nvme_attach_controller" 00:40:28.126 }' 00:40:28.126 18:50:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:40:28.126 18:50:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:40:28.126 18:50:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:40:28.386 18:50:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:28.386 18:50:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:40:28.386 18:50:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:40:28.386 18:50:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:40:28.386 18:50:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:40:28.386 18:50:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:28.386 18:50:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:28.645 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:40:28.645 ... 
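For the digest run the initiator enables NVMe/TCP header and data digests by passing "hdgst": true and "ddgst": true to bdev_nvme_attach_controller, against a null bdev created with --dif-type 3. Below is a sketch of a standalone JSON config the spdk_bdev fio plugin could consume; the params block mirrors what the trace prints, while the surrounding subsystems/bdev/config envelope is the standard SPDK JSON config layout and is assumed here, since the trace only shows the inner entry.
    # sketch: write the attach-controller config to a file usable with --spdk_json_conf
    cat > nvme0-digest.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": true,
                "ddgst": true
              }
            }
          ]
        }
      ]
    }
    EOF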
00:40:28.645 fio-3.35 00:40:28.645 Starting 3 threads 00:40:40.911 00:40:40.911 filename0: (groupid=0, jobs=1): err= 0: pid=1415515: Tue Oct 8 18:51:07 2024 00:40:40.911 read: IOPS=93, BW=11.7MiB/s (12.3MB/s)(118MiB/10047msec) 00:40:40.911 slat (nsec): min=5649, max=27881, avg=16883.86, stdev=1520.77 00:40:40.911 clat (usec): min=12593, max=82059, avg=31861.23, stdev=7295.81 00:40:40.911 lat (usec): min=12609, max=82075, avg=31878.12, stdev=7295.84 00:40:40.911 clat percentiles (usec): 00:40:40.911 | 1.00th=[15533], 5.00th=[18744], 10.00th=[21627], 20.00th=[26346], 00:40:40.911 | 30.00th=[29230], 40.00th=[30802], 50.00th=[32113], 60.00th=[33424], 00:40:40.911 | 70.00th=[34866], 80.00th=[36439], 90.00th=[40633], 95.00th=[43779], 00:40:40.911 | 99.00th=[47973], 99.50th=[49021], 99.90th=[82314], 99.95th=[82314], 00:40:40.911 | 99.99th=[82314] 00:40:40.911 bw ( KiB/s): min= 8960, max=15360, per=30.44%, avg=12058.75, stdev=1376.64, samples=20 00:40:40.911 iops : min= 70, max= 120, avg=94.20, stdev=10.76, samples=20 00:40:40.911 lat (msec) : 20=6.36%, 50=93.22%, 100=0.42% 00:40:40.911 cpu : usr=95.01%, sys=4.30%, ctx=154, majf=0, minf=11 00:40:40.911 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:40.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:40.911 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:40.911 issued rwts: total=944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:40.911 latency : target=0, window=0, percentile=100.00%, depth=3 00:40:40.911 filename0: (groupid=0, jobs=1): err= 0: pid=1415516: Tue Oct 8 18:51:07 2024 00:40:40.911 read: IOPS=108, BW=13.6MiB/s (14.3MB/s)(137MiB/10053msec) 00:40:40.911 slat (nsec): min=4988, max=29976, avg=16151.72, stdev=2013.09 00:40:40.911 clat (usec): min=9451, max=65107, avg=27476.76, stdev=5655.60 00:40:40.911 lat (usec): min=9467, max=65123, avg=27492.91, stdev=5655.73 00:40:40.911 clat percentiles (usec): 00:40:40.911 | 1.00th=[12649], 5.00th=[16188], 10.00th=[19006], 20.00th=[23200], 00:40:40.911 | 30.00th=[25822], 40.00th=[27395], 50.00th=[28443], 60.00th=[29230], 00:40:40.911 | 70.00th=[30540], 80.00th=[31851], 90.00th=[33424], 95.00th=[34866], 00:40:40.911 | 99.00th=[38011], 99.50th=[39060], 99.90th=[53740], 99.95th=[65274], 00:40:40.911 | 99.99th=[65274] 00:40:40.911 bw ( KiB/s): min=11520, max=17664, per=35.32%, avg=13990.40, stdev=1319.21, samples=20 00:40:40.911 iops : min= 90, max= 138, avg=109.30, stdev=10.31, samples=20 00:40:40.911 lat (msec) : 10=0.09%, 20=12.60%, 50=87.12%, 100=0.18% 00:40:40.911 cpu : usr=94.69%, sys=4.71%, ctx=95, majf=0, minf=9 00:40:40.911 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:40.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:40.911 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:40.911 issued rwts: total=1095,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:40.911 latency : target=0, window=0, percentile=100.00%, depth=3 00:40:40.911 filename0: (groupid=0, jobs=1): err= 0: pid=1415517: Tue Oct 8 18:51:07 2024 00:40:40.911 read: IOPS=106, BW=13.3MiB/s (14.0MB/s)(134MiB/10052msec) 00:40:40.911 slat (nsec): min=5708, max=38806, avg=15706.28, stdev=1905.44 00:40:40.911 clat (msec): min=13, max=101, avg=28.07, stdev=10.15 00:40:40.911 lat (msec): min=13, max=101, avg=28.08, stdev=10.15 00:40:40.911 clat percentiles (msec): 00:40:40.911 | 1.00th=[ 15], 5.00th=[ 20], 10.00th=[ 22], 20.00th=[ 24], 00:40:40.911 | 30.00th=[ 25], 
40.00th=[ 26], 50.00th=[ 27], 60.00th=[ 28], 00:40:40.911 | 70.00th=[ 28], 80.00th=[ 30], 90.00th=[ 32], 95.00th=[ 62], 00:40:40.911 | 99.00th=[ 71], 99.50th=[ 72], 99.90th=[ 102], 99.95th=[ 102], 00:40:40.911 | 99.99th=[ 102] 00:40:40.911 bw ( KiB/s): min=11008, max=15360, per=34.54%, avg=13683.20, stdev=1274.53, samples=20 00:40:40.911 iops : min= 86, max= 120, avg=106.90, stdev= 9.96, samples=20 00:40:40.911 lat (msec) : 20=5.78%, 50=88.90%, 100=5.13%, 250=0.19% 00:40:40.911 cpu : usr=94.84%, sys=4.67%, ctx=17, majf=0, minf=9 00:40:40.911 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:40.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:40.911 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:40.911 issued rwts: total=1072,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:40.911 latency : target=0, window=0, percentile=100.00%, depth=3 00:40:40.911 00:40:40.911 Run status group 0 (all jobs): 00:40:40.911 READ: bw=38.7MiB/s (40.6MB/s), 11.7MiB/s-13.6MiB/s (12.3MB/s-14.3MB/s), io=389MiB (408MB), run=10047-10053msec 00:40:40.911 18:51:08 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:40:40.911 18:51:08 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:40:40.911 18:51:08 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:40:40.911 18:51:08 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:40:40.911 18:51:08 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:40:40.911 18:51:08 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:40.911 18:51:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:40.911 18:51:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:40:40.911 18:51:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:40.911 18:51:08 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:40:40.911 18:51:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:40.911 18:51:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:40:40.911 18:51:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:40.911 00:40:40.911 real 0m11.638s 00:40:40.911 user 0m29.942s 00:40:40.911 sys 0m1.870s 00:40:40.911 18:51:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:40.911 18:51:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:40:40.911 ************************************ 00:40:40.911 END TEST fio_dif_digest 00:40:40.911 ************************************ 00:40:40.911 18:51:08 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:40:40.911 18:51:08 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:40:40.911 18:51:08 nvmf_dif -- nvmf/common.sh@514 -- # nvmfcleanup 00:40:40.911 18:51:08 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:40:40.911 18:51:08 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:40.911 18:51:08 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:40:40.911 18:51:08 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:40.911 18:51:08 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:40.911 rmmod nvme_tcp 00:40:40.911 rmmod nvme_fabrics 00:40:40.911 rmmod nvme_keyring 00:40:40.911 18:51:08 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:40:40.911 18:51:08 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:40:40.911 18:51:08 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:40:40.911 18:51:08 nvmf_dif -- nvmf/common.sh@515 -- # '[' -n 1409339 ']' 00:40:40.911 18:51:08 nvmf_dif -- nvmf/common.sh@516 -- # killprocess 1409339 00:40:40.911 18:51:08 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 1409339 ']' 00:40:40.911 18:51:08 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 1409339 00:40:40.911 18:51:08 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:40:40.911 18:51:08 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:40.911 18:51:08 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1409339 00:40:40.911 18:51:08 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:40:40.911 18:51:08 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:40:40.911 18:51:08 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1409339' 00:40:40.911 killing process with pid 1409339 00:40:40.911 18:51:08 nvmf_dif -- common/autotest_common.sh@969 -- # kill 1409339 00:40:40.911 18:51:08 nvmf_dif -- common/autotest_common.sh@974 -- # wait 1409339 00:40:40.912 18:51:08 nvmf_dif -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:40:40.912 18:51:08 nvmf_dif -- nvmf/common.sh@519 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:40:41.848 Waiting for block devices as requested 00:40:42.107 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:40:42.107 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:40:42.366 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:40:42.366 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:40:42.366 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:40:42.624 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:40:42.624 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:40:42.624 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:40:42.624 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:40:42.883 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:40:42.883 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:40:42.883 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:40:42.883 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:40:43.141 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:40:43.141 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:40:43.141 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:40:43.141 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:40:43.399 18:51:11 nvmf_dif -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:40:43.399 18:51:11 nvmf_dif -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:40:43.399 18:51:11 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:40:43.399 18:51:11 nvmf_dif -- nvmf/common.sh@789 -- # iptables-save 00:40:43.399 18:51:11 nvmf_dif -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:40:43.399 18:51:11 nvmf_dif -- nvmf/common.sh@789 -- # iptables-restore 00:40:43.399 18:51:11 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:43.399 18:51:11 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:43.399 18:51:11 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:43.399 18:51:11 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:43.399 18:51:11 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:45.305 18:51:13 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:45.563 00:40:45.563 real 1m12.823s 00:40:45.563 user 
6m35.687s 00:40:45.563 sys 0m20.551s 00:40:45.563 18:51:13 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:45.563 18:51:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:45.563 ************************************ 00:40:45.563 END TEST nvmf_dif 00:40:45.563 ************************************ 00:40:45.563 18:51:13 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:40:45.564 18:51:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:40:45.564 18:51:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:45.564 18:51:13 -- common/autotest_common.sh@10 -- # set +x 00:40:45.564 ************************************ 00:40:45.564 START TEST nvmf_abort_qd_sizes 00:40:45.564 ************************************ 00:40:45.564 18:51:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:40:45.564 * Looking for test storage... 00:40:45.564 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:45.564 18:51:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:40:45.564 18:51:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lcov --version 00:40:45.564 18:51:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:40:45.564 18:51:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:40:45.564 18:51:14 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:45.564 18:51:14 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:45.564 18:51:14 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:45.564 18:51:14 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:40:45.564 18:51:14 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:40:45.564 18:51:14 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:40:45.564 18:51:14 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:40:45.564 18:51:14 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:40:45.564 18:51:14 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:40:45.564 18:51:14 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:40:45.564 18:51:14 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:45.564 18:51:14 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:40:45.564 18:51:14 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:40:45.564 18:51:14 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:45.564 18:51:14 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:45.564 18:51:14 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:40:45.564 18:51:14 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:40:45.564 18:51:14 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:45.564 18:51:14 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:40:45.564 18:51:14 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:40:45.564 18:51:14 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:40:45.564 18:51:14 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:40:45.564 18:51:14 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:45.564 18:51:14 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:40:45.564 18:51:14 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:40:45.564 18:51:14 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:45.564 18:51:14 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:45.564 18:51:14 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:40:45.564 18:51:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:45.564 18:51:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:40:45.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:45.564 --rc genhtml_branch_coverage=1 00:40:45.564 --rc genhtml_function_coverage=1 00:40:45.564 --rc genhtml_legend=1 00:40:45.564 --rc geninfo_all_blocks=1 00:40:45.564 --rc geninfo_unexecuted_blocks=1 00:40:45.564 00:40:45.564 ' 00:40:45.564 18:51:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:40:45.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:45.564 --rc genhtml_branch_coverage=1 00:40:45.564 --rc genhtml_function_coverage=1 00:40:45.564 --rc genhtml_legend=1 00:40:45.564 --rc geninfo_all_blocks=1 00:40:45.564 --rc geninfo_unexecuted_blocks=1 00:40:45.564 00:40:45.564 ' 00:40:45.564 18:51:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:40:45.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:45.564 --rc genhtml_branch_coverage=1 00:40:45.564 --rc genhtml_function_coverage=1 00:40:45.564 --rc genhtml_legend=1 00:40:45.564 --rc geninfo_all_blocks=1 00:40:45.564 --rc geninfo_unexecuted_blocks=1 00:40:45.564 00:40:45.564 ' 00:40:45.564 18:51:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:40:45.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:45.564 --rc genhtml_branch_coverage=1 00:40:45.564 --rc genhtml_function_coverage=1 00:40:45.564 --rc genhtml_legend=1 00:40:45.564 --rc geninfo_all_blocks=1 00:40:45.564 --rc geninfo_unexecuted_blocks=1 00:40:45.564 00:40:45.564 ' 00:40:45.564 18:51:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:45.564 18:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:40:45.564 18:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:45.564 18:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:45.564 18:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:45.564 18:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:45.564 18:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:40:45.564 18:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:45.564 18:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:45.564 18:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:45.564 18:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:45.564 18:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:45.823 18:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:40:45.823 18:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:40:45.823 18:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:45.823 18:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:45.823 18:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:45.823 18:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:45.823 18:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:45.823 18:51:14 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:40:45.823 18:51:14 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:45.823 18:51:14 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:45.823 18:51:14 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:45.823 18:51:14 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:45.823 18:51:14 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:45.824 18:51:14 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:45.824 18:51:14 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:40:45.824 18:51:14 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:45.824 18:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:40:45.824 18:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:45.824 18:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:45.824 18:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:45.824 18:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:45.824 18:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:45.824 18:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:45.824 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:45.824 18:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:45.824 18:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:45.824 18:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:45.824 18:51:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:40:45.824 18:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:40:45.824 18:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:45.824 18:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # prepare_net_devs 00:40:45.824 18:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@436 -- # local -g is_hw=no 00:40:45.824 18:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # remove_spdk_ns 00:40:45.824 18:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:45.824 18:51:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:45.824 18:51:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:45.824 18:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:40:45.824 18:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:40:45.824 18:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:40:45.824 18:51:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:48.355 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:48.355 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:40:48.355 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:48.355 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:48.355 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:48.355 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:48.355 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:40:48.356 Found 0000:84:00.0 (0x8086 - 0x159b) 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:40:48.356 Found 0000:84:00.1 (0x8086 - 0x159b) 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ up == up ]] 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:40:48.356 Found net devices under 0000:84:00.0: cvl_0_0 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ up == up ]] 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:40:48.356 Found net devices under 0000:84:00.1: cvl_0_1 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # is_hw=yes 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:48.356 18:51:16 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:48.356 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:48.616 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:48.617 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:48.617 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:48.617 18:51:16 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:48.617 18:51:17 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:48.617 18:51:17 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:48.617 18:51:17 nvmf_abort_qd_sizes -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:48.617 18:51:17 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:48.617 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:48.617 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.301 ms 00:40:48.617 00:40:48.617 --- 10.0.0.2 ping statistics --- 00:40:48.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:48.617 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:40:48.617 18:51:17 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:48.617 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:48.617 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:40:48.617 00:40:48.617 --- 10.0.0.1 ping statistics --- 00:40:48.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:48.617 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:40:48.617 18:51:17 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:48.617 18:51:17 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # return 0 00:40:48.617 18:51:17 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:40:48.617 18:51:17 nvmf_abort_qd_sizes -- nvmf/common.sh@477 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:40:50.518 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:40:50.518 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:40:50.518 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:40:50.518 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:40:50.518 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:40:50.518 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:40:50.518 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:40:50.518 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:40:50.518 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:40:50.518 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:40:50.518 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:40:50.518 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:40:50.518 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:40:50.518 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:40:50.518 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:40:50.518 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:40:51.455 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:40:51.713 18:51:20 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:51.713 18:51:20 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:40:51.713 18:51:20 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:40:51.713 18:51:20 nvmf_abort_qd_sizes -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:51.713 18:51:20 nvmf_abort_qd_sizes -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:40:51.713 18:51:20 nvmf_abort_qd_sizes -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:40:51.713 18:51:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:40:51.714 18:51:20 nvmf_abort_qd_sizes -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:40:51.714 18:51:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:51.714 18:51:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:51.714 18:51:20 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # nvmfpid=1421210 00:40:51.714 18:51:20 nvmf_abort_qd_sizes -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:40:51.714 18:51:20 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # waitforlisten 1421210 00:40:51.714 18:51:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 1421210 ']' 00:40:51.714 18:51:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:51.714 18:51:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:51.714 18:51:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:40:51.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:51.714 18:51:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:51.714 18:51:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:51.714 [2024-10-08 18:51:20.128633] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:40:51.714 [2024-10-08 18:51:20.128758] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:51.714 [2024-10-08 18:51:20.248913] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:51.973 [2024-10-08 18:51:20.472081] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:51.973 [2024-10-08 18:51:20.472200] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:51.973 [2024-10-08 18:51:20.472237] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:51.973 [2024-10-08 18:51:20.472266] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:51.973 [2024-10-08 18:51:20.472292] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:51.973 [2024-10-08 18:51:20.475457] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:40:51.973 [2024-10-08 18:51:20.475522] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:40:51.973 [2024-10-08 18:51:20.475597] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:40:51.973 [2024-10-08 18:51:20.475601] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:40:53.349 18:51:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:53.349 18:51:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:40:53.349 18:51:21 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:40:53.349 18:51:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:53.349 18:51:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:53.349 18:51:21 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:53.349 18:51:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:40:53.349 18:51:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:40:53.349 18:51:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:40:53.349 18:51:21 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:40:53.349 18:51:21 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:40:53.349 18:51:21 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:82:00.0 ]] 00:40:53.349 18:51:21 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:40:53.349 18:51:21 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:40:53.349 18:51:21 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:82:00.0 ]] 00:40:53.349 18:51:21 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:40:53.349 
18:51:21 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:40:53.349 18:51:21 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:40:53.349 18:51:21 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:40:53.349 18:51:21 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:82:00.0 00:40:53.349 18:51:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:40:53.349 18:51:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:82:00.0 00:40:53.349 18:51:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:40:53.349 18:51:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:40:53.349 18:51:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:53.349 18:51:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:53.349 ************************************ 00:40:53.349 START TEST spdk_target_abort 00:40:53.349 ************************************ 00:40:53.349 18:51:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:40:53.349 18:51:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:40:53.349 18:51:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:82:00.0 -b spdk_target 00:40:53.349 18:51:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:53.349 18:51:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:56.633 spdk_targetn1 00:40:56.633 18:51:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:56.633 18:51:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:56.633 18:51:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:56.633 18:51:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:56.633 [2024-10-08 18:51:24.446295] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:56.633 18:51:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:56.634 18:51:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:40:56.634 18:51:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:56.634 18:51:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:56.634 18:51:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:56.634 18:51:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:40:56.634 18:51:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:56.634 18:51:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:56.634 18:51:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:56.634 18:51:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:40:56.634 18:51:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:56.634 18:51:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:56.634 [2024-10-08 18:51:24.478590] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:56.634 18:51:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:56.634 18:51:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:40:56.634 18:51:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:40:56.634 18:51:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:40:56.634 18:51:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:40:56.634 18:51:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:40:56.634 18:51:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:40:56.634 18:51:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:40:56.634 18:51:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:40:56.634 18:51:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:40:56.634 18:51:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:56.634 18:51:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:40:56.634 18:51:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:56.634 18:51:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:40:56.634 18:51:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:56.634 18:51:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:40:56.634 18:51:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:56.634 18:51:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:40:56.634 18:51:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:56.634 18:51:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:56.634 18:51:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:56.634 18:51:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:59.162 Initializing NVMe Controllers 00:40:59.162 Attached to NVMe over Fabrics controller at 
10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:40:59.162 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:59.162 Initialization complete. Launching workers. 00:40:59.162 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10951, failed: 0 00:40:59.162 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1242, failed to submit 9709 00:40:59.162 success 682, unsuccessful 560, failed 0 00:40:59.162 18:51:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:59.162 18:51:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:02.449 Initializing NVMe Controllers 00:41:02.449 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:41:02.449 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:02.449 Initialization complete. Launching workers. 00:41:02.449 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8862, failed: 0 00:41:02.449 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1237, failed to submit 7625 00:41:02.449 success 311, unsuccessful 926, failed 0 00:41:02.449 18:51:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:02.449 18:51:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:05.732 Initializing NVMe Controllers 00:41:05.732 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:41:05.732 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:05.732 Initialization complete. Launching workers. 
00:41:05.732 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31231, failed: 0 00:41:05.732 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2682, failed to submit 28549 00:41:05.732 success 525, unsuccessful 2157, failed 0 00:41:05.732 18:51:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:41:05.732 18:51:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:05.732 18:51:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:05.732 18:51:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:05.732 18:51:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:41:05.732 18:51:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:05.732 18:51:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:07.107 18:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:07.107 18:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1421210 00:41:07.107 18:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 1421210 ']' 00:41:07.107 18:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 1421210 00:41:07.107 18:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:41:07.107 18:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:07.107 18:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1421210 00:41:07.107 18:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:41:07.107 18:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:41:07.107 18:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1421210' 00:41:07.107 killing process with pid 1421210 00:41:07.107 18:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 1421210 00:41:07.107 18:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 1421210 00:41:07.674 00:41:07.674 real 0m14.352s 00:41:07.674 user 0m57.733s 00:41:07.674 sys 0m2.975s 00:41:07.674 18:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:07.674 18:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:07.674 ************************************ 00:41:07.674 END TEST spdk_target_abort 00:41:07.674 ************************************ 00:41:07.674 18:51:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:41:07.674 18:51:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:41:07.674 18:51:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:07.674 18:51:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:07.674 ************************************ 00:41:07.674 START TEST kernel_target_abort 00:41:07.674 
************************************ 00:41:07.674 18:51:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:41:07.674 18:51:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:41:07.674 18:51:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@767 -- # local ip 00:41:07.674 18:51:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # ip_candidates=() 00:41:07.674 18:51:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # local -A ip_candidates 00:41:07.674 18:51:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:07.674 18:51:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:07.674 18:51:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:41:07.674 18:51:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:07.674 18:51:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:41:07.674 18:51:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:41:07.674 18:51:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:41:07.674 18:51:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:41:07.674 18:51:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:41:07.674 18:51:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:41:07.674 18:51:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:41:07.674 18:51:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:41:07.674 18:51:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:41:07.674 18:51:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # local block nvme 00:41:07.674 18:51:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # [[ ! 
-e /sys/module/nvmet ]] 00:41:07.674 18:51:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # modprobe nvmet 00:41:07.674 18:51:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:41:07.674 18:51:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:41:09.050 Waiting for block devices as requested 00:41:09.050 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:41:09.309 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:41:09.309 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:41:09.567 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:41:09.567 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:41:09.826 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:41:09.826 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:41:09.826 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:41:09.826 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:41:10.085 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:41:10.085 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:41:10.085 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:41:10.344 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:41:10.344 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:41:10.344 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:41:10.344 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:41:10.604 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:41:10.604 18:51:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:41:10.604 18:51:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:41:10.604 18:51:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:41:10.604 18:51:39 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:41:10.604 18:51:39 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:41:10.604 18:51:39 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:41:10.604 18:51:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:41:10.604 18:51:39 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:41:10.604 18:51:39 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:41:10.864 No valid GPT data, bailing 00:41:10.864 18:51:39 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:41:10.864 18:51:39 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:41:10.864 18:51:39 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:41:10.864 18:51:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:41:10.864 18:51:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:41:10.864 18:51:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:41:10.864 18:51:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:41:10.864 18:51:39 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:41:10.865 18:51:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:41:10.865 18:51:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo 1 00:41:10.865 18:51:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:41:10.865 18:51:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:41:10.865 18:51:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:41:10.865 18:51:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # echo tcp 00:41:10.865 18:51:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 4420 00:41:10.865 18:51:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo ipv4 00:41:10.865 18:51:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:41:10.865 18:51:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420 00:41:10.865 00:41:10.865 Discovery Log Number of Records 2, Generation counter 2 00:41:10.865 =====Discovery Log Entry 0====== 00:41:10.865 trtype: tcp 00:41:10.865 adrfam: ipv4 00:41:10.865 subtype: current discovery subsystem 00:41:10.865 treq: not specified, sq flow control disable supported 00:41:10.865 portid: 1 00:41:10.865 trsvcid: 4420 00:41:10.865 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:41:10.865 traddr: 10.0.0.1 00:41:10.865 eflags: none 00:41:10.865 sectype: none 00:41:10.865 =====Discovery Log Entry 1====== 00:41:10.865 trtype: tcp 00:41:10.865 adrfam: ipv4 00:41:10.865 subtype: nvme subsystem 00:41:10.865 treq: not specified, sq flow control disable supported 00:41:10.865 portid: 1 00:41:10.865 trsvcid: 4420 00:41:10.865 subnqn: nqn.2016-06.io.spdk:testnqn 00:41:10.865 traddr: 10.0.0.1 00:41:10.865 eflags: none 00:41:10.865 sectype: none 00:41:10.865 18:51:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:41:10.865 18:51:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:41:10.865 18:51:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:41:10.865 18:51:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:41:10.865 18:51:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:41:10.865 18:51:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:41:10.865 18:51:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:41:10.865 18:51:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:41:10.865 18:51:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:41:10.865 18:51:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:10.865 18:51:39 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:41:10.865 18:51:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:10.865 18:51:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:41:10.865 18:51:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:10.865 18:51:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:41:10.865 18:51:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:10.865 18:51:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:41:10.865 18:51:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:10.865 18:51:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:10.865 18:51:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:10.865 18:51:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:14.161 Initializing NVMe Controllers 00:41:14.161 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:41:14.161 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:14.161 Initialization complete. Launching workers. 00:41:14.161 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 20294, failed: 0 00:41:14.161 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 20294, failed to submit 0 00:41:14.161 success 0, unsuccessful 20294, failed 0 00:41:14.161 18:51:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:14.161 18:51:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:17.456 Initializing NVMe Controllers 00:41:17.456 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:41:17.456 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:17.456 Initialization complete. Launching workers. 
00:41:17.456 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38100, failed: 0 00:41:17.456 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 9490, failed to submit 28610 00:41:17.456 success 0, unsuccessful 9490, failed 0 00:41:17.456 18:51:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:17.456 18:51:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:20.810 Initializing NVMe Controllers 00:41:20.810 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:41:20.810 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:20.810 Initialization complete. Launching workers. 00:41:20.810 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 42066, failed: 0 00:41:20.810 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 10530, failed to submit 31536 00:41:20.810 success 0, unsuccessful 10530, failed 0 00:41:20.810 18:51:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:41:20.810 18:51:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:41:20.810 18:51:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # echo 0 00:41:20.810 18:51:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:41:20.810 18:51:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:41:20.810 18:51:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:41:20.810 18:51:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:41:20.810 18:51:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:41:20.810 18:51:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:41:20.810 18:51:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:41:22.189 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:41:22.189 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:41:22.189 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:41:22.189 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:41:22.189 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:41:22.189 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:41:22.189 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:41:22.189 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:41:22.189 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:41:22.189 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:41:22.189 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:41:22.189 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:41:22.449 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:41:22.449 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:41:22.449 0000:80:04.1 (8086 0e21): ioatdma -> 
vfio-pci 00:41:22.449 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:41:23.393 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:41:23.393 00:41:23.393 real 0m15.764s 00:41:23.393 user 0m6.845s 00:41:23.393 sys 0m4.232s 00:41:23.393 18:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:23.393 18:51:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:23.393 ************************************ 00:41:23.393 END TEST kernel_target_abort 00:41:23.393 ************************************ 00:41:23.393 18:51:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:41:23.393 18:51:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:41:23.393 18:51:51 nvmf_abort_qd_sizes -- nvmf/common.sh@514 -- # nvmfcleanup 00:41:23.393 18:51:51 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:41:23.393 18:51:51 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:23.393 18:51:51 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:41:23.393 18:51:51 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:23.393 18:51:51 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:23.393 rmmod nvme_tcp 00:41:23.393 rmmod nvme_fabrics 00:41:23.393 rmmod nvme_keyring 00:41:23.393 18:51:51 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:23.393 18:51:51 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:41:23.393 18:51:51 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:41:23.393 18:51:51 nvmf_abort_qd_sizes -- nvmf/common.sh@515 -- # '[' -n 1421210 ']' 00:41:23.393 18:51:51 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # killprocess 1421210 00:41:23.393 18:51:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 1421210 ']' 00:41:23.393 18:51:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 1421210 00:41:23.393 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1421210) - No such process 00:41:23.393 18:51:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 1421210 is not found' 00:41:23.393 Process with pid 1421210 is not found 00:41:23.393 18:51:51 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:41:23.393 18:51:51 nvmf_abort_qd_sizes -- nvmf/common.sh@519 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:41:24.773 Waiting for block devices as requested 00:41:25.032 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:41:25.032 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:41:25.292 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:41:25.292 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:41:25.552 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:41:25.552 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:41:25.552 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:41:25.553 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:41:25.812 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:41:25.812 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:41:25.812 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:41:26.071 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:41:26.072 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:41:26.072 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:41:26.072 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:41:26.332 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:41:26.332 0000:80:04.0 
(8086 0e20): vfio-pci -> ioatdma 00:41:26.592 18:51:54 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:41:26.592 18:51:54 nvmf_abort_qd_sizes -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:41:26.592 18:51:54 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:41:26.592 18:51:54 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-save 00:41:26.592 18:51:54 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:41:26.592 18:51:54 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-restore 00:41:26.592 18:51:54 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:26.592 18:51:54 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:26.593 18:51:54 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:26.593 18:51:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:26.593 18:51:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:28.501 18:51:56 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:28.501 00:41:28.501 real 0m43.035s 00:41:28.501 user 1m7.924s 00:41:28.501 sys 0m12.423s 00:41:28.501 18:51:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:28.501 18:51:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:28.501 ************************************ 00:41:28.501 END TEST nvmf_abort_qd_sizes 00:41:28.501 ************************************ 00:41:28.501 18:51:56 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:41:28.501 18:51:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:41:28.501 18:51:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:28.501 18:51:56 -- common/autotest_common.sh@10 -- # set +x 00:41:28.502 ************************************ 00:41:28.502 START TEST keyring_file 00:41:28.502 ************************************ 00:41:28.502 18:51:57 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:41:28.762 * Looking for test storage... 
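In outline, the kernel_target_abort section above builds a kernel-space NVMe-oF/TCP target through nvmet configfs and then drives it with the SPDK abort example. A minimal sketch of the configfs side follows; the subsystem NQN, address and port are the ones printed in the trace, while the configfs attribute file names are the standard nvmet ones and are assumed here because the xtrace output hides the redirection targets (the backing device is whichever unused namespace the script found, /dev/nvme0n1 in this run):

# sketch of the configfs steps traced above (attribute names assumed, see note)
nqn=nqn.2016-06.io.spdk:testnqn
sub=/sys/kernel/config/nvmet/subsystems/$nqn
modprobe nvmet                                   # nvmf/common.sh@668 (nvmet_tcp is loaded as well before the TCP port is used)
mkdir -p "$sub/namespaces/1"                     # common.sh@684/@685
echo 1 > "$sub/attr_allow_any_host"              # one of the bare 'echo 1' writes in the trace
echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
echo 1 > "$sub/namespaces/1/enable"
mkdir -p /sys/kernel/config/nvmet/ports/1        # common.sh@686
echo 10.0.0.1 > /sys/kernel/config/nvmet/ports/1/addr_traddr
echo tcp      > /sys/kernel/config/nvmet/ports/1/addr_trtype
echo 4420     > /sys/kernel/config/nvmet/ports/1/addr_trsvcid
echo ipv4     > /sys/kernel/config/nvmet/ports/1/addr_adrfam
ln -s "$sub" /sys/kernel/config/nvmet/ports/1/subsystems/   # common.sh@703
# sanity check, mirroring the discovery run in the log
nvme discover -t tcp -a 10.0.0.1 -s 4420

The three abort runs that follow in the trace reuse this same target and only vary -q (4, 24, 64) on build/examples/abort, and clean_kernel_target then tears the configfs tree down in reverse order.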
00:41:28.762 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:41:28.762 18:51:57 keyring_file -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:41:28.762 18:51:57 keyring_file -- common/autotest_common.sh@1681 -- # lcov --version 00:41:28.762 18:51:57 keyring_file -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:41:28.762 18:51:57 keyring_file -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:41:28.762 18:51:57 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:28.762 18:51:57 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:28.762 18:51:57 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:28.762 18:51:57 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:41:28.762 18:51:57 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:41:28.762 18:51:57 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:41:28.762 18:51:57 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:41:28.762 18:51:57 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:41:28.762 18:51:57 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:41:28.762 18:51:57 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:41:28.762 18:51:57 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:28.762 18:51:57 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:41:28.762 18:51:57 keyring_file -- scripts/common.sh@345 -- # : 1 00:41:28.762 18:51:57 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:28.762 18:51:57 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:28.762 18:51:57 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:41:28.762 18:51:57 keyring_file -- scripts/common.sh@353 -- # local d=1 00:41:28.762 18:51:57 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:28.762 18:51:57 keyring_file -- scripts/common.sh@355 -- # echo 1 00:41:28.762 18:51:57 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:41:28.762 18:51:57 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:41:28.762 18:51:57 keyring_file -- scripts/common.sh@353 -- # local d=2 00:41:28.762 18:51:57 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:28.762 18:51:57 keyring_file -- scripts/common.sh@355 -- # echo 2 00:41:28.762 18:51:57 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:41:28.762 18:51:57 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:28.762 18:51:57 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:28.762 18:51:57 keyring_file -- scripts/common.sh@368 -- # return 0 00:41:28.762 18:51:57 keyring_file -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:28.762 18:51:57 keyring_file -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:41:28.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:28.762 --rc genhtml_branch_coverage=1 00:41:28.762 --rc genhtml_function_coverage=1 00:41:28.762 --rc genhtml_legend=1 00:41:28.762 --rc geninfo_all_blocks=1 00:41:28.762 --rc geninfo_unexecuted_blocks=1 00:41:28.762 00:41:28.762 ' 00:41:28.762 18:51:57 keyring_file -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:41:28.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:28.762 --rc genhtml_branch_coverage=1 00:41:28.762 --rc genhtml_function_coverage=1 00:41:28.762 --rc genhtml_legend=1 00:41:28.762 --rc geninfo_all_blocks=1 
00:41:28.762 --rc geninfo_unexecuted_blocks=1 00:41:28.762 00:41:28.762 ' 00:41:28.762 18:51:57 keyring_file -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:41:28.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:28.762 --rc genhtml_branch_coverage=1 00:41:28.762 --rc genhtml_function_coverage=1 00:41:28.762 --rc genhtml_legend=1 00:41:28.762 --rc geninfo_all_blocks=1 00:41:28.762 --rc geninfo_unexecuted_blocks=1 00:41:28.762 00:41:28.762 ' 00:41:28.762 18:51:57 keyring_file -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:41:28.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:28.762 --rc genhtml_branch_coverage=1 00:41:28.762 --rc genhtml_function_coverage=1 00:41:28.762 --rc genhtml_legend=1 00:41:28.762 --rc geninfo_all_blocks=1 00:41:28.762 --rc geninfo_unexecuted_blocks=1 00:41:28.762 00:41:28.762 ' 00:41:28.762 18:51:57 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:41:28.762 18:51:57 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:28.762 18:51:57 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:41:28.762 18:51:57 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:28.762 18:51:57 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:28.762 18:51:57 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:28.762 18:51:57 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:28.762 18:51:57 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:28.762 18:51:57 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:28.762 18:51:57 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:28.762 18:51:57 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:28.762 18:51:57 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:28.762 18:51:57 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:28.763 18:51:57 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:41:28.763 18:51:57 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:41:28.763 18:51:57 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:28.763 18:51:57 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:28.763 18:51:57 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:28.763 18:51:57 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:28.763 18:51:57 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:28.763 18:51:57 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:41:28.763 18:51:57 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:28.763 18:51:57 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:28.763 18:51:57 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:28.763 18:51:57 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:28.763 18:51:57 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:28.763 18:51:57 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:28.763 18:51:57 keyring_file -- paths/export.sh@5 -- # export PATH 00:41:28.763 18:51:57 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:28.763 18:51:57 keyring_file -- nvmf/common.sh@51 -- # : 0 00:41:28.763 18:51:57 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:28.763 18:51:57 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:28.763 18:51:57 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:28.763 18:51:57 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:28.763 18:51:57 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:28.763 18:51:57 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:28.763 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:28.763 18:51:57 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:28.763 18:51:57 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:28.763 18:51:57 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:28.763 18:51:57 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:41:28.763 18:51:57 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:41:28.763 18:51:57 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:41:28.763 18:51:57 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:41:28.763 18:51:57 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:41:28.763 18:51:57 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:41:28.763 18:51:57 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:41:28.763 18:51:57 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
00:41:28.763 18:51:57 keyring_file -- keyring/common.sh@17 -- # name=key0 00:41:28.763 18:51:57 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:41:28.763 18:51:57 keyring_file -- keyring/common.sh@17 -- # digest=0 00:41:28.763 18:51:57 keyring_file -- keyring/common.sh@18 -- # mktemp 00:41:29.021 18:51:57 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.B04wkXRmsk 00:41:29.021 18:51:57 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:41:29.021 18:51:57 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:41:29.021 18:51:57 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:41:29.021 18:51:57 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:41:29.021 18:51:57 keyring_file -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:41:29.021 18:51:57 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:41:29.021 18:51:57 keyring_file -- nvmf/common.sh@731 -- # python - 00:41:29.021 18:51:57 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.B04wkXRmsk 00:41:29.021 18:51:57 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.B04wkXRmsk 00:41:29.021 18:51:57 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.B04wkXRmsk 00:41:29.021 18:51:57 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:41:29.021 18:51:57 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:41:29.021 18:51:57 keyring_file -- keyring/common.sh@17 -- # name=key1 00:41:29.021 18:51:57 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:41:29.021 18:51:57 keyring_file -- keyring/common.sh@17 -- # digest=0 00:41:29.021 18:51:57 keyring_file -- keyring/common.sh@18 -- # mktemp 00:41:29.021 18:51:57 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.poeLf9PonO 00:41:29.021 18:51:57 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:41:29.021 18:51:57 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:41:29.021 18:51:57 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:41:29.022 18:51:57 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:41:29.022 18:51:57 keyring_file -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:41:29.022 18:51:57 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:41:29.022 18:51:57 keyring_file -- nvmf/common.sh@731 -- # python - 00:41:29.022 18:51:57 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.poeLf9PonO 00:41:29.022 18:51:57 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.poeLf9PonO 00:41:29.022 18:51:57 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.poeLf9PonO 00:41:29.022 18:51:57 keyring_file -- keyring/file.sh@30 -- # tgtpid=1427252 00:41:29.022 18:51:57 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:41:29.022 18:51:57 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1427252 00:41:29.022 18:51:57 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1427252 ']' 00:41:29.022 18:51:57 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:29.022 18:51:57 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:29.022 18:51:57 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:29.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:29.022 18:51:57 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:29.022 18:51:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:41:29.022 [2024-10-08 18:51:57.530729] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:41:29.022 [2024-10-08 18:51:57.530852] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1427252 ] 00:41:29.280 [2024-10-08 18:51:57.613510] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:29.280 [2024-10-08 18:51:57.761328] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:41:29.853 18:51:58 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:29.853 18:51:58 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:41:29.853 18:51:58 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:41:29.853 18:51:58 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:29.853 18:51:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:41:29.853 [2024-10-08 18:51:58.208760] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:29.853 null0 00:41:29.853 [2024-10-08 18:51:58.241369] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:41:29.853 [2024-10-08 18:51:58.242099] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:41:29.853 18:51:58 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:29.853 18:51:58 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:41:29.853 18:51:58 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:41:29.853 18:51:58 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:41:29.853 18:51:58 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:41:29.853 18:51:58 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:29.853 18:51:58 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:41:29.853 18:51:58 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:29.853 18:51:58 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:41:29.853 18:51:58 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:29.853 18:51:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:41:29.853 [2024-10-08 18:51:58.269417] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:41:29.853 request: 00:41:29.853 { 00:41:29.853 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:41:29.853 "secure_channel": false, 00:41:29.853 "listen_address": { 00:41:29.853 "trtype": "tcp", 00:41:29.853 "traddr": "127.0.0.1", 00:41:29.853 "trsvcid": "4420" 00:41:29.853 }, 00:41:29.853 "method": "nvmf_subsystem_add_listener", 00:41:29.853 "req_id": 1 00:41:29.853 } 00:41:29.853 Got JSON-RPC error response 00:41:29.853 response: 00:41:29.853 { 00:41:29.853 
"code": -32602, 00:41:29.853 "message": "Invalid parameters" 00:41:29.853 } 00:41:29.853 18:51:58 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:41:29.853 18:51:58 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:41:29.853 18:51:58 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:41:29.853 18:51:58 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:41:29.853 18:51:58 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:41:29.853 18:51:58 keyring_file -- keyring/file.sh@47 -- # bperfpid=1427384 00:41:29.853 18:51:58 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:41:29.853 18:51:58 keyring_file -- keyring/file.sh@49 -- # waitforlisten 1427384 /var/tmp/bperf.sock 00:41:29.853 18:51:58 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1427384 ']' 00:41:29.853 18:51:58 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:41:29.853 18:51:58 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:29.853 18:51:58 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:41:29.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:41:29.853 18:51:58 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:29.853 18:51:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:41:29.853 [2024-10-08 18:51:58.335726] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:41:29.853 [2024-10-08 18:51:58.335822] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1427384 ] 00:41:30.114 [2024-10-08 18:51:58.454408] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:30.375 [2024-10-08 18:51:58.682829] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:41:30.636 18:51:58 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:30.636 18:51:58 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:41:30.636 18:51:58 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.B04wkXRmsk 00:41:30.636 18:51:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.B04wkXRmsk 00:41:31.206 18:51:59 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.poeLf9PonO 00:41:31.206 18:51:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.poeLf9PonO 00:41:31.775 18:52:00 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:41:31.775 18:52:00 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:41:31.775 18:52:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:31.775 18:52:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:31.775 18:52:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:41:32.345 18:52:00 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.B04wkXRmsk == \/\t\m\p\/\t\m\p\.\B\0\4\w\k\X\R\m\s\k ]] 00:41:32.345 18:52:00 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:41:32.345 18:52:00 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:41:32.345 18:52:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:32.345 18:52:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:32.345 18:52:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:41:33.285 18:52:01 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.poeLf9PonO == \/\t\m\p\/\t\m\p\.\p\o\e\L\f\9\P\o\n\O ]] 00:41:33.285 18:52:01 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:41:33.285 18:52:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:33.285 18:52:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:33.285 18:52:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:33.285 18:52:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:33.285 18:52:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:33.854 18:52:02 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:41:33.854 18:52:02 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:41:33.854 18:52:02 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:41:33.854 18:52:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:33.854 18:52:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:33.854 18:52:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:33.854 18:52:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:41:34.113 18:52:02 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:41:34.113 18:52:02 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:34.113 18:52:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:34.373 [2024-10-08 18:52:02.812843] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:41:34.373 nvme0n1 00:41:34.633 18:52:02 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:41:34.633 18:52:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:34.633 18:52:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:34.633 18:52:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:34.633 18:52:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:34.633 18:52:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:34.893 18:52:03 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:41:34.893 18:52:03 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:41:34.893 18:52:03 keyring_file 
-- keyring/common.sh@12 -- # get_key key1 00:41:34.894 18:52:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:34.894 18:52:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:34.894 18:52:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:41:34.894 18:52:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:35.463 18:52:03 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:41:35.463 18:52:03 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:41:35.463 Running I/O for 1 seconds... 00:41:36.411 3870.00 IOPS, 15.12 MiB/s 00:41:36.411 Latency(us) 00:41:36.411 [2024-10-08T16:52:04.948Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:36.411 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:41:36.411 nvme0n1 : 1.01 3962.21 15.48 0.00 0.00 32175.71 8932.31 48545.19 00:41:36.411 [2024-10-08T16:52:04.948Z] =================================================================================================================== 00:41:36.411 [2024-10-08T16:52:04.948Z] Total : 3962.21 15.48 0.00 0.00 32175.71 8932.31 48545.19 00:41:36.411 { 00:41:36.411 "results": [ 00:41:36.411 { 00:41:36.411 "job": "nvme0n1", 00:41:36.411 "core_mask": "0x2", 00:41:36.411 "workload": "randrw", 00:41:36.411 "percentage": 50, 00:41:36.411 "status": "finished", 00:41:36.411 "queue_depth": 128, 00:41:36.411 "io_size": 4096, 00:41:36.411 "runtime": 1.009034, 00:41:36.411 "iops": 3962.20543609036, 00:41:36.411 "mibps": 15.477364984727968, 00:41:36.411 "io_failed": 0, 00:41:36.411 "io_timeout": 0, 00:41:36.411 "avg_latency_us": 32175.707456691307, 00:41:36.411 "min_latency_us": 8932.314074074075, 00:41:36.411 "max_latency_us": 48545.18518518518 00:41:36.411 } 00:41:36.411 ], 00:41:36.411 "core_count": 1 00:41:36.411 } 00:41:36.411 18:52:04 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:41:36.411 18:52:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:41:36.978 18:52:05 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:41:36.978 18:52:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:36.978 18:52:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:36.978 18:52:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:36.978 18:52:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:36.978 18:52:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:37.237 18:52:05 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:41:37.237 18:52:05 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:41:37.237 18:52:05 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:41:37.237 18:52:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:37.237 18:52:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:37.237 18:52:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:37.237 18:52:05 
keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:41:37.805 18:52:06 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:41:37.806 18:52:06 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:41:37.806 18:52:06 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:41:37.806 18:52:06 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:41:37.806 18:52:06 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:41:37.806 18:52:06 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:37.806 18:52:06 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:41:37.806 18:52:06 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:37.806 18:52:06 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:41:37.806 18:52:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:41:38.066 [2024-10-08 18:52:06.562030] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:41:38.066 [2024-10-08 18:52:06.562243] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x118c9f0 (107): Transport endpoint is not connected 00:41:38.066 [2024-10-08 18:52:06.563221] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x118c9f0 (9): Bad file descriptor 00:41:38.066 [2024-10-08 18:52:06.564214] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:41:38.066 [2024-10-08 18:52:06.564264] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:41:38.066 [2024-10-08 18:52:06.564299] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:41:38.066 [2024-10-08 18:52:06.564336] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
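The ERROR lines above are the expected outcome rather than a test failure: file.sh@70 wraps the attach in the NOT helper from autotest_common.sh, and the target side was evidently set up with the PSK carried by key0 (the earlier attach through key0 succeeded), so a handshake attempted with key1's different key material is expected to be rejected. The JSON-RPC request/response dump that follows records that same failing bdev_nvme_attach_controller call. Schematically, with NOT being the real helper traced at common/autotest_common.sh@650 onwards and the one-line paraphrase in the comment only a reading of what it does:

# the assertion made by file.sh@70: this attach must fail
NOT scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
    -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
    -q nqn.2016-06.io.spdk:host0 --psk key1
# roughly: NOT() { "$@" && return 1 || return 0; }  -- success of the wrapped
# command would fail the test, while an error (es=1 in the trace) makes it pass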
00:41:38.066 request: 00:41:38.066 { 00:41:38.066 "name": "nvme0", 00:41:38.066 "trtype": "tcp", 00:41:38.066 "traddr": "127.0.0.1", 00:41:38.066 "adrfam": "ipv4", 00:41:38.066 "trsvcid": "4420", 00:41:38.066 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:38.066 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:38.066 "prchk_reftag": false, 00:41:38.066 "prchk_guard": false, 00:41:38.066 "hdgst": false, 00:41:38.066 "ddgst": false, 00:41:38.066 "psk": "key1", 00:41:38.066 "allow_unrecognized_csi": false, 00:41:38.066 "method": "bdev_nvme_attach_controller", 00:41:38.066 "req_id": 1 00:41:38.066 } 00:41:38.066 Got JSON-RPC error response 00:41:38.066 response: 00:41:38.066 { 00:41:38.066 "code": -5, 00:41:38.066 "message": "Input/output error" 00:41:38.066 } 00:41:38.066 18:52:06 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:41:38.066 18:52:06 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:41:38.066 18:52:06 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:41:38.066 18:52:06 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:41:38.066 18:52:06 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:41:38.066 18:52:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:38.066 18:52:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:38.066 18:52:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:38.066 18:52:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:38.066 18:52:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:39.006 18:52:07 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:41:39.006 18:52:07 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:41:39.006 18:52:07 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:41:39.006 18:52:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:39.006 18:52:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:39.006 18:52:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:39.006 18:52:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:41:39.577 18:52:07 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:41:39.577 18:52:07 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:41:39.577 18:52:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:41:39.837 18:52:08 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:41:39.837 18:52:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:41:40.499 18:52:08 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:41:40.499 18:52:08 keyring_file -- keyring/file.sh@78 -- # jq length 00:41:40.499 18:52:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:40.759 18:52:09 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:41:40.759 18:52:09 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.B04wkXRmsk 00:41:41.019 18:52:09 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.B04wkXRmsk 00:41:41.019 18:52:09 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:41:41.019 18:52:09 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.B04wkXRmsk 00:41:41.019 18:52:09 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:41:41.019 18:52:09 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:41.019 18:52:09 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:41:41.019 18:52:09 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:41.019 18:52:09 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.B04wkXRmsk 00:41:41.019 18:52:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.B04wkXRmsk 00:41:41.279 [2024-10-08 18:52:09.614953] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.B04wkXRmsk': 0100660 00:41:41.279 [2024-10-08 18:52:09.615036] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:41:41.279 request: 00:41:41.279 { 00:41:41.279 "name": "key0", 00:41:41.279 "path": "/tmp/tmp.B04wkXRmsk", 00:41:41.279 "method": "keyring_file_add_key", 00:41:41.279 "req_id": 1 00:41:41.279 } 00:41:41.279 Got JSON-RPC error response 00:41:41.279 response: 00:41:41.279 { 00:41:41.279 "code": -1, 00:41:41.279 "message": "Operation not permitted" 00:41:41.279 } 00:41:41.279 18:52:09 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:41:41.279 18:52:09 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:41:41.279 18:52:09 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:41:41.279 18:52:09 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:41:41.279 18:52:09 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.B04wkXRmsk 00:41:41.279 18:52:09 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.B04wkXRmsk 00:41:41.279 18:52:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.B04wkXRmsk 00:41:41.540 18:52:10 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.B04wkXRmsk 00:41:41.540 18:52:10 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:41:41.540 18:52:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:41.540 18:52:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:41.540 18:52:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:41.540 18:52:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:41.540 18:52:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:42.107 18:52:10 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:41:42.107 18:52:10 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:42.107 18:52:10 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:41:42.107 18:52:10 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:42.107 18:52:10 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:41:42.107 18:52:10 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:42.107 18:52:10 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:41:42.107 18:52:10 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:42.107 18:52:10 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:42.107 18:52:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:42.673 [2024-10-08 18:52:11.026767] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.B04wkXRmsk': No such file or directory 00:41:42.673 [2024-10-08 18:52:11.026823] nvme_tcp.c:2609:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:41:42.673 [2024-10-08 18:52:11.026851] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:41:42.673 [2024-10-08 18:52:11.026866] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:41:42.673 [2024-10-08 18:52:11.026882] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:41:42.673 [2024-10-08 18:52:11.026896] bdev_nvme.c:6438:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:41:42.673 request: 00:41:42.673 { 00:41:42.673 "name": "nvme0", 00:41:42.673 "trtype": "tcp", 00:41:42.673 "traddr": "127.0.0.1", 00:41:42.673 "adrfam": "ipv4", 00:41:42.673 "trsvcid": "4420", 00:41:42.673 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:42.673 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:42.673 "prchk_reftag": false, 00:41:42.673 "prchk_guard": false, 00:41:42.673 "hdgst": false, 00:41:42.673 "ddgst": false, 00:41:42.673 "psk": "key0", 00:41:42.673 "allow_unrecognized_csi": false, 00:41:42.673 "method": "bdev_nvme_attach_controller", 00:41:42.673 "req_id": 1 00:41:42.673 } 00:41:42.673 Got JSON-RPC error response 00:41:42.673 response: 00:41:42.673 { 00:41:42.673 "code": -19, 00:41:42.673 "message": "No such device" 00:41:42.673 } 00:41:42.673 18:52:11 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:41:42.673 18:52:11 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:41:42.673 18:52:11 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:41:42.673 18:52:11 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:41:42.673 18:52:11 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:41:42.673 18:52:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:41:42.931 18:52:11 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:41:42.931 18:52:11 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:41:42.931 18:52:11 keyring_file -- keyring/common.sh@17 -- # name=key0 00:41:42.931 18:52:11 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:41:42.931 18:52:11 keyring_file -- keyring/common.sh@17 -- # digest=0 00:41:42.931 18:52:11 keyring_file -- keyring/common.sh@18 -- # mktemp 00:41:42.931 18:52:11 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.iihJRcgvDq 00:41:42.931 18:52:11 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:41:42.931 18:52:11 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:41:42.931 18:52:11 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:41:42.931 18:52:11 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:41:42.931 18:52:11 keyring_file -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:41:42.931 18:52:11 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:41:42.931 18:52:11 keyring_file -- nvmf/common.sh@731 -- # python - 00:41:43.191 18:52:11 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.iihJRcgvDq 00:41:43.191 18:52:11 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.iihJRcgvDq 00:41:43.191 18:52:11 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.iihJRcgvDq 00:41:43.191 18:52:11 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.iihJRcgvDq 00:41:43.191 18:52:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.iihJRcgvDq 00:41:43.762 18:52:12 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:43.762 18:52:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:44.332 nvme0n1 00:41:44.332 18:52:12 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:41:44.332 18:52:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:44.332 18:52:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:44.332 18:52:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:44.332 18:52:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:44.332 18:52:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:44.591 18:52:13 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:41:44.591 18:52:13 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:41:44.591 18:52:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:41:45.157 18:52:13 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:41:45.157 18:52:13 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:41:45.157 18:52:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:45.157 18:52:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:41:45.157 18:52:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:45.725 18:52:14 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:41:45.725 18:52:14 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:41:45.725 18:52:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:45.725 18:52:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:45.725 18:52:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:45.725 18:52:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:45.725 18:52:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:46.293 18:52:14 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:41:46.293 18:52:14 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:41:46.293 18:52:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:41:47.233 18:52:15 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:41:47.233 18:52:15 keyring_file -- keyring/file.sh@105 -- # jq length 00:41:47.233 18:52:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:47.493 18:52:15 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:41:47.493 18:52:15 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.iihJRcgvDq 00:41:47.493 18:52:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.iihJRcgvDq 00:41:48.062 18:52:16 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.poeLf9PonO 00:41:48.062 18:52:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.poeLf9PonO 00:41:48.632 18:52:17 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:48.632 18:52:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:49.603 nvme0n1 00:41:49.603 18:52:17 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:41:49.603 18:52:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:41:49.889 18:52:18 keyring_file -- keyring/file.sh@113 -- # config='{ 00:41:49.889 "subsystems": [ 00:41:49.889 { 00:41:49.889 "subsystem": "keyring", 00:41:49.889 "config": [ 00:41:49.889 { 00:41:49.889 "method": "keyring_file_add_key", 00:41:49.889 "params": { 00:41:49.889 "name": "key0", 00:41:49.889 "path": "/tmp/tmp.iihJRcgvDq" 00:41:49.889 } 00:41:49.890 }, 00:41:49.890 { 00:41:49.890 "method": "keyring_file_add_key", 00:41:49.890 "params": { 00:41:49.890 "name": "key1", 00:41:49.890 "path": "/tmp/tmp.poeLf9PonO" 00:41:49.890 } 00:41:49.890 } 00:41:49.890 ] 
00:41:49.890 }, 00:41:49.890 { 00:41:49.890 "subsystem": "iobuf", 00:41:49.890 "config": [ 00:41:49.890 { 00:41:49.890 "method": "iobuf_set_options", 00:41:49.890 "params": { 00:41:49.890 "small_pool_count": 8192, 00:41:49.890 "large_pool_count": 1024, 00:41:49.890 "small_bufsize": 8192, 00:41:49.890 "large_bufsize": 135168 00:41:49.890 } 00:41:49.890 } 00:41:49.890 ] 00:41:49.890 }, 00:41:49.890 { 00:41:49.890 "subsystem": "sock", 00:41:49.890 "config": [ 00:41:49.890 { 00:41:49.890 "method": "sock_set_default_impl", 00:41:49.890 "params": { 00:41:49.890 "impl_name": "posix" 00:41:49.890 } 00:41:49.890 }, 00:41:49.890 { 00:41:49.890 "method": "sock_impl_set_options", 00:41:49.890 "params": { 00:41:49.890 "impl_name": "ssl", 00:41:49.890 "recv_buf_size": 4096, 00:41:49.890 "send_buf_size": 4096, 00:41:49.890 "enable_recv_pipe": true, 00:41:49.890 "enable_quickack": false, 00:41:49.890 "enable_placement_id": 0, 00:41:49.890 "enable_zerocopy_send_server": true, 00:41:49.890 "enable_zerocopy_send_client": false, 00:41:49.890 "zerocopy_threshold": 0, 00:41:49.890 "tls_version": 0, 00:41:49.890 "enable_ktls": false 00:41:49.890 } 00:41:49.890 }, 00:41:49.890 { 00:41:49.890 "method": "sock_impl_set_options", 00:41:49.890 "params": { 00:41:49.890 "impl_name": "posix", 00:41:49.890 "recv_buf_size": 2097152, 00:41:49.890 "send_buf_size": 2097152, 00:41:49.890 "enable_recv_pipe": true, 00:41:49.890 "enable_quickack": false, 00:41:49.890 "enable_placement_id": 0, 00:41:49.890 "enable_zerocopy_send_server": true, 00:41:49.890 "enable_zerocopy_send_client": false, 00:41:49.890 "zerocopy_threshold": 0, 00:41:49.890 "tls_version": 0, 00:41:49.890 "enable_ktls": false 00:41:49.890 } 00:41:49.890 } 00:41:49.890 ] 00:41:49.890 }, 00:41:49.890 { 00:41:49.890 "subsystem": "vmd", 00:41:49.890 "config": [] 00:41:49.890 }, 00:41:49.890 { 00:41:49.890 "subsystem": "accel", 00:41:49.890 "config": [ 00:41:49.890 { 00:41:49.890 "method": "accel_set_options", 00:41:49.890 "params": { 00:41:49.890 "small_cache_size": 128, 00:41:49.890 "large_cache_size": 16, 00:41:49.890 "task_count": 2048, 00:41:49.890 "sequence_count": 2048, 00:41:49.890 "buf_count": 2048 00:41:49.890 } 00:41:49.890 } 00:41:49.890 ] 00:41:49.890 }, 00:41:49.890 { 00:41:49.890 "subsystem": "bdev", 00:41:49.890 "config": [ 00:41:49.890 { 00:41:49.890 "method": "bdev_set_options", 00:41:49.890 "params": { 00:41:49.890 "bdev_io_pool_size": 65535, 00:41:49.890 "bdev_io_cache_size": 256, 00:41:49.890 "bdev_auto_examine": true, 00:41:49.890 "iobuf_small_cache_size": 128, 00:41:49.890 "iobuf_large_cache_size": 16 00:41:49.890 } 00:41:49.890 }, 00:41:49.890 { 00:41:49.890 "method": "bdev_raid_set_options", 00:41:49.890 "params": { 00:41:49.890 "process_window_size_kb": 1024, 00:41:49.890 "process_max_bandwidth_mb_sec": 0 00:41:49.890 } 00:41:49.890 }, 00:41:49.890 { 00:41:49.890 "method": "bdev_iscsi_set_options", 00:41:49.890 "params": { 00:41:49.890 "timeout_sec": 30 00:41:49.890 } 00:41:49.890 }, 00:41:49.890 { 00:41:49.890 "method": "bdev_nvme_set_options", 00:41:49.890 "params": { 00:41:49.890 "action_on_timeout": "none", 00:41:49.890 "timeout_us": 0, 00:41:49.890 "timeout_admin_us": 0, 00:41:49.890 "keep_alive_timeout_ms": 10000, 00:41:49.890 "arbitration_burst": 0, 00:41:49.890 "low_priority_weight": 0, 00:41:49.890 "medium_priority_weight": 0, 00:41:49.890 "high_priority_weight": 0, 00:41:49.890 "nvme_adminq_poll_period_us": 10000, 00:41:49.890 "nvme_ioq_poll_period_us": 0, 00:41:49.890 "io_queue_requests": 512, 00:41:49.890 "delay_cmd_submit": true, 
00:41:49.890 "transport_retry_count": 4, 00:41:49.890 "bdev_retry_count": 3, 00:41:49.890 "transport_ack_timeout": 0, 00:41:49.890 "ctrlr_loss_timeout_sec": 0, 00:41:49.890 "reconnect_delay_sec": 0, 00:41:49.890 "fast_io_fail_timeout_sec": 0, 00:41:49.890 "disable_auto_failback": false, 00:41:49.890 "generate_uuids": false, 00:41:49.890 "transport_tos": 0, 00:41:49.890 "nvme_error_stat": false, 00:41:49.890 "rdma_srq_size": 0, 00:41:49.890 "io_path_stat": false, 00:41:49.890 "allow_accel_sequence": false, 00:41:49.890 "rdma_max_cq_size": 0, 00:41:49.890 "rdma_cm_event_timeout_ms": 0, 00:41:49.890 "dhchap_digests": [ 00:41:49.890 "sha256", 00:41:49.890 "sha384", 00:41:49.890 "sha512" 00:41:49.890 ], 00:41:49.891 "dhchap_dhgroups": [ 00:41:49.891 "null", 00:41:49.891 "ffdhe2048", 00:41:49.891 "ffdhe3072", 00:41:49.891 "ffdhe4096", 00:41:49.891 "ffdhe6144", 00:41:49.891 "ffdhe8192" 00:41:49.891 ] 00:41:49.891 } 00:41:49.891 }, 00:41:49.891 { 00:41:49.891 "method": "bdev_nvme_attach_controller", 00:41:49.891 "params": { 00:41:49.891 "name": "nvme0", 00:41:49.891 "trtype": "TCP", 00:41:49.891 "adrfam": "IPv4", 00:41:49.891 "traddr": "127.0.0.1", 00:41:49.891 "trsvcid": "4420", 00:41:49.891 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:49.891 "prchk_reftag": false, 00:41:49.891 "prchk_guard": false, 00:41:49.891 "ctrlr_loss_timeout_sec": 0, 00:41:49.891 "reconnect_delay_sec": 0, 00:41:49.891 "fast_io_fail_timeout_sec": 0, 00:41:49.891 "psk": "key0", 00:41:49.891 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:49.891 "hdgst": false, 00:41:49.891 "ddgst": false, 00:41:49.891 "multipath": "multipath" 00:41:49.891 } 00:41:49.891 }, 00:41:49.891 { 00:41:49.891 "method": "bdev_nvme_set_hotplug", 00:41:49.891 "params": { 00:41:49.891 "period_us": 100000, 00:41:49.891 "enable": false 00:41:49.891 } 00:41:49.891 }, 00:41:49.891 { 00:41:49.891 "method": "bdev_wait_for_examine" 00:41:49.891 } 00:41:49.891 ] 00:41:49.891 }, 00:41:49.891 { 00:41:49.891 "subsystem": "nbd", 00:41:49.891 "config": [] 00:41:49.891 } 00:41:49.891 ] 00:41:49.891 }' 00:41:49.891 18:52:18 keyring_file -- keyring/file.sh@115 -- # killprocess 1427384 00:41:49.891 18:52:18 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1427384 ']' 00:41:49.891 18:52:18 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1427384 00:41:49.891 18:52:18 keyring_file -- common/autotest_common.sh@955 -- # uname 00:41:49.891 18:52:18 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:49.891 18:52:18 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1427384 00:41:49.891 18:52:18 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:41:49.891 18:52:18 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:41:49.891 18:52:18 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1427384' 00:41:49.891 killing process with pid 1427384 00:41:49.891 18:52:18 keyring_file -- common/autotest_common.sh@969 -- # kill 1427384 00:41:49.891 Received shutdown signal, test time was about 1.000000 seconds 00:41:49.891 00:41:49.891 Latency(us) 00:41:49.891 [2024-10-08T16:52:18.428Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:49.891 [2024-10-08T16:52:18.428Z] =================================================================================================================== 00:41:49.891 [2024-10-08T16:52:18.428Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:41:49.891 18:52:18 keyring_file -- 
common/autotest_common.sh@974 -- # wait 1427384 00:41:50.460 18:52:18 keyring_file -- keyring/file.sh@118 -- # bperfpid=1429786 00:41:50.460 18:52:18 keyring_file -- keyring/file.sh@120 -- # waitforlisten 1429786 /var/tmp/bperf.sock 00:41:50.460 18:52:18 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1429786 ']' 00:41:50.461 18:52:18 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:41:50.461 18:52:18 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:41:50.461 18:52:18 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:50.461 18:52:18 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:41:50.461 "subsystems": [ 00:41:50.461 { 00:41:50.461 "subsystem": "keyring", 00:41:50.461 "config": [ 00:41:50.461 { 00:41:50.461 "method": "keyring_file_add_key", 00:41:50.461 "params": { 00:41:50.461 "name": "key0", 00:41:50.461 "path": "/tmp/tmp.iihJRcgvDq" 00:41:50.461 } 00:41:50.461 }, 00:41:50.461 { 00:41:50.461 "method": "keyring_file_add_key", 00:41:50.461 "params": { 00:41:50.461 "name": "key1", 00:41:50.461 "path": "/tmp/tmp.poeLf9PonO" 00:41:50.461 } 00:41:50.461 } 00:41:50.461 ] 00:41:50.461 }, 00:41:50.461 { 00:41:50.461 "subsystem": "iobuf", 00:41:50.461 "config": [ 00:41:50.461 { 00:41:50.461 "method": "iobuf_set_options", 00:41:50.461 "params": { 00:41:50.461 "small_pool_count": 8192, 00:41:50.461 "large_pool_count": 1024, 00:41:50.461 "small_bufsize": 8192, 00:41:50.461 "large_bufsize": 135168 00:41:50.461 } 00:41:50.461 } 00:41:50.461 ] 00:41:50.461 }, 00:41:50.461 { 00:41:50.461 "subsystem": "sock", 00:41:50.461 "config": [ 00:41:50.461 { 00:41:50.461 "method": "sock_set_default_impl", 00:41:50.461 "params": { 00:41:50.461 "impl_name": "posix" 00:41:50.461 } 00:41:50.461 }, 00:41:50.461 { 00:41:50.461 "method": "sock_impl_set_options", 00:41:50.461 "params": { 00:41:50.461 "impl_name": "ssl", 00:41:50.461 "recv_buf_size": 4096, 00:41:50.461 "send_buf_size": 4096, 00:41:50.461 "enable_recv_pipe": true, 00:41:50.461 "enable_quickack": false, 00:41:50.461 "enable_placement_id": 0, 00:41:50.461 "enable_zerocopy_send_server": true, 00:41:50.461 "enable_zerocopy_send_client": false, 00:41:50.461 "zerocopy_threshold": 0, 00:41:50.461 "tls_version": 0, 00:41:50.461 "enable_ktls": false 00:41:50.461 } 00:41:50.461 }, 00:41:50.461 { 00:41:50.461 "method": "sock_impl_set_options", 00:41:50.461 "params": { 00:41:50.461 "impl_name": "posix", 00:41:50.461 "recv_buf_size": 2097152, 00:41:50.461 "send_buf_size": 2097152, 00:41:50.461 "enable_recv_pipe": true, 00:41:50.461 "enable_quickack": false, 00:41:50.461 "enable_placement_id": 0, 00:41:50.461 "enable_zerocopy_send_server": true, 00:41:50.461 "enable_zerocopy_send_client": false, 00:41:50.461 "zerocopy_threshold": 0, 00:41:50.461 "tls_version": 0, 00:41:50.461 "enable_ktls": false 00:41:50.461 } 00:41:50.461 } 00:41:50.461 ] 00:41:50.461 }, 00:41:50.461 { 00:41:50.461 "subsystem": "vmd", 00:41:50.461 "config": [] 00:41:50.461 }, 00:41:50.461 { 00:41:50.461 "subsystem": "accel", 00:41:50.461 "config": [ 00:41:50.461 { 00:41:50.461 "method": "accel_set_options", 00:41:50.461 "params": { 00:41:50.461 "small_cache_size": 128, 00:41:50.461 "large_cache_size": 16, 00:41:50.461 "task_count": 2048, 00:41:50.461 "sequence_count": 2048, 00:41:50.461 "buf_count": 2048 00:41:50.461 } 00:41:50.461 } 00:41:50.461 ] 00:41:50.461 }, 
00:41:50.461 { 00:41:50.461 "subsystem": "bdev", 00:41:50.461 "config": [ 00:41:50.461 { 00:41:50.461 "method": "bdev_set_options", 00:41:50.461 "params": { 00:41:50.461 "bdev_io_pool_size": 65535, 00:41:50.461 "bdev_io_cache_size": 256, 00:41:50.461 "bdev_auto_examine": true, 00:41:50.461 "iobuf_small_cache_size": 128, 00:41:50.461 "iobuf_large_cache_size": 16 00:41:50.461 } 00:41:50.461 }, 00:41:50.461 { 00:41:50.461 "method": "bdev_raid_set_options", 00:41:50.461 "params": { 00:41:50.461 "process_window_size_kb": 1024, 00:41:50.461 "process_max_bandwidth_mb_sec": 0 00:41:50.461 } 00:41:50.461 }, 00:41:50.461 { 00:41:50.461 "method": "bdev_iscsi_set_options", 00:41:50.461 "params": { 00:41:50.461 "timeout_sec": 30 00:41:50.461 } 00:41:50.461 }, 00:41:50.461 { 00:41:50.461 "method": "bdev_nvme_set_options", 00:41:50.461 "params": { 00:41:50.461 "action_on_timeout": "none", 00:41:50.461 "timeout_us": 0, 00:41:50.461 "timeout_admin_us": 0, 00:41:50.461 "keep_alive_timeout_ms": 10000, 00:41:50.461 "arbitration_burst": 0, 00:41:50.461 "low_priority_weight": 0, 00:41:50.461 "medium_priority_weight": 0, 00:41:50.461 "high_priority_weight": 0, 00:41:50.461 "nvme_adminq_poll_period_us": 10000, 00:41:50.461 "nvme_ioq_poll_period_us": 0, 00:41:50.461 "io_queue_requests": 512, 00:41:50.461 "delay_cmd_submit": true, 00:41:50.461 "transport_retry_count": 4, 00:41:50.461 "bdev_retry_count": 3, 00:41:50.461 "transport_ack_timeout": 0, 00:41:50.461 "ctrlr_loss_timeout_sec": 0, 00:41:50.461 "reconnect_delay_sec": 0, 00:41:50.461 "fast_io_fail_timeout_sec": 0, 00:41:50.461 "disable_auto_failback": false, 00:41:50.461 "generate_uuids": false, 00:41:50.461 "transport_tos": 0, 00:41:50.461 "nvme_error_stat": false, 00:41:50.461 "rdma_srq_size": 0, 00:41:50.461 "io_path_stat": false, 00:41:50.461 "allow_accel_sequence": false, 00:41:50.461 "rdma_max_cq_size": 0, 00:41:50.461 "rdma_cm_event_timeout_ms": 0, 00:41:50.461 "dhchap_digests": [ 00:41:50.461 "sha256", 00:41:50.461 "sha384", 00:41:50.461 "sha512" 00:41:50.461 ], 00:41:50.461 "dhchap_dhgroups": [ 00:41:50.461 "null", 00:41:50.461 "ffdhe2048", 00:41:50.461 "ffdhe3072", 00:41:50.461 "ffdhe4096", 00:41:50.461 "ffdhe6144", 00:41:50.461 "ffdhe8192" 00:41:50.461 ] 00:41:50.461 } 00:41:50.461 }, 00:41:50.461 { 00:41:50.461 "method": "bdev_nvme_attach_controller", 00:41:50.461 "params": { 00:41:50.461 "name": "nvme0", 00:41:50.461 "trtype": "TCP", 00:41:50.461 "adrfam": "IPv4", 00:41:50.461 "traddr": "127.0.0.1", 00:41:50.461 "trsvcid": "4420", 00:41:50.461 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:50.461 "prchk_reftag": false, 00:41:50.461 "prchk_guard": false, 00:41:50.461 "ctrlr_loss_timeout_sec": 0, 00:41:50.461 "reconnect_delay_sec": 0, 00:41:50.461 "fast_io_fail_timeout_sec": 0, 00:41:50.461 "psk": "key0", 00:41:50.461 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:50.461 "hdgst": false, 00:41:50.461 "ddgst": false, 00:41:50.461 "multipath": "multipath" 00:41:50.461 } 00:41:50.461 }, 00:41:50.461 { 00:41:50.461 "method": "bdev_nvme_set_hotplug", 00:41:50.461 "params": { 00:41:50.461 "period_us": 100000, 00:41:50.461 "enable": false 00:41:50.461 } 00:41:50.461 }, 00:41:50.461 { 00:41:50.461 "method": "bdev_wait_for_examine" 00:41:50.461 } 00:41:50.461 ] 00:41:50.461 }, 00:41:50.461 { 00:41:50.461 "subsystem": "nbd", 00:41:50.461 "config": [] 00:41:50.461 } 00:41:50.461 ] 00:41:50.461 }' 00:41:50.461 18:52:18 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:41:50.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:41:50.461 18:52:18 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:50.461 18:52:18 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:41:50.461 [2024-10-08 18:52:18.780084] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 00:41:50.461 [2024-10-08 18:52:18.780193] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1429786 ] 00:41:50.461 [2024-10-08 18:52:18.893087] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:50.721 [2024-10-08 18:52:19.110506] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:41:50.982 [2024-10-08 18:52:19.381882] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:41:51.552 18:52:19 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:51.552 18:52:19 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:41:51.552 18:52:19 keyring_file -- keyring/file.sh@121 -- # jq length 00:41:51.552 18:52:19 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:41:51.552 18:52:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:52.122 18:52:20 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:41:52.123 18:52:20 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:41:52.381 18:52:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:52.382 18:52:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:52.382 18:52:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:52.382 18:52:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:52.382 18:52:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:52.641 18:52:21 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:41:52.641 18:52:21 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:41:52.641 18:52:21 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:41:52.641 18:52:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:52.641 18:52:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:52.641 18:52:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:52.641 18:52:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:41:53.210 18:52:21 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:41:53.210 18:52:21 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:41:53.210 18:52:21 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:41:53.210 18:52:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:41:53.779 18:52:22 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:41:53.779 18:52:22 keyring_file -- keyring/file.sh@1 -- # cleanup 00:41:53.779 18:52:22 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.iihJRcgvDq 
/tmp/tmp.poeLf9PonO 00:41:53.779 18:52:22 keyring_file -- keyring/file.sh@20 -- # killprocess 1429786 00:41:53.779 18:52:22 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1429786 ']' 00:41:53.779 18:52:22 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1429786 00:41:53.779 18:52:22 keyring_file -- common/autotest_common.sh@955 -- # uname 00:41:53.779 18:52:22 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:53.779 18:52:22 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1429786 00:41:53.779 18:52:22 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:41:53.779 18:52:22 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:41:53.779 18:52:22 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1429786' 00:41:53.779 killing process with pid 1429786 00:41:53.779 18:52:22 keyring_file -- common/autotest_common.sh@969 -- # kill 1429786 00:41:53.779 Received shutdown signal, test time was about 1.000000 seconds 00:41:53.779 00:41:53.779 Latency(us) 00:41:53.779 [2024-10-08T16:52:22.316Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:53.779 [2024-10-08T16:52:22.316Z] =================================================================================================================== 00:41:53.779 [2024-10-08T16:52:22.316Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:41:53.779 18:52:22 keyring_file -- common/autotest_common.sh@974 -- # wait 1429786 00:41:54.039 18:52:22 keyring_file -- keyring/file.sh@21 -- # killprocess 1427252 00:41:54.039 18:52:22 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1427252 ']' 00:41:54.039 18:52:22 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1427252 00:41:54.039 18:52:22 keyring_file -- common/autotest_common.sh@955 -- # uname 00:41:54.039 18:52:22 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:54.039 18:52:22 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1427252 00:41:54.039 18:52:22 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:41:54.039 18:52:22 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:41:54.039 18:52:22 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1427252' 00:41:54.039 killing process with pid 1427252 00:41:54.039 18:52:22 keyring_file -- common/autotest_common.sh@969 -- # kill 1427252 00:41:54.039 18:52:22 keyring_file -- common/autotest_common.sh@974 -- # wait 1427252 00:41:54.610 00:41:54.610 real 0m26.109s 00:41:54.610 user 1m8.034s 00:41:54.610 sys 0m5.118s 00:41:54.610 18:52:23 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:54.610 18:52:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:41:54.610 ************************************ 00:41:54.610 END TEST keyring_file 00:41:54.610 ************************************ 00:41:54.870 18:52:23 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:41:54.870 18:52:23 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:41:54.870 18:52:23 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:41:54.870 18:52:23 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:54.870 18:52:23 -- common/autotest_common.sh@10 -- # set +x 
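Note: the keyring_file pass above exercises SPDK's file-based TLS PSK keyring over NVMe/TCP: a PSK in the interchange format is written to a temp file, registered as a named key over the bdevperf RPC socket, and then referenced by name when attaching the NVMe/TCP controller; the negative cases probe attaching after the backing file disappears and removing a key that is still referenced. A minimal sketch of that sequence follows for reference. It assumes a bdevperf instance listening on /var/tmp/bperf.sock and an NVMe/TCP target on 127.0.0.1:4420, hard-codes the interchange-format PSK the test derives from 00112233445566778899aabbccddeeff (digest 0) via format_interchange_psk, and abbreviates the full scripts/rpc.py path used throughout the log as rpc.py.

    # write the PSK in interchange format (NVMeTLSkey-1:00:<base64 key + CRC>:) and restrict access
    psk_path=$(mktemp)
    echo "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" > "$psk_path"
    chmod 0600 "$psk_path"
    # register the file as key0 and attach the controller by key name
    rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$psk_path"
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
    # deleting the backing file before attach is what produced the -19 "No such device" response
    # earlier in this log; removing key0 while nvme0 still references it only marks it removed,
    # which the refcnt checks above verify
    rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0

The keyring_linux test that starts below covers the same attach path but sources the PSK from the kernel session keyring instead of a file, e.g. keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:..." @s followed by --psk :spdk-test:key0, as traced later in the output.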
00:41:54.870 ************************************ 00:41:54.870 START TEST keyring_linux 00:41:54.870 ************************************ 00:41:54.870 18:52:23 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:41:54.870 Joined session keyring: 531455486 00:41:54.870 * Looking for test storage... 00:41:54.870 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:41:54.870 18:52:23 keyring_linux -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:41:54.870 18:52:23 keyring_linux -- common/autotest_common.sh@1681 -- # lcov --version 00:41:54.870 18:52:23 keyring_linux -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:41:54.870 18:52:23 keyring_linux -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:41:54.870 18:52:23 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:54.870 18:52:23 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:54.870 18:52:23 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:54.870 18:52:23 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:41:54.870 18:52:23 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:41:54.870 18:52:23 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:41:54.870 18:52:23 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:41:54.870 18:52:23 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:41:54.870 18:52:23 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:41:54.870 18:52:23 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:41:54.870 18:52:23 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:54.870 18:52:23 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:41:54.870 18:52:23 keyring_linux -- scripts/common.sh@345 -- # : 1 00:41:54.870 18:52:23 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:54.870 18:52:23 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:54.870 18:52:23 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:41:54.870 18:52:23 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:41:54.870 18:52:23 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:54.870 18:52:23 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:41:54.870 18:52:23 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:41:54.870 18:52:23 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:41:54.870 18:52:23 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:41:54.870 18:52:23 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:54.870 18:52:23 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:41:54.870 18:52:23 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:41:54.870 18:52:23 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:54.870 18:52:23 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:54.870 18:52:23 keyring_linux -- scripts/common.sh@368 -- # return 0 00:41:54.870 18:52:23 keyring_linux -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:54.870 18:52:23 keyring_linux -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:41:54.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:54.870 --rc genhtml_branch_coverage=1 00:41:54.870 --rc genhtml_function_coverage=1 00:41:54.870 --rc genhtml_legend=1 00:41:54.870 --rc geninfo_all_blocks=1 00:41:54.870 --rc geninfo_unexecuted_blocks=1 00:41:54.870 00:41:54.870 ' 00:41:54.870 18:52:23 keyring_linux -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:41:54.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:54.870 --rc genhtml_branch_coverage=1 00:41:54.870 --rc genhtml_function_coverage=1 00:41:54.870 --rc genhtml_legend=1 00:41:54.870 --rc geninfo_all_blocks=1 00:41:54.870 --rc geninfo_unexecuted_blocks=1 00:41:54.870 00:41:54.870 ' 00:41:54.870 18:52:23 keyring_linux -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:41:54.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:54.870 --rc genhtml_branch_coverage=1 00:41:54.870 --rc genhtml_function_coverage=1 00:41:54.870 --rc genhtml_legend=1 00:41:54.870 --rc geninfo_all_blocks=1 00:41:54.870 --rc geninfo_unexecuted_blocks=1 00:41:54.870 00:41:54.870 ' 00:41:54.870 18:52:23 keyring_linux -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:41:54.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:54.870 --rc genhtml_branch_coverage=1 00:41:54.870 --rc genhtml_function_coverage=1 00:41:54.870 --rc genhtml_legend=1 00:41:54.870 --rc geninfo_all_blocks=1 00:41:54.870 --rc geninfo_unexecuted_blocks=1 00:41:54.870 00:41:54.870 ' 00:41:54.870 18:52:23 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:41:54.870 18:52:23 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:54.870 18:52:23 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:41:54.870 18:52:23 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:54.870 18:52:23 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:54.870 18:52:23 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:54.870 18:52:23 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:54.870 18:52:23 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:41:54.871 18:52:23 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:54.871 18:52:23 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:54.871 18:52:23 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:54.871 18:52:23 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:54.871 18:52:23 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:55.130 18:52:23 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:41:55.130 18:52:23 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:41:55.130 18:52:23 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:55.130 18:52:23 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:55.130 18:52:23 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:55.130 18:52:23 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:55.130 18:52:23 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:55.130 18:52:23 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:41:55.130 18:52:23 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:55.130 18:52:23 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:55.130 18:52:23 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:55.130 18:52:23 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:55.130 18:52:23 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:55.130 18:52:23 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:55.130 18:52:23 keyring_linux -- paths/export.sh@5 -- # export PATH 00:41:55.130 18:52:23 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:41:55.130 18:52:23 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:41:55.130 18:52:23 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:55.130 18:52:23 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:55.130 18:52:23 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:55.130 18:52:23 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:55.130 18:52:23 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:55.130 18:52:23 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:55.130 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:55.130 18:52:23 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:55.130 18:52:23 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:55.130 18:52:23 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:55.130 18:52:23 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:41:55.130 18:52:23 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:41:55.130 18:52:23 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:41:55.130 18:52:23 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:41:55.130 18:52:23 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:41:55.130 18:52:23 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:41:55.130 18:52:23 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:41:55.130 18:52:23 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:41:55.130 18:52:23 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:41:55.130 18:52:23 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:41:55.130 18:52:23 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:41:55.130 18:52:23 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:41:55.130 18:52:23 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:41:55.130 18:52:23 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:41:55.130 18:52:23 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:41:55.130 18:52:23 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:41:55.130 18:52:23 keyring_linux -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:41:55.130 18:52:23 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:41:55.130 18:52:23 keyring_linux -- nvmf/common.sh@731 -- # python - 00:41:55.130 18:52:23 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:41:55.130 18:52:23 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:41:55.130 /tmp/:spdk-test:key0 00:41:55.130 18:52:23 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:41:55.130 18:52:23 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:41:55.130 18:52:23 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:41:55.130 18:52:23 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:41:55.130 18:52:23 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:41:55.130 18:52:23 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:41:55.130 
18:52:23 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:41:55.130 18:52:23 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:41:55.130 18:52:23 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:41:55.130 18:52:23 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:41:55.130 18:52:23 keyring_linux -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:41:55.130 18:52:23 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:41:55.130 18:52:23 keyring_linux -- nvmf/common.sh@731 -- # python - 00:41:55.130 18:52:23 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:41:55.130 18:52:23 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:41:55.130 /tmp/:spdk-test:key1 00:41:55.130 18:52:23 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1430407 00:41:55.130 18:52:23 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:41:55.130 18:52:23 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1430407 00:41:55.130 18:52:23 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 1430407 ']' 00:41:55.130 18:52:23 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:55.130 18:52:23 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:55.130 18:52:23 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:55.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:55.130 18:52:23 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:55.130 18:52:23 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:41:55.130 [2024-10-08 18:52:23.655299] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
00:41:55.130 [2024-10-08 18:52:23.655409] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1430407 ] 00:41:55.389 [2024-10-08 18:52:23.767574] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:55.648 [2024-10-08 18:52:23.987370] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:41:57.029 18:52:25 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:57.029 18:52:25 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:41:57.029 18:52:25 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:41:57.029 18:52:25 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:57.029 18:52:25 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:41:57.029 [2024-10-08 18:52:25.172540] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:57.029 null0 00:41:57.029 [2024-10-08 18:52:25.206494] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:41:57.029 [2024-10-08 18:52:25.207437] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:41:57.030 18:52:25 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:57.030 18:52:25 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:41:57.030 258849948 00:41:57.030 18:52:25 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:41:57.030 78804450 00:41:57.030 18:52:25 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1430556 00:41:57.030 18:52:25 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:41:57.030 18:52:25 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1430556 /var/tmp/bperf.sock 00:41:57.030 18:52:25 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 1430556 ']' 00:41:57.030 18:52:25 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:41:57.030 18:52:25 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:57.030 18:52:25 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:41:57.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:41:57.030 18:52:25 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:57.030 18:52:25 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:41:57.030 [2024-10-08 18:52:25.334044] Starting SPDK v25.01-pre git sha1 865972bb6 / DPDK 24.03.0 initialization... 
00:41:57.030 [2024-10-08 18:52:25.334125] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1430556 ] 00:41:57.030 [2024-10-08 18:52:25.448809] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:57.289 [2024-10-08 18:52:25.642982] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:41:57.289 18:52:25 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:57.289 18:52:25 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:41:57.289 18:52:25 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:41:57.289 18:52:25 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:41:57.856 18:52:26 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:41:57.856 18:52:26 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:41:58.795 18:52:26 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:41:58.795 18:52:26 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:41:59.053 [2024-10-08 18:52:27.467504] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:41:59.054 nvme0n1 00:41:59.054 18:52:27 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:41:59.054 18:52:27 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:41:59.054 18:52:27 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:41:59.054 18:52:27 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:41:59.054 18:52:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:59.054 18:52:27 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:41:59.620 18:52:27 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:41:59.620 18:52:27 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:41:59.620 18:52:27 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:41:59.620 18:52:27 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:41:59.620 18:52:27 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:59.620 18:52:27 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:41:59.620 18:52:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:59.879 18:52:28 keyring_linux -- keyring/linux.sh@25 -- # sn=258849948 00:41:59.879 18:52:28 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:41:59.879 18:52:28 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:41:59.879 18:52:28 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 258849948 == \2\5\8\8\4\9\9\4\8 ]] 00:41:59.879 18:52:28 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 258849948 00:41:59.879 18:52:28 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:41:59.879 18:52:28 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:42:00.138 Running I/O for 1 seconds... 00:42:01.331 3841.00 IOPS, 15.00 MiB/s 00:42:01.331 Latency(us) 00:42:01.331 [2024-10-08T16:52:29.868Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:01.331 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:42:01.331 nvme0n1 : 1.03 3838.99 15.00 0.00 0.00 32776.35 8204.14 39030.33 00:42:01.331 [2024-10-08T16:52:29.868Z] =================================================================================================================== 00:42:01.331 [2024-10-08T16:52:29.868Z] Total : 3838.99 15.00 0.00 0.00 32776.35 8204.14 39030.33 00:42:01.331 { 00:42:01.331 "results": [ 00:42:01.331 { 00:42:01.331 "job": "nvme0n1", 00:42:01.331 "core_mask": "0x2", 00:42:01.331 "workload": "randread", 00:42:01.331 "status": "finished", 00:42:01.331 "queue_depth": 128, 00:42:01.331 "io_size": 4096, 00:42:01.331 "runtime": 1.034127, 00:42:01.331 "iops": 3838.9868942596026, 00:42:01.331 "mibps": 14.996042555701573, 00:42:01.331 "io_failed": 0, 00:42:01.331 "io_timeout": 0, 00:42:01.331 "avg_latency_us": 32776.34561432969, 00:42:01.331 "min_latency_us": 8204.136296296296, 00:42:01.331 "max_latency_us": 39030.328888888886 00:42:01.331 } 00:42:01.331 ], 00:42:01.331 "core_count": 1 00:42:01.331 } 00:42:01.331 18:52:29 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:42:01.331 18:52:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:42:01.590 18:52:30 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:42:01.590 18:52:30 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:42:01.590 18:52:30 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:42:01.590 18:52:30 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:42:01.590 18:52:30 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:42:01.590 18:52:30 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:02.157 18:52:30 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:42:02.157 18:52:30 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:42:02.157 18:52:30 keyring_linux -- keyring/linux.sh@23 -- # return 00:42:02.157 18:52:30 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:02.157 18:52:30 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:42:02.157 18:52:30 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:42:02.157 18:52:30 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:42:02.157 18:52:30 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:02.157 18:52:30 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:42:02.157 18:52:30 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:02.157 18:52:30 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:02.157 18:52:30 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:03.095 [2024-10-08 18:52:31.277761] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:42:03.095 [2024-10-08 18:52:31.277824] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x727410 (107): Transport endpoint is not connected 00:42:03.095 [2024-10-08 18:52:31.278797] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x727410 (9): Bad file descriptor 00:42:03.095 [2024-10-08 18:52:31.279789] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:42:03.095 [2024-10-08 18:52:31.279838] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:42:03.095 [2024-10-08 18:52:31.279873] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:42:03.095 [2024-10-08 18:52:31.279910] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:42:03.095 request: 00:42:03.095 { 00:42:03.095 "name": "nvme0", 00:42:03.095 "trtype": "tcp", 00:42:03.095 "traddr": "127.0.0.1", 00:42:03.095 "adrfam": "ipv4", 00:42:03.095 "trsvcid": "4420", 00:42:03.095 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:03.095 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:03.095 "prchk_reftag": false, 00:42:03.095 "prchk_guard": false, 00:42:03.095 "hdgst": false, 00:42:03.095 "ddgst": false, 00:42:03.095 "psk": ":spdk-test:key1", 00:42:03.095 "allow_unrecognized_csi": false, 00:42:03.095 "method": "bdev_nvme_attach_controller", 00:42:03.095 "req_id": 1 00:42:03.095 } 00:42:03.095 Got JSON-RPC error response 00:42:03.095 response: 00:42:03.095 { 00:42:03.095 "code": -5, 00:42:03.095 "message": "Input/output error" 00:42:03.095 } 00:42:03.095 18:52:31 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:42:03.095 18:52:31 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:42:03.095 18:52:31 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:42:03.095 18:52:31 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:42:03.095 18:52:31 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:42:03.095 18:52:31 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:42:03.095 18:52:31 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:42:03.095 18:52:31 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:42:03.095 18:52:31 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:42:03.095 18:52:31 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:42:03.095 18:52:31 keyring_linux -- keyring/linux.sh@33 -- # sn=258849948 00:42:03.095 18:52:31 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 258849948 00:42:03.095 1 links removed 00:42:03.095 18:52:31 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:42:03.095 18:52:31 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:42:03.095 18:52:31 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:42:03.095 18:52:31 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:42:03.095 18:52:31 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:42:03.095 18:52:31 keyring_linux -- keyring/linux.sh@33 -- # sn=78804450 00:42:03.095 18:52:31 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 78804450 00:42:03.095 1 links removed 00:42:03.095 18:52:31 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1430556 00:42:03.095 18:52:31 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 1430556 ']' 00:42:03.095 18:52:31 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 1430556 00:42:03.095 18:52:31 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:42:03.095 18:52:31 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:03.095 18:52:31 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1430556 00:42:03.095 18:52:31 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:42:03.095 18:52:31 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:42:03.095 18:52:31 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1430556' 00:42:03.095 killing process with pid 1430556 00:42:03.095 18:52:31 keyring_linux -- common/autotest_common.sh@969 -- # kill 1430556 00:42:03.095 Received shutdown signal, test time was about 1.000000 seconds 00:42:03.095 00:42:03.095 
Latency(us) 00:42:03.095 [2024-10-08T16:52:31.632Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:03.095 [2024-10-08T16:52:31.632Z] =================================================================================================================== 00:42:03.095 [2024-10-08T16:52:31.632Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:03.095 18:52:31 keyring_linux -- common/autotest_common.sh@974 -- # wait 1430556 00:42:03.354 18:52:31 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1430407 00:42:03.354 18:52:31 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 1430407 ']' 00:42:03.354 18:52:31 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 1430407 00:42:03.354 18:52:31 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:42:03.354 18:52:31 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:03.354 18:52:31 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1430407 00:42:03.354 18:52:31 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:42:03.354 18:52:31 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:42:03.354 18:52:31 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1430407' 00:42:03.355 killing process with pid 1430407 00:42:03.355 18:52:31 keyring_linux -- common/autotest_common.sh@969 -- # kill 1430407 00:42:03.355 18:52:31 keyring_linux -- common/autotest_common.sh@974 -- # wait 1430407 00:42:04.293 00:42:04.293 real 0m9.343s 00:42:04.293 user 0m19.158s 00:42:04.293 sys 0m2.461s 00:42:04.293 18:52:32 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:04.293 18:52:32 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:04.293 ************************************ 00:42:04.293 END TEST keyring_linux 00:42:04.293 ************************************ 00:42:04.293 18:52:32 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:42:04.293 18:52:32 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:42:04.293 18:52:32 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:42:04.293 18:52:32 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:42:04.293 18:52:32 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:42:04.293 18:52:32 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:42:04.293 18:52:32 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:42:04.293 18:52:32 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:42:04.293 18:52:32 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:42:04.293 18:52:32 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:42:04.293 18:52:32 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:42:04.293 18:52:32 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:42:04.293 18:52:32 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:42:04.293 18:52:32 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:42:04.293 18:52:32 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:42:04.293 18:52:32 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:42:04.293 18:52:32 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:42:04.293 18:52:32 -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:04.293 18:52:32 -- common/autotest_common.sh@10 -- # set +x 00:42:04.293 18:52:32 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:42:04.293 18:52:32 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:42:04.293 18:52:32 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:42:04.293 18:52:32 -- common/autotest_common.sh@10 -- # set +x 00:42:06.832 INFO: APP EXITING 
00:42:06.832 INFO: killing all VMs 00:42:06.832 INFO: killing vhost app 00:42:06.832 INFO: EXIT DONE 00:42:08.210 0000:82:00.0 (8086 0a54): Already using the nvme driver 00:42:08.210 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:42:08.210 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:42:08.470 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:42:08.470 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:42:08.470 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:42:08.470 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:42:08.470 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:42:08.470 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:42:08.470 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:42:08.470 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:42:08.470 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:42:08.470 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:42:08.470 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:42:08.470 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:42:08.470 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:42:08.470 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:42:10.372 Cleaning 00:42:10.372 Removing: /var/run/dpdk/spdk0/config 00:42:10.372 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:42:10.372 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:42:10.372 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:42:10.373 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:42:10.373 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:42:10.373 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:42:10.373 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:42:10.373 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:42:10.373 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:42:10.373 Removing: /var/run/dpdk/spdk0/hugepage_info 00:42:10.373 Removing: /var/run/dpdk/spdk1/config 00:42:10.373 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:42:10.373 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:42:10.373 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:42:10.373 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:42:10.373 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:42:10.373 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:42:10.373 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:42:10.373 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:42:10.373 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:42:10.373 Removing: /var/run/dpdk/spdk1/hugepage_info 00:42:10.373 Removing: /var/run/dpdk/spdk2/config 00:42:10.373 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:42:10.373 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:42:10.373 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:42:10.373 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:42:10.373 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:42:10.373 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:42:10.373 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:42:10.373 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:42:10.373 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:42:10.373 Removing: /var/run/dpdk/spdk2/hugepage_info 00:42:10.373 Removing: /var/run/dpdk/spdk3/config 00:42:10.373 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:42:10.373 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:42:10.373 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:42:10.373 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:42:10.373 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:42:10.373 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:42:10.373 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:42:10.373 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:42:10.373 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:42:10.373 Removing: /var/run/dpdk/spdk3/hugepage_info 00:42:10.373 Removing: /var/run/dpdk/spdk4/config 00:42:10.373 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:42:10.373 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:42:10.373 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:42:10.373 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:42:10.373 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:42:10.373 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:42:10.373 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:42:10.373 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:42:10.373 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:42:10.373 Removing: /var/run/dpdk/spdk4/hugepage_info 00:42:10.373 Removing: /dev/shm/bdev_svc_trace.1 00:42:10.373 Removing: /dev/shm/nvmf_trace.0 00:42:10.373 Removing: /dev/shm/spdk_tgt_trace.pid1058344 00:42:10.632 Removing: /var/run/dpdk/spdk0 00:42:10.632 Removing: /var/run/dpdk/spdk1 00:42:10.632 Removing: /var/run/dpdk/spdk2 00:42:10.632 Removing: /var/run/dpdk/spdk3 00:42:10.632 Removing: /var/run/dpdk/spdk4 00:42:10.632 Removing: /var/run/dpdk/spdk_pid1056408 00:42:10.632 Removing: /var/run/dpdk/spdk_pid1057331 00:42:10.632 Removing: /var/run/dpdk/spdk_pid1058344 00:42:10.632 Removing: /var/run/dpdk/spdk_pid1058933 00:42:10.632 Removing: /var/run/dpdk/spdk_pid1059679 00:42:10.632 Removing: /var/run/dpdk/spdk_pid1059892 00:42:10.632 Removing: /var/run/dpdk/spdk_pid1060928 00:42:10.632 Removing: /var/run/dpdk/spdk_pid1061249 00:42:10.632 Removing: /var/run/dpdk/spdk_pid1061659 00:42:10.632 Removing: /var/run/dpdk/spdk_pid1063243 00:42:10.632 Removing: /var/run/dpdk/spdk_pid1064297 00:42:10.632 Removing: /var/run/dpdk/spdk_pid1064748 00:42:10.632 Removing: /var/run/dpdk/spdk_pid1065068 00:42:10.632 Removing: /var/run/dpdk/spdk_pid1065390 00:42:10.632 Removing: /var/run/dpdk/spdk_pid1065748 00:42:10.632 Removing: /var/run/dpdk/spdk_pid1065912 00:42:10.632 Removing: /var/run/dpdk/spdk_pid1066135 00:42:10.632 Removing: /var/run/dpdk/spdk_pid1066390 00:42:10.632 Removing: /var/run/dpdk/spdk_pid1066958 00:42:10.632 Removing: /var/run/dpdk/spdk_pid1070128 00:42:10.632 Removing: /var/run/dpdk/spdk_pid1070424 00:42:10.632 Removing: /var/run/dpdk/spdk_pid1070779 00:42:10.632 Removing: /var/run/dpdk/spdk_pid1070978 00:42:10.632 Removing: /var/run/dpdk/spdk_pid1071551 00:42:10.632 Removing: /var/run/dpdk/spdk_pid1071685 00:42:10.632 Removing: /var/run/dpdk/spdk_pid1072374 00:42:10.632 Removing: /var/run/dpdk/spdk_pid1072522 00:42:10.632 Removing: /var/run/dpdk/spdk_pid1072817 00:42:10.632 Removing: /var/run/dpdk/spdk_pid1072961 00:42:10.632 Removing: /var/run/dpdk/spdk_pid1073255 00:42:10.632 Removing: /var/run/dpdk/spdk_pid1073394 00:42:10.632 Removing: /var/run/dpdk/spdk_pid1074020 00:42:10.632 Removing: /var/run/dpdk/spdk_pid1074177 00:42:10.632 Removing: /var/run/dpdk/spdk_pid1074455 00:42:10.632 Removing: 
/var/run/dpdk/spdk_pid1076980 00:42:10.632 Removing: /var/run/dpdk/spdk_pid1079831 00:42:10.632 Removing: /var/run/dpdk/spdk_pid1087453 00:42:10.632 Removing: /var/run/dpdk/spdk_pid1087863 00:42:10.632 Removing: /var/run/dpdk/spdk_pid1091031 00:42:10.632 Removing: /var/run/dpdk/spdk_pid1091197 00:42:10.632 Removing: /var/run/dpdk/spdk_pid1094243 00:42:10.632 Removing: /var/run/dpdk/spdk_pid1098506 00:42:10.632 Removing: /var/run/dpdk/spdk_pid1101514 00:42:10.632 Removing: /var/run/dpdk/spdk_pid1109004 00:42:10.632 Removing: /var/run/dpdk/spdk_pid1114675 00:42:10.632 Removing: /var/run/dpdk/spdk_pid1115990 00:42:10.632 Removing: /var/run/dpdk/spdk_pid1116668 00:42:10.632 Removing: /var/run/dpdk/spdk_pid1128481 00:42:10.632 Removing: /var/run/dpdk/spdk_pid1130934 00:42:10.632 Removing: /var/run/dpdk/spdk_pid1160362 00:42:10.632 Removing: /var/run/dpdk/spdk_pid1163795 00:42:10.632 Removing: /var/run/dpdk/spdk_pid1168431 00:42:10.632 Removing: /var/run/dpdk/spdk_pid1173101 00:42:10.632 Removing: /var/run/dpdk/spdk_pid1173211 00:42:10.632 Removing: /var/run/dpdk/spdk_pid1173753 00:42:10.632 Removing: /var/run/dpdk/spdk_pid1174405 00:42:10.632 Removing: /var/run/dpdk/spdk_pid1174942 00:42:10.632 Removing: /var/run/dpdk/spdk_pid1175343 00:42:10.632 Removing: /var/run/dpdk/spdk_pid1175467 00:42:10.632 Removing: /var/run/dpdk/spdk_pid1175607 00:42:10.632 Removing: /var/run/dpdk/spdk_pid1175746 00:42:10.632 Removing: /var/run/dpdk/spdk_pid1175748 00:42:10.632 Removing: /var/run/dpdk/spdk_pid1176405 00:42:10.632 Removing: /var/run/dpdk/spdk_pid1177066 00:42:10.632 Removing: /var/run/dpdk/spdk_pid1177600 00:42:10.632 Removing: /var/run/dpdk/spdk_pid1177994 00:42:10.892 Removing: /var/run/dpdk/spdk_pid1178109 00:42:10.892 Removing: /var/run/dpdk/spdk_pid1178265 00:42:10.892 Removing: /var/run/dpdk/spdk_pid1179748 00:42:10.892 Removing: /var/run/dpdk/spdk_pid1180780 00:42:10.892 Removing: /var/run/dpdk/spdk_pid1186231 00:42:10.892 Removing: /var/run/dpdk/spdk_pid1232477 00:42:10.892 Removing: /var/run/dpdk/spdk_pid1236064 00:42:10.892 Removing: /var/run/dpdk/spdk_pid1237241 00:42:10.892 Removing: /var/run/dpdk/spdk_pid1238691 00:42:10.892 Removing: /var/run/dpdk/spdk_pid1238929 00:42:10.892 Removing: /var/run/dpdk/spdk_pid1239239 00:42:10.892 Removing: /var/run/dpdk/spdk_pid1239394 00:42:10.892 Removing: /var/run/dpdk/spdk_pid1240163 00:42:10.892 Removing: /var/run/dpdk/spdk_pid1241680 00:42:10.892 Removing: /var/run/dpdk/spdk_pid1243186 00:42:10.892 Removing: /var/run/dpdk/spdk_pid1243877 00:42:10.892 Removing: /var/run/dpdk/spdk_pid1245876 00:42:10.892 Removing: /var/run/dpdk/spdk_pid1246435 00:42:10.892 Removing: /var/run/dpdk/spdk_pid1247003 00:42:10.892 Removing: /var/run/dpdk/spdk_pid1249801 00:42:10.892 Removing: /var/run/dpdk/spdk_pid1253470 00:42:10.892 Removing: /var/run/dpdk/spdk_pid1253471 00:42:10.892 Removing: /var/run/dpdk/spdk_pid1253472 00:42:10.892 Removing: /var/run/dpdk/spdk_pid1255763 00:42:10.892 Removing: /var/run/dpdk/spdk_pid1261475 00:42:10.892 Removing: /var/run/dpdk/spdk_pid1264120 00:42:10.892 Removing: /var/run/dpdk/spdk_pid1268160 00:42:10.893 Removing: /var/run/dpdk/spdk_pid1269226 00:42:10.893 Removing: /var/run/dpdk/spdk_pid1270316 00:42:10.893 Removing: /var/run/dpdk/spdk_pid1271536 00:42:10.893 Removing: /var/run/dpdk/spdk_pid1274625 00:42:10.893 Removing: /var/run/dpdk/spdk_pid1277142 00:42:10.893 Removing: /var/run/dpdk/spdk_pid1281546 00:42:10.893 Removing: /var/run/dpdk/spdk_pid1281663 00:42:10.893 Removing: /var/run/dpdk/spdk_pid1284721 00:42:10.893 Removing: 
/var/run/dpdk/spdk_pid1284969 00:42:10.893 Removing: /var/run/dpdk/spdk_pid1285112 00:42:10.893 Removing: /var/run/dpdk/spdk_pid1285375 00:42:10.893 Removing: /var/run/dpdk/spdk_pid1285395 00:42:10.893 Removing: /var/run/dpdk/spdk_pid1288412 00:42:10.893 Removing: /var/run/dpdk/spdk_pid1288753 00:42:10.893 Removing: /var/run/dpdk/spdk_pid1291581 00:42:10.893 Removing: /var/run/dpdk/spdk_pid1294186 00:42:10.893 Removing: /var/run/dpdk/spdk_pid1298067 00:42:10.893 Removing: /var/run/dpdk/spdk_pid1301736 00:42:10.893 Removing: /var/run/dpdk/spdk_pid1309781 00:42:10.893 Removing: /var/run/dpdk/spdk_pid1314399 00:42:10.893 Removing: /var/run/dpdk/spdk_pid1314402 00:42:10.893 Removing: /var/run/dpdk/spdk_pid1330509 00:42:10.893 Removing: /var/run/dpdk/spdk_pid1331043 00:42:10.893 Removing: /var/run/dpdk/spdk_pid1331571 00:42:10.893 Removing: /var/run/dpdk/spdk_pid1332112 00:42:10.893 Removing: /var/run/dpdk/spdk_pid1333080 00:42:10.893 Removing: /var/run/dpdk/spdk_pid1333616 00:42:10.893 Removing: /var/run/dpdk/spdk_pid1334157 00:42:10.893 Removing: /var/run/dpdk/spdk_pid1334695 00:42:10.893 Removing: /var/run/dpdk/spdk_pid1337464 00:42:10.893 Removing: /var/run/dpdk/spdk_pid1337727 00:42:10.893 Removing: /var/run/dpdk/spdk_pid1341555 00:42:10.893 Removing: /var/run/dpdk/spdk_pid1341735 00:42:10.893 Removing: /var/run/dpdk/spdk_pid1345248 00:42:10.893 Removing: /var/run/dpdk/spdk_pid1348398 00:42:10.893 Removing: /var/run/dpdk/spdk_pid1356325 00:42:10.893 Removing: /var/run/dpdk/spdk_pid1356724 00:42:10.893 Removing: /var/run/dpdk/spdk_pid1359366 00:42:10.893 Removing: /var/run/dpdk/spdk_pid1359525 00:42:10.893 Removing: /var/run/dpdk/spdk_pid1362484 00:42:10.893 Removing: /var/run/dpdk/spdk_pid1366764 00:42:10.893 Removing: /var/run/dpdk/spdk_pid1369705 00:42:10.893 Removing: /var/run/dpdk/spdk_pid1377286 00:42:10.893 Removing: /var/run/dpdk/spdk_pid1382895 00:42:10.893 Removing: /var/run/dpdk/spdk_pid1384110 00:42:10.893 Removing: /var/run/dpdk/spdk_pid1384895 00:42:10.893 Removing: /var/run/dpdk/spdk_pid1396559 00:42:10.893 Removing: /var/run/dpdk/spdk_pid1398929 00:42:11.152 Removing: /var/run/dpdk/spdk_pid1400901 00:42:11.152 Removing: /var/run/dpdk/spdk_pid1406325 00:42:11.152 Removing: /var/run/dpdk/spdk_pid1406334 00:42:11.152 Removing: /var/run/dpdk/spdk_pid1409495 00:42:11.152 Removing: /var/run/dpdk/spdk_pid1410803 00:42:11.152 Removing: /var/run/dpdk/spdk_pid1412182 00:42:11.152 Removing: /var/run/dpdk/spdk_pid1413054 00:42:11.152 Removing: /var/run/dpdk/spdk_pid1414487 00:42:11.152 Removing: /var/run/dpdk/spdk_pid1415370 00:42:11.152 Removing: /var/run/dpdk/spdk_pid1421642 00:42:11.152 Removing: /var/run/dpdk/spdk_pid1422029 00:42:11.152 Removing: /var/run/dpdk/spdk_pid1422425 00:42:11.152 Removing: /var/run/dpdk/spdk_pid1423989 00:42:11.152 Removing: /var/run/dpdk/spdk_pid1424380 00:42:11.152 Removing: /var/run/dpdk/spdk_pid1424780 00:42:11.152 Removing: /var/run/dpdk/spdk_pid1427252 00:42:11.152 Removing: /var/run/dpdk/spdk_pid1427384 00:42:11.152 Removing: /var/run/dpdk/spdk_pid1429786 00:42:11.152 Removing: /var/run/dpdk/spdk_pid1430407 00:42:11.152 Removing: /var/run/dpdk/spdk_pid1430556 00:42:11.152 Clean 00:42:11.152 18:52:39 -- common/autotest_common.sh@1451 -- # return 0 00:42:11.152 18:52:39 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:42:11.152 18:52:39 -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:11.152 18:52:39 -- common/autotest_common.sh@10 -- # set +x 00:42:11.152 18:52:39 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:42:11.152 
18:52:39 -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:11.152 18:52:39 -- common/autotest_common.sh@10 -- # set +x 00:42:11.152 18:52:39 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:42:11.152 18:52:39 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:42:11.152 18:52:39 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:42:11.152 18:52:39 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:42:11.152 18:52:39 -- spdk/autotest.sh@394 -- # hostname 00:42:11.153 18:52:39 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:42:11.721 geninfo: WARNING: invalid characters removed from testname! 00:42:58.386 18:53:25 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:04.946 18:53:32 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:09.182 18:53:37 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:13.373 18:53:41 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:18.656 18:53:46 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:22.857 18:53:51 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc 
genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:28.141 18:53:55 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:43:28.141 18:53:55 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:43:28.141 18:53:55 -- common/autotest_common.sh@1681 -- $ lcov --version 00:43:28.141 18:53:55 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:43:28.141 18:53:56 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:43:28.141 18:53:56 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:43:28.141 18:53:56 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:43:28.141 18:53:56 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:43:28.141 18:53:56 -- scripts/common.sh@336 -- $ IFS=.-: 00:43:28.141 18:53:56 -- scripts/common.sh@336 -- $ read -ra ver1 00:43:28.141 18:53:56 -- scripts/common.sh@337 -- $ IFS=.-: 00:43:28.141 18:53:56 -- scripts/common.sh@337 -- $ read -ra ver2 00:43:28.141 18:53:56 -- scripts/common.sh@338 -- $ local 'op=<' 00:43:28.141 18:53:56 -- scripts/common.sh@340 -- $ ver1_l=2 00:43:28.141 18:53:56 -- scripts/common.sh@341 -- $ ver2_l=1 00:43:28.141 18:53:56 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:43:28.141 18:53:56 -- scripts/common.sh@344 -- $ case "$op" in 00:43:28.141 18:53:56 -- scripts/common.sh@345 -- $ : 1 00:43:28.141 18:53:56 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:43:28.141 18:53:56 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:43:28.141 18:53:56 -- scripts/common.sh@365 -- $ decimal 1 00:43:28.141 18:53:56 -- scripts/common.sh@353 -- $ local d=1 00:43:28.141 18:53:56 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:43:28.141 18:53:56 -- scripts/common.sh@355 -- $ echo 1 00:43:28.141 18:53:56 -- scripts/common.sh@365 -- $ ver1[v]=1 00:43:28.141 18:53:56 -- scripts/common.sh@366 -- $ decimal 2 00:43:28.141 18:53:56 -- scripts/common.sh@353 -- $ local d=2 00:43:28.141 18:53:56 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:43:28.141 18:53:56 -- scripts/common.sh@355 -- $ echo 2 00:43:28.141 18:53:56 -- scripts/common.sh@366 -- $ ver2[v]=2 00:43:28.141 18:53:56 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:43:28.141 18:53:56 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:43:28.141 18:53:56 -- scripts/common.sh@368 -- $ return 0 00:43:28.141 18:53:56 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:28.141 18:53:56 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS= 00:43:28.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:28.141 --rc genhtml_branch_coverage=1 00:43:28.141 --rc genhtml_function_coverage=1 00:43:28.141 --rc genhtml_legend=1 00:43:28.141 --rc geninfo_all_blocks=1 00:43:28.142 --rc geninfo_unexecuted_blocks=1 00:43:28.142 00:43:28.142 ' 00:43:28.142 18:53:56 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS=' 00:43:28.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:28.142 --rc genhtml_branch_coverage=1 00:43:28.142 --rc genhtml_function_coverage=1 00:43:28.142 --rc genhtml_legend=1 00:43:28.142 --rc geninfo_all_blocks=1 00:43:28.142 --rc geninfo_unexecuted_blocks=1 00:43:28.142 00:43:28.142 ' 00:43:28.142 18:53:56 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov 00:43:28.142 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:28.142 --rc genhtml_branch_coverage=1 00:43:28.142 --rc genhtml_function_coverage=1 00:43:28.142 --rc genhtml_legend=1 00:43:28.142 --rc geninfo_all_blocks=1 00:43:28.142 --rc geninfo_unexecuted_blocks=1 00:43:28.142 00:43:28.142 ' 00:43:28.142 18:53:56 -- common/autotest_common.sh@1695 -- $ LCOV='lcov 00:43:28.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:28.142 --rc genhtml_branch_coverage=1 00:43:28.142 --rc genhtml_function_coverage=1 00:43:28.142 --rc genhtml_legend=1 00:43:28.142 --rc geninfo_all_blocks=1 00:43:28.142 --rc geninfo_unexecuted_blocks=1 00:43:28.142 00:43:28.142 ' 00:43:28.142 18:53:56 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:28.142 18:53:56 -- scripts/common.sh@15 -- $ shopt -s extglob 00:43:28.142 18:53:56 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:43:28.142 18:53:56 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:28.142 18:53:56 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:28.142 18:53:56 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:28.142 18:53:56 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:28.142 18:53:56 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:28.142 18:53:56 -- paths/export.sh@5 -- $ export PATH 00:43:28.142 18:53:56 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:28.142 18:53:56 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:43:28.142 18:53:56 -- common/autobuild_common.sh@486 -- $ date +%s 00:43:28.142 18:53:56 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728406436.XXXXXX 00:43:28.142 18:53:56 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728406436.gN0mIu 00:43:28.142 18:53:56 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:43:28.142 18:53:56 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:43:28.142 18:53:56 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:43:28.142 18:53:56 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:43:28.142 18:53:56 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:43:28.142 18:53:56 -- common/autobuild_common.sh@502 -- $ get_config_params 00:43:28.142 18:53:56 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:43:28.142 18:53:56 -- common/autotest_common.sh@10 -- $ set +x 00:43:28.142 18:53:56 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:43:28.142 18:53:56 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:43:28.142 18:53:56 -- pm/common@17 -- $ local monitor 00:43:28.142 18:53:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:43:28.142 18:53:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:43:28.142 18:53:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:43:28.142 18:53:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:43:28.142 18:53:56 -- pm/common@21 -- $ date +%s 00:43:28.142 18:53:56 -- pm/common@25 -- $ sleep 1 00:43:28.142 18:53:56 -- pm/common@21 -- $ date +%s 00:43:28.142 18:53:56 -- pm/common@21 -- $ date +%s 00:43:28.142 18:53:56 -- pm/common@21 -- $ date +%s 00:43:28.142 18:53:56 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728406436 00:43:28.142 18:53:56 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728406436 00:43:28.142 18:53:56 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728406436 00:43:28.142 18:53:56 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728406436 00:43:28.142 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728406436_collect-vmstat.pm.log 00:43:28.142 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728406436_collect-cpu-temp.pm.log 00:43:28.142 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728406436_collect-cpu-load.pm.log 00:43:28.142 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728406436_collect-bmc-pm.bmc.pm.log 00:43:28.712 18:53:57 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:43:28.712 18:53:57 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:43:28.712 18:53:57 -- spdk/autopackage.sh@14 -- $ timing_finish 
00:43:28.712 18:53:57 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:43:28.712 18:53:57 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:43:28.712 18:53:57 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:43:28.971 18:53:57 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:43:28.971 18:53:57 -- pm/common@29 -- $ signal_monitor_resources TERM 00:43:28.971 18:53:57 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:43:28.971 18:53:57 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:43:28.971 18:53:57 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:43:28.971 18:53:57 -- pm/common@44 -- $ pid=1443214 00:43:28.971 18:53:57 -- pm/common@50 -- $ kill -TERM 1443214 00:43:28.971 18:53:57 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:43:28.971 18:53:57 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:43:28.971 18:53:57 -- pm/common@44 -- $ pid=1443216 00:43:28.971 18:53:57 -- pm/common@50 -- $ kill -TERM 1443216 00:43:28.971 18:53:57 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:43:28.971 18:53:57 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:43:28.971 18:53:57 -- pm/common@44 -- $ pid=1443218 00:43:28.971 18:53:57 -- pm/common@50 -- $ kill -TERM 1443218 00:43:28.971 18:53:57 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:43:28.971 18:53:57 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:43:28.971 18:53:57 -- pm/common@44 -- $ pid=1443240 00:43:28.971 18:53:57 -- pm/common@50 -- $ sudo -E kill -TERM 1443240 00:43:28.971 + [[ -n 979723 ]] 00:43:28.971 + sudo kill 979723 00:43:28.982 [Pipeline] } 00:43:28.998 [Pipeline] // stage 00:43:29.004 [Pipeline] } 00:43:29.015 [Pipeline] // timeout 00:43:29.021 [Pipeline] } 00:43:29.036 [Pipeline] // catchError 00:43:29.042 [Pipeline] } 00:43:29.058 [Pipeline] // wrap 00:43:29.066 [Pipeline] } 00:43:29.078 [Pipeline] // catchError 00:43:29.087 [Pipeline] stage 00:43:29.090 [Pipeline] { (Epilogue) 00:43:29.103 [Pipeline] catchError 00:43:29.104 [Pipeline] { 00:43:29.117 [Pipeline] echo 00:43:29.119 Cleanup processes 00:43:29.126 [Pipeline] sh 00:43:29.415 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:43:29.415 1443395 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:43:29.415 1443526 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:43:29.431 [Pipeline] sh 00:43:29.719 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:43:29.719 ++ grep -v 'sudo pgrep' 00:43:29.719 ++ awk '{print $1}' 00:43:29.719 + sudo kill -9 1443395 00:43:29.731 [Pipeline] sh 00:43:30.020 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:43:56.596 [Pipeline] sh 00:43:56.885 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:43:57.145 Artifacts sizes are good 00:43:57.160 [Pipeline] archiveArtifacts 00:43:57.168 Archiving artifacts 00:43:57.371 [Pipeline] sh 00:43:57.696 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:43:57.712 
[Pipeline] cleanWs 00:43:57.722 [WS-CLEANUP] Deleting project workspace... 00:43:57.722 [WS-CLEANUP] Deferred wipeout is used... 00:43:57.729 [WS-CLEANUP] done 00:43:57.731 [Pipeline] } 00:43:57.747 [Pipeline] // catchError 00:43:57.759 [Pipeline] sh 00:43:58.040 + logger -p user.info -t JENKINS-CI 00:43:58.049 [Pipeline] } 00:43:58.062 [Pipeline] // stage 00:43:58.067 [Pipeline] } 00:43:58.080 [Pipeline] // node 00:43:58.085 [Pipeline] End of Pipeline 00:43:58.121 Finished: SUCCESS
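The coverage stage above merges the pre-test baseline capture with the post-test capture and then strips third-party and system paths with repeated lcov -r passes before the filtered cov_total.info is left in the output directory. A minimal standalone sketch of that merge-and-filter flow, assuming lcov is installed and using illustrative file names rather than the job's full workspace paths:

    # combine the baseline and test-time captures into one tracefile
    lcov -q -a cov_base.info -a cov_test.info -o cov_total.info

    # remove records for paths that are not part of the project source
    lcov -q -r cov_total.info '*/dpdk/*' -o cov_total.info
    lcov -q -r cov_total.info '/usr/*' -o cov_total.info

genhtml, which ships with lcov, can then render cov_total.info into an HTML report; that step is not run in this log, which stops at the filtered tracefile.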